All Posts

Sorted by Magic (New & Upvoted)

February 2020

Shortform [Beta]
22 · Max_Daniel · 3d · [Is longtermism bottlenecked by "great people"?] Someone very influential in EA recently claimed in conversation with me that there are many tasks X such that (i) we currently don't have anyone in the EA community who can do X, (ii) the bottleneck for this isn't credentials or experience or knowledge but person-internal talent, and (iii) it would be very valuable (specifically from a longtermist point of view) if we could do X. And that therefore what we most need in EA are more "great people". I find this extremely dubious. (In fact, it seems so crazy to me that it's more likely than not that I significantly misunderstood the person who I think made these claims.) The first claim is of course vacuously true if, for X, we choose some ~impossible task such as "experience a utility-monster amount of pleasure" or "come up with a blueprint for how to build safe AGI that is convincing to benign actors able to execute it". But of course more great people don't help with solving impossible tasks. Given the size and talent distribution of the EA community, my guess is that for most apparent X, the issue is either that (a) X is ~impossible, (b) there are people in EA who could do X, but the relevant actors cannot identify them, or (c) acquiring the ability to do X is costly (e.g. perhaps you need time to acquire domain-specific expertise), even for maximally talented "great people", and the relevant actors either are unable to help pay that cost (e.g. by training people themselves, or giving them the resources to allow them to get training elsewhere) or make a mistake by not doing so. My best guess for the genesis of the "we need more great people" perspective: Suppose I talk a lot to people at an organization that thinks there's a decent chance we'll develop transformative AI soon but it will go badly, and that as a consequence tries to grow as fast as possible to pursue various ambitious activities which they think reduce that risk. If these activities are scalable
8 · Aaron Gertler · 11d · Another brief note on usernames:

Epistemic status: Moderately confident that this is mildly valuable.

It's totally fine to use a pseudonym on the Forum. However, if you chose a pseudonym for a reason other than "I actively want to not be identifiable" (e.g. "I copied over my Reddit username without giving it too much thought"), I recommend using your real name on the Forum. If you want to change your name, just PM or email me with your current username and the one you'd like to use. Reasons to do this:

* Real names make it easier for someone to track your writing/ideas across multiple platforms ("Where have I seen this name before? Oh, yeah! I had a good Facebook exchange with them last year.")
* There's a higher chance that people will recognize you at meetups, conferences, etc. This leads to more good conversations!
* Aesthetically, I think it's nice if the Forum feels like an extension of the real world where people discuss ways to improve that world. Real names help with that.
* "Joe, Sarah, and Vijay are discussing how to run a good conference" has a different feel than "fluttershy_forever, UtilityMonster, and AnonymousEA64 are discussing how to run a good conference".

Some of these reasons won't apply if you have a well-known pseudonym you've used for a while, but I still think using a real name is worth considering.
7 · evelynciara · 12d · I think improving bus systems in the United States (and probably other countries) could be a plausible Cause X.

Importance: Improving bus service would:
* Increase economic output in cities
* Dramatically improve quality of life for low-income residents
* Reduce cities' carbon footprint, air pollution, and traffic congestion

Neglectedness: City buses probably don't get much attention because most people don't think very highly of them, and focus much more on novel transportation technologies like electric vehicles.

Tractability: According to Higashide, improving bus systems is a matter of improving how they are governed. Right now, I think a nationwide movement to improve bus transit would be less polarizing than the YIMBY movement has been. While YIMBYism has earned a reputation as elitist due to some of its early advocates' mistakes, a pro-bus movement could be seen as aligned with the interests of low-income city dwellers, provided that it gets the messaging right from the beginning. Also, bus systems are less costly to roll out, upgrade, and alter than other public transportation options like trains.
5 · Nathan Young · 13d · Does anyone know people working on reforming the academic publishing process? Coronavirus has caused journalists to look for scientific sources. There are no journal articles because of the lag time, so they have gone to preprint servers like bioRxiv (pronounced "bio-archive"). These servers are not peer-reviewed, so some articles are of low quality. As a result, people have gone to Twitter asking for experts to review the papers. This is effectively a new academic publishing paradigm. If there were support for good papers (somehow), you would have the key elements of a new, perhaps better system. With Coronavirus providing a lot of impetus for change, those working in this area could find this an important time to increase the visibility of their work.
4 · EdoArad · 8d · MIT has a new master's program in Development Economics. It is taught by Esther Duflo and Abhijit Banerjee, the recent Nobel laureates. Seems cool :)

January 2020

Shortform [Beta]
23 · Pablo_Stafforini · 1mo · I don't know if there is a designated place to leave comments about the EA Forum, so for the time being I'm posting them here. I think the current homepage has a number of problems:

* The 'Community Favorites' section keeps listing the same posts over and over again. I don't see the point of having a prominent list of favorite posts on the home page that changes so little. I suggest expanding the list considerably so that regular visitors can still expect to see novel posts every time they visit the homepage.
* [Note: in light of Oli's comment below, I'm retracting this bullet point.] The 'Latest Posts' section sorts posts neither by karma nor by date; rather, it seems to rely on a hybrid sorting algorithm. I don't think this is useful: as someone who checks the home page regularly, I want to be able to easily see what the latest posts are, so that when I go down the list and eventually come across a post I have already seen, I can conclude that I have seen all posts after it as well. With the current sorting algorithm, there's no way for me to ensure that my browsing session has exhausted all the posts seen since the previous session.
* I find it hard to understand the meaning of the 'Community' category. The description says that it consists of "posts with topical content or which relate to the EA community itself". But that description also draws a contrast to "Frontpage posts, which are selected by moderators as especially interesting or useful to people with interest in doing good effectively." This contrast suggests that 'community' posts are simply posts that haven't been curated, as opposed to posts with a focus on the EA community. In other words, there are two separate distinctions here: that between curated vs. non-curated posts, and that betw
16 · Linch · 18d · Cross-posted from Facebook. Reading Bryan Caplan and Zach Weinersmith's new book has made me somewhat more skeptical about open borders (from a high prior belief in its value). Before reading the book, I was already aware of the core arguments (e.g., Michael Huemer's right to immigrate, basic cosmopolitanism, some vague economic stuff about doubling GDP). I was hoping the book would have more arguments, or stronger versions of the arguments I'm familiar with. It mostly did not. The book did convince me that the prima facie case for open borders was stronger than I thought. In particular, the section where he argued that a bunch of different normative ethical theories should, all else equal, lead to open borders was moderately convincing. It would have updated me towards open borders if I believed in stronger "weight all mainstream ethical theories equally" moral uncertainty, or if I had previously held a strong belief in a moral theory that I believed was against open borders. However, I already fairly strongly subscribe to cosmopolitan utilitarianism and see no problem with aggregating utility across borders. Most of my concerns with open borders are related to Chesterton's fence, and Caplan's counterarguments took three forms:

1. Doubling GDP is so massive that it should override any conservatism prior.
2. The US historically had open borders (pre-1900) and it did fine.
3. On the margin, increasing immigration in all the American data Caplan looked at didn't seem to have the catastrophic cultural/institutional effects that naysayers claim.

I find this insufficiently persuasive. Let me outline the strongest case I'm aware of against open borders: Countries are mostly not rich and stable because of their physical resources, or because of the arbitrary nature of national boundaries. They're rich because of institutions and good governance. (I think this is a fairly mainstream belief among political economists.) These institutions are, again, ev
13 · Khorton · 1mo · Who should pay the cost of Googling studies on the EA Forum?

1. Many EA Forum posts have minimal engagement with relevant academic literature.
2. If you see a Forum post that doesn't engage with literature you think is relevant, you could make a claim from memory without looking up a citation, but there's a reasonable chance you'll be wrong.
3. Many people say they'd rather see an imperfect post or comment than not have it at all.
4. But people tend to remember an original claim, even if it's later debunked.
5. Maybe the best option is to phrase my comment as a question: "Have you looked at the literature on X?"
11 · Khorton · 1mo · I'm 60% sure that LessWrong people use the term "Moloch" in almost exactly the same way as social justice people use the term "kyriarchy" (or "capitalist cis-hetero patriarchy"). I might program my browser to replace "Moloch" with "kyriarchy". Might make Christian Twitter confusing, though.
10 · Khorton · 1mo · There are some pretty good reasons to keep your identity small. But I see people using that as an excuse to not identify as... anything. As in, they avoid affiliating themselves with any social movements, sports teams, schools, nation-states, professions, etc. It can be annoying and confusing when you ask someone "Are you an EA?" or "Are you a Christian?" or "Are you British?" and they won't give you a straight answer. It's partly annoying because I'm very rationally trying to make some shortcut assumptions about them (if they're an EA, they've probably heard of malaria) and they're preventing me from doing that. But I also sometimes get the sense that they're trying to protect themselves by not affiliating with a movement, and I find that a bit annoying. I feel like they're a free rider. What are they trying to protect themselves from? Effectively, they're protecting their reputation. This could be from an existing negative legacy of the group. E.g., if they don't identify as British (even though they're a British citizen), maybe they can dodge questions about the ongoing negative effects of the British empire. They could also be hedging against future negative reputation. E.g., if I call myself an EA but then someone attempts a military coup in the name of EA, I would look bad. By avoiding declaring yourself a group member, you can sometimes avoid your reputation sinking when your chosen group makes bad choices. Unfortunately, that means that those of us with our reputations on the line are the ones who have the most skin in the game to keep people from doing stupid unilateralist things that make everyone in the community look bad. I would prefer it if people would take that big scary step of saying they're an EA or Christian or Brit or whatever, and then put in the work to improve their community's reputation. Obviously open to hearing reasons why people shouldn't identify as members o

December 2019

Shortform [Beta]
50 · Max_Daniel · 2mo · [Some of my high-level views on AI risk.] [I wrote this for an application a couple of weeks ago, but thought I might as well dump it here in case someone was interested in my views. It might sometimes be useful to be able to link to this.] [In this post I generally state what I think before updating on other people's views – i.e., what's sometimes known as 'impressions' as opposed to 'beliefs.']

Summary
* Transformative AI (TAI) – the prospect of AI having impacts at least as consequential as the Industrial Revolution – would plausibly (~40%) be our best lever for influencing the long-term future if it happened this century, which I consider to be unlikely (~20%) but worth betting on.
* The value of TAI depends not just on the technological options available to individual actors, but also on the incentives governing the strategic interdependence between actors. Policy could affect both the amount and quality of technical safety research and the 'rules of the game' under which interactions between actors will play out.

Why I'm interested in TAI as a lever to improve the long-run future

I expect my perspective to be typical of someone who has become interested in TAI through their engagement with the effective altruism (EA) community. In particular:
* My overarching interest is to make the lives of as many moral patients as possible go as well as possible, no matter where or when they live; and
* I think that in the world we find ourselves in (it could have been otherwise), this goal entails strong longtermism, i.e. the claim that "the primary determinant of the value of our actions today is how those actions affect the very long-term future."

Less standard but not highly unusual (within EA) high-level views I hold mo
34 · Max_Daniel · 2mo · What's the right narrative about global poverty and progress? Link dump of a recent debate. The two opposing views are:

(a) "New optimism": [1] This is broadly the view that, over the last couple of hundred years, the world has been getting significantly better, and that's great. [2] In particular, extreme poverty has declined dramatically, and most other welfare-relevant indicators have improved a lot. Often, these effects are largely attributed to economic growth.
* Proponents in this debate were originally Bill Gates, Steven Pinker, and Max Roser. But my loose impression is that the view is shared much more widely.
* In particular, it seems to be the orthodox view in EA; cf. e.g. Muehlhauser listing one of Pinker's books in his My worldview in 5 books post, saying that "Almost everything has gotten dramatically better for humans over the past few centuries, likely substantially due to the spread and application of reason, science, and humanism."

(b) Hickel's critique: Anthropologist Jason Hickel has criticized new optimism on two grounds:
* 1. Hickel has questioned the validity of some of the core data used by new optimists, claiming e.g. that "real data on poverty has only been collected since 1981. Anything before that is extremely sketchy, and to go back as far as 1820 is meaningless."
* 2. Hickel prefers to look at different indicators than the new optimists. For example, he has argued for different operationalizations of extreme poverty or inequality.

Link dump (not necessarily comprehensive): If you only read two things, I'd recommend (1) Hasell's and Roser's article explaining where the data on historic poverty comes from and (2) the take by economic historian Branko Milanovic. By Hickel (i.e. against "
15 · Wei_Dai · 2mo · A post that I wrote on LW that is also relevant to EA: What determines the balance between intelligence signaling and virtue signaling?
12 · Khorton · 2mo · Question to look into later: How has the EA community affected the charities it has donated to over the past decade?
7 · Ramiro · 2mo · Why don't we have more advice on / mentions of donating through a last will, like Effective Legacy? Is it too obvious? Or absurd? All other cases of someone discussing charity and wills were about the dilemma "give now vs. (invest and) give post mortem". But we can expect that even GWWC pledgers save something for retirement or emergencies; so why not bequeath a part of it to the most effective charities, too? Besides, this may attract non-pledgers equally: even if you're not willing to sacrifice a portion of your consumption for the sake of the greater good, why not give those retirement savings, in case you die before spending it all? Of course, I'm not saying this would be super-effective; but it might be a low-hanging fruit. Has anyone explored this path?

November 2019

Shortform [Beta]
21 · JP Addison · 3mo · The new Forum turns 1 year old today. 🎵Happy Birthday to us 🎶
16 · EdoArad · 3mo · AMF's cost of nets is decreasing over time due to economies of scale and competition between net manufacturers.
8 · Davidmanheim · 3mo · Precommitting based on survey outcomes for the proposed "Giving Effectively Israel". (Epistemic status: public statement for future reference.) We're planning to field a survey about current giving, and potentially funding an Israeli-tax-deductible organization. The first question is whether there is sufficient demand to make funding the program worthwhile. Setting this up involves a fair amount of upfront costs for lawyers to ensure that this is entirely above board. It seems worthwhile to try to engage a relatively prestigious / respected firm, to ensure that this is done correctly. There is a risk that we find out that they don't think it's possible to do (subjective estimate: 25%), in which case we would stop the project, hopefully having spent less than the expected full cost. My upfront claim is that this would be worthwhile to seek funding for if the cost of a lawyer and setting up the nonprofit is less than 25% of the expected tax savings to EAs over the next 3 years, as inferred from the survey.
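The precommitment above amounts to a simple threshold rule. A minimal sketch of that rule, where only the 25% cost threshold and the 3-year horizon come from the post; the function name and example figures are illustrative assumptions:

```python
def worth_seeking_funding(setup_cost, expected_annual_tax_saving,
                          years=3, threshold=0.25):
    """Hedged sketch of the stated precommitment: seek funding only if
    legal/setup costs come in under the given fraction of expected
    tax savings to EAs over the horizon. Numbers other than the 25%
    threshold and 3-year horizon are not from the post."""
    expected_total_saving = expected_annual_tax_saving * years
    return setup_cost < threshold * expected_total_saving

# Illustrative figures only: $20k of setup costs against $40k/year of
# expected tax savings clears the bar (0.25 * 40,000 * 3 = 30,000 > 20,000);
# $40k of setup costs against the same savings does not.
```

Note that this rule ignores the 25% chance that lawyers declare the project infeasible; under the post's framing, that risk is handled by stopping early rather than by discounting the threshold.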
8 · EdoArad · 3mo · How about an option to transfer karma directly to posts/comments? Perhaps the transfer could be public (part of the information in the karma of the comment). This may allow some interesting "trades", such as giving prizes for answers (say, as on Stack Exchange) or having people display stronger support for a comment. Damn... as stated, when people can pay to put karma in posts, there is a problematic "attack" against it. Left as an exercise :) I still think that karma transfers between people and prizes on comments/posts could be very interesting.
7 · EdoArad · 3mo · Statisticians Without Borders is a volunteer Outreach Group of the American Statistical Association that provides pro bono services in statistics and data science. Their focus is mostly on developing countries. They have about 800 volunteers. Their Executive Committee consists of volunteers democratically elected from within the volunteer community every two years.
