All Posts

Sorted by Magic (New & Upvoted)

2020

Frontpage Posts
Shortform [Beta]
23Pablo_Stafforini18d I don't know if there is a designated place to leave comments about the EA Forum, so for the time being I'm posting them here. I think the current homepage [https://forum.effectivealtruism.org/] has a number of problems: * The 'Community Favorites' section keeps listing the same posts over and over again. I don't see the point of having a prominent list of favorite posts on the homepage that changes so little. I suggest expanding the list considerably so that regular visitors can still expect to see novel posts every time they visit the homepage. * [Note: in light of Oli's comment [https://forum.effectivealtruism.org/posts/HfSfZ2ekXaryadBrB/pablo_stafforini-s-shortform#JwhC4dJGvAaiz2ZSB] below, I'm retracting this bullet point.] The 'Latest Posts' section sorts posts neither by karma nor by date; rather, it seems to rely on a hybrid sorting algorithm. I don't think this is useful: as someone who checks the homepage regularly, I want to be able to easily see what the latest posts are, so that when I go down the list and eventually come across a post I have already seen, I can conclude that I have seen all posts after it as well. With the current sorting algorithm, there's no way for me to ensure that my browsing session has exhausted all the posts published since the previous session. * I find it hard to understand the meaning of the 'Community' category. The description says that it consists of "posts with topical content or which relate to the EA community itself". But that description also draws a contrast with "Frontpage posts, which are selected by moderators as especially interesting or useful to people with interest in doing good effectively." This contrast suggests that 'community' posts are simply posts that haven't been curated, as opposed to posts with a focus on the EA community. In other words, there are two separate distinctions here: that between curated vs. non-curated posts, and that betw
11Khorton6d Who should pay the cost of Googling studies on the EA Forum? 1. Many EA Forum posts have minimal engagement with relevant academic literature. 2. If you see a Forum post that doesn't engage with literature you think is relevant, you could make a claim from memory without looking up a citation, but there's a reasonable chance you'll be wrong. 3. Many people say they'd rather see an imperfect post or comment than not have it at all. 4. But people tend to remember an original claim, even if it's later debunked. 5. Maybe the best option is to phrase my comment as a question: "Have you looked at the literature on X?"
10Khorton12d There are some pretty good reasons to keep your identity small. http://www.paulgraham.com/identity.html [http://www.paulgraham.com/identity.html] But I see people using that as an excuse not to identify as... anything. As in, they avoid affiliating themselves with any social movements, sports teams, schools, nation-states, professions, etc. It can be annoying and confusing when you ask someone "are you an EA?" or "are you a Christian?" or "are you British?" and they won't give you a straight answer. It's partly annoying because I'm very rationally trying to make some shortcut assumptions about them (if they're an EA, they've probably heard of malaria) and they're preventing me from doing that. But I also sometimes get the sense that they're trying to protect themselves by not affiliating with a movement, and I find that a bit annoying. I feel like they're free riders. What are they trying to protect themselves from? Effectively, they're protecting their reputation. This could be from an existing negative legacy of the group, e.g. if they don't identify as British (even though they're a British citizen), maybe they can dodge questions about the ongoing negative effects of the British empire. They could also be hedging against future negative reputation, e.g. if I call myself an EA but then someone attempts a military coup in the name of EA, I would look bad. By avoiding declaring yourself a group member, you can sometimes avoid your reputation sinking when your chosen group makes bad choices. Unfortunately, that means that those of us with our reputations on the line are the ones with the most skin in the game to keep people from doing stupid unilateralist things that make everyone in the community look bad. I would prefer it if people took that big scary step of saying they're an EA or Christian or Brit or whatever, and then put in the work to improve their community's reputation.
Obviously open to hearing reasons why people shouldn't identify as members o
9Linch10d I find the unilateralist's curse [https://www.nickbostrom.com/papers/unilateralist.pdf] a particularly valuable concept to think about. However, I now worry that "unilateralist" is an easy label to tack on, and whether a particular action is unilateralist or not is susceptible to small changes in framing. Consider the following hypothetical situations: 1. Company policy vs. team discretion: Alice is a researcher in a team of scientists at a large biomedical company. While working on the development of an HIV vaccine, the team accidentally created an air-transmissible variant of HIV. The scientists must decide whether to share their discovery with the rest of the company, knowing that leaks may exist, and the knowledge may be used to create a devastating biological weapon, but also that it could help those who hope to develop defenses against such weapons, including other teams within the same company. Most of the team thinks they should keep it quiet, but company policy is strict that such information must be shared with the rest of the company to maintain the culture of open collaboration. Alice thinks the rest of the team should either share this information or quit. Eventually, she tells her skip manager her concerns, who relays them to the rest of the company in a company-open document. Alice does not know if this information ever leaked past the company. 2. Stan and the bomb: Stan is an officer in charge of overseeing a new early warning system intended to detect (nuclear) intercontinental ballistic missiles from an enemy country. The warning system appeared to have detected five missiles heading towards his homeland, quickly going through 30 layers of verification. Stan suspects this is a false alarm, but is not sure. Military instructions are clear that such warnings must immediately be relayed upwards. Stan decided not to relay the message to his superiors, on the g
9Khorton12d I'm 60% sure that LessWrong people use the term "Moloch" in almost exactly the same way as social justice people use the term "kyriarchy" (or "capitalist cis-hetero patriarchy"). I might program my browser to replace "Moloch" with "kyriarchy". Might make Christian Twitter confusing though.

2019

Frontpage Posts
Personal Blogposts
Shortform [Beta]
49Max_Daniel1mo [Some of my high-level views on AI risk.] [I wrote this for an application a couple of weeks ago, but thought I might as well dump it here in case someone is interested in my views. / It might sometimes be useful to be able to link to this.] [In this post I generally state what I think before updating on other people's views – i.e., what's sometimes known as 'impressions' as opposed to 'beliefs.' [https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty%23cubpmCn7XJE5FQYEq]] Summary * Transformative AI (TAI) – the prospect of AI having impacts at least as consequential as the Industrial Revolution – would plausibly (~40%) be our best lever for influencing the long-term future if it happened this century, which I consider to be unlikely (~20%) but worth betting on. * The value of TAI depends not just on the technological options available to individual actors, but also on the incentives governing the strategic interdependence between actors. Policy could affect both the amount and quality of technical safety research and the 'rules of the game' under which interactions between actors will play out. Why I'm interested in TAI as a lever to improve the long-run future I expect my perspective to be typical of someone who has become interested in TAI through their engagement with the effective altruism (EA) community. In particular, * My overarching interest is for the lives of as many moral patients as possible to go as well as possible, no matter where or when they live; and * I think that in the world we find ourselves in – though it could have been otherwise – this goal entails strong longtermism [https://forum.effectivealtruism.org/posts/qZyshHCNkjs3TvSem/longtermism], i.e. the claim that "the primary determinant of the value of our actions today is how those actions affect the very long-term future." Less standard but not highly unusual (within EA) high-level views I hold mo
49jpaddison5mo Appreciation post for Saulius I realized recently that the same author [https://forum.effectivealtruism.org/users/saulius] who made the corporate commitments [https://forum.effectivealtruism.org/posts/XdekdWJWkkhur9gvr/will-companies-meet-their-animal-welfare-commitments] post and the misleading cost effectiveness post [https://forum.effectivealtruism.org/posts/zdAst6ezi45cChRi6/list-of-ways-in-which-cost-effectiveness-estimates-can-be] also made all three of these excellent posts on neglected animal welfare concerns that I remembered reading: Fish used as live bait by recreational fishermen [https://forum.effectivealtruism.org/posts/gGiiktK69R2YY7FfG/fish-used-as-live-bait-by-recreational-fishermen] Rodents farmed for pet snake food [https://forum.effectivealtruism.org/posts/pGwR2xc39PMSPa6qv/rodents-farmed-for-pet-snake-food] 35-150 billion fish are raised in captivity to be released into the wild every year [https://forum.effectivealtruism.org/posts/4FSANaX3GvKHnTgbw/35-150-billion-fish-are-raised-in-captivity-to-be-released] For the first, he got this notable comment [https://forum.effectivealtruism.org/posts/gGiiktK69R2YY7FfG/fish-used-as-live-bait-by-recreational-fishermen#FfySjSzLL8YFZpih5] from OpenPhil's Lewis Bollard. An honorable mention goes to this post [https://forum.effectivealtruism.org/posts/SMRHnGXirRNpvB8LJ/fact-checking-comparison-between-trachoma-surgeries-and], which I also remembered, for doing good epistemic work fact-checking a commonly cited comparison.
42Stefan_Schubert3mo The Nobel Prize in Economics [https://www.nobelprize.org/prizes/economic-sciences/2019/summary/] awarded to Abhijit Banerjee, Esther Duflo and Michael Kremer "for their experimental approach to alleviating global poverty".
36Raemon7mo Mid-level EA communities, and cultivating the skill of thinking I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do. Once you've read all the introductory content, but before you're ready to tackle anything really ambitious... what should you do, and what should your local EA community encourage people to do? My sense is that grassroots EA groups default to "discuss the basics; recruit people to give money to GiveWell-esque charities and sometimes weirder things; occasionally run EAGx conferences; give people guidance to help them on their career trajectory." I have varying opinions on those things, but even if they were all good ideas... they leave an unsolved problem: there isn't a very good "bread and butter" activity that you can do repeatedly, that continues to be interesting after you've learned the basics. My current best guess (admittedly untested) is that mid-level EAs and mid-level EA communities should focus on practicing thinking. A corresponding bottleneck is something like "figuring out how to repeatedly have things that are worth thinking about, that are important enough to try hard on, but where it's okay not to do a very good job because you're still learning." I have some preliminary thoughts on how to go about this. Two hypotheses that seem interesting are: * LW/EA-Forum question-answering hackathons (where you pick a currently open question and try to solve it as best you can, whether via literature reviews or first-principles thinking) * Updating the Cause Prioritization wiki (either this one [https://causeprioritization.org/Forecasting] or this one [https://priority.wiki/]; I'm not sure if either of them has become the Schelling one), and meanwhile posting those updates as EA Forum blogposts. I'm interested in chatting with local community organizers about it, and with established researchers who have ideas about how to make this the most productive vers
34Max_Daniel1mo What's the right narrative about global poverty and progress? Link dump of a recent debate. The two opposing views are: (a) "New optimism:" [1] This is broadly the view that, over the last couple of hundred years, the world has been getting significantly better, and that's great. [2] In particular, extreme poverty has declined dramatically, and most other welfare-relevant indicators have improved a lot. Often, these effects are largely attributed to economic growth. * Proponents in this debate were originally Bill Gates, Steven Pinker, and Max Roser. But my loose impression is that the view is shared much more widely. * In particular, it seems to be the orthodox view in EA; cf. e.g. Muehlhauser listing one of Pinker's books in his My worldview in 5 books [https://lukemuehlhauser.com/my-worldview-in-5-books/] post, saying that "Almost everything has gotten dramatically better for humans over the past few centuries, likely substantially due to the spread and application of reason, science, and humanism." (b) Hickel's critique: Anthropologist Jason Hickel has criticized new optimism on two grounds: * 1. Hickel has questioned the validity of some of the core data used by new optimists, claiming e.g. that "real data on poverty has only been collected since 1981. Anything before that is extremely sketchy, and to go back as far as 1820 is meaningless." * 2. Hickel prefers to look at different indicators than the new optimists do. For example, he has argued for different operationalizations of extreme poverty or inequality. Link dump (not necessarily comprehensive) If you only read two things, I'd recommend (1) Hasell's and Roser's article [https://ourworldindata.org/extreme-history-methods] explaining where the data on historic poverty comes from and (2) the take by economic historian Branko Milanovic [https://www.globalpolicyjournal.com/blog/11/02/2019/global-poverty-over-long-term-legitimate-issues]. By Hickel (i.e. against "
