All Posts

Sorted by Magic (New & Upvoted)

January 2020

Shortform [Beta]
23 · Pablo_Stafforini · 18d

I don't know if there is a designated place to leave comments about the EA Forum, so for the time being I'm posting them here. I think the current homepage [https://forum.effectivealtruism.org/] has a number of problems:

* The 'Community Favorites' section keeps listing the same posts over and over again. I don't see the point of having a prominent list of favorite posts on the homepage that changes so little. I suggest expanding the list considerably, so that regular visitors can still expect to see novel posts every time they visit the homepage.
* [Note: in light of Oli's comment [https://forum.effectivealtruism.org/posts/HfSfZ2ekXaryadBrB/pablo_stafforini-s-shortform#JwhC4dJGvAaiz2ZSB] below, I'm retracting this bullet point.] The 'Latest Posts' section sorts posts neither by karma nor by date; rather, it seems to rely on a hybrid sorting algorithm. I don't think this is useful: as someone who checks the homepage regularly, I want to be able to easily see what the latest posts are, so that when I go down the list and eventually come across a post I have already seen, I can conclude that I have seen all the posts after it as well. With the current sorting algorithm, there's no way for me to ensure that my browsing session has exhausted all the posts published since the previous session.
* I find it hard to understand the meaning of the 'Community' category. The description says that it consists of "posts with topical content or which relate to the EA community itself". But that description also draws a contrast with "Frontpage posts, which are selected by moderators as especially interesting or useful to people with interest in doing good effectively." This contrast suggests that 'Community' posts are simply posts that haven't been curated, as opposed to posts with a focus on the EA community. In other words, there are two separate distinctions here: that between curated and non-curated posts, and that between posts about the EA community and posts about other topics.
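[For readers curious what such a hybrid sort can look like: the sketch below shows the common Hacker-News-style approach of dividing karma by a power of a post's age. This is an illustration of the general "karma blended with recency" technique, not the Forum's actual 'Magic' algorithm; the `gravity` exponent and the two-hour offset are illustrative guesses.]

```typescript
// A common HN-style hybrid score: higher karma pushes a post up, age pulls
// it down. NOT the EA Forum's actual algorithm -- just a sketch of the
// "karma blended with recency" idea the comment describes.

interface Post {
  title: string;
  karma: number;
  postedAt: Date; // publication time
}

// gravity > 0 controls how fast old posts decay; 1.8 is a typical guess.
function hybridScore(post: Post, gravity = 1.8, now = new Date()): number {
  const ageHours = (now.getTime() - post.postedAt.getTime()) / 3_600_000;
  return post.karma / Math.pow(ageHours + 2, gravity);
}

// Sorting by this score interleaves new low-karma posts with older
// high-karma ones -- which is exactly why a reader cannot treat the
// resulting list as strictly chronological.
function rankPosts(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => hybridScore(b) - hybridScore(a));
}
```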
11 · Khorton · 6d

Who should pay the cost of Googling studies on the EA Forum?

1. Many EA Forum posts have minimal engagement with relevant academic literature.
2. If you see a Forum post that doesn't engage with literature you think is relevant, you could make a claim from memory without looking up a citation, but there's a reasonable chance you'll be wrong.
3. Many people say they'd rather see an imperfect post or comment than not have it at all.
4. But people tend to remember an original claim, even if it's later debunked.
5. Maybe the best option is to phrase my comment as a question: "Have you looked at the literature on X?"
10 · Khorton · 12d

There are some pretty good reasons to keep your identity small: http://www.paulgraham.com/identity.html

But I see people using that as an excuse not to identify as... anything. As in, they avoid affiliating themselves with any social movements, sports teams, schools, nation-states, professions, etc. It can be annoying and confusing when you ask someone "Are you an EA?" or "Are you a Christian?" or "Are you British?" and they won't give you a straight answer. It's partly annoying because I'm very rationally trying to make some shortcut assumptions about them (if they're an EA, they've probably heard of malaria) and they're preventing me from doing that. But I also sometimes get the sense that they're trying to protect themselves by not affiliating with a movement, and I find that a bit annoying: I feel like they're free riders.

What are they trying to protect themselves from? Effectively, they're protecting their reputation. This could be from an existing negative legacy of the group: e.g. if they don't identify as British (even though they're a British citizen), maybe they can dodge questions about the ongoing negative effects of the British Empire. They could also be hedging against future negative reputation: e.g. if I call myself an EA but then someone attempts a military coup in the name of EA, I would look bad. By avoiding declaring yourself a group member, you can sometimes avoid your reputation sinking when your chosen group makes bad choices.

Unfortunately, that means that those of us with our reputations on the line are the ones with the most skin in the game to keep people from doing stupid unilateralist things that make everyone in the community look bad. I would prefer it if people would take that big scary step of saying they're an EA or a Christian or a Brit or whatever, and then put in the work to improve their community's reputation. Obviously I'm open to hearing reasons why people shouldn't identify as members of groups.
9 · Linch · 10d

I find the unilateralist's curse [https://www.nickbostrom.com/papers/unilateralist.pdf] a particularly valuable concept to think about. However, I now worry that "unilateralist" is an easy label to tack on, and that whether a particular action counts as unilateralist is susceptible to small changes in framing. Consider the following hypothetical situations:

1. Company policy vs. team discretion

Alice is a researcher in a team of scientists at a large biomedical company. While working on the development of an HIV vaccine, the team accidentally created an air-transmissible variant of HIV. The scientists must decide whether to share their discovery with the rest of the company, knowing that leaks may exist, and the knowledge may be used to create a devastating biological weapon, but also that it could help those who hope to develop defenses against such weapons, including other teams within the same company. Most of the team thinks they should keep it quiet, but company policy is strict that such information must be shared with the rest of the company to maintain the culture of open collaboration. Alice thinks the rest of the team should either share this information or quit. Eventually, she told her skip-level manager her concerns, who relayed them to the rest of the company in a company-open document. Alice does not know if this information ever leaked past the company.

2. Stan and the bomb

Stan is an officer in charge of overseeing a new early-warning system intended to detect (nuclear) intercontinental ballistic missiles from an enemy country. The warning system appeared to have detected five missiles heading towards his homeland, quickly going through 30 layers of verification. Stan suspects this is a false alarm, but is not sure. Military instructions are clear that such warnings must immediately be relayed upwards. Stan decided not to relay the message to his superiors, on the grounds that it was probably a false alarm.
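[The linked paper models the curse roughly as: N agents each receive a noisy estimate of an action's true value, and the action happens if any one of them judges it positive. The Monte Carlo sketch below is my own illustration under assumed parameters (normally distributed errors, true value −1, noise SD 1), not a reproduction of the paper's analysis. Linch's point about framing corresponds to redrawing what counts as "one agent" here, which changes n and hence the curse.]

```typescript
// Monte Carlo illustration of the unilateralist's curse: an action with
// true value `trueValue` (< 0, i.e. harmful) is taken if ANY of n agents'
// noisy estimates of its value comes out positive.

// Standard-normal sample via the Box-Muller transform.
function gaussian(): number {
  const u = Math.random() || Number.MIN_VALUE; // avoid log(0)
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

function pActionTaken(
  n: number,
  trueValue = -1, // assumed: the action is in fact harmful
  noiseSd = 1,    // assumed: spread of each agent's estimation error
  trials = 100_000
): number {
  let taken = 0;
  for (let t = 0; t < trials; t++) {
    // The action happens if at least one agent's estimate is positive.
    for (let i = 0; i < n; i++) {
      if (trueValue + noiseSd * gaussian() > 0) {
        taken++;
        break;
      }
    }
  }
  return taken / trials;
}

// More independent deciders => more chances for one over-optimistic
// estimate => the harmful action becomes ever more likely.
for (const n of [1, 2, 5, 10, 20]) {
  console.log(`n=${n}: P(action) ≈ ${pActionTaken(n).toFixed(3)}`);
}
```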
9 · Khorton · 12d

I'm 60% sure that LessWrong people use the term "Moloch" in almost exactly the same way as social justice people use the term "kyriarchy" (or "capitalist cis-hetero patriarchy"). I might program my browser to replace "Moloch" with "kyriarchy". Might make Christian Twitter confusing, though.
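[For what it's worth, the browser replacement is a few lines as a Tampermonkey-style userscript. This is a minimal sketch of the generic text-node-walk technique; only the word pair comes from the comment.]

```typescript
// ==UserScript==
// @name     Moloch -> kyriarchy
// @match    *://*/*
// ==/UserScript==
// Minimal text-replacement userscript: walks every text node on the page
// and swaps the term. A sketch, not a hardened browser extension.

const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
let node: Node | null;
while ((node = walker.nextNode())) {
  node.textContent = (node.textContent ?? "").replace(/\bMoloch\b/g, "kyriarchy");
}
```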

December 2019

Shortform [Beta]
49 · Max_Daniel · 1mo

[Some of my high-level views on AI risk.]

[I wrote this for an application a couple of weeks ago, but thought I might as well dump it here in case someone is interested in my views. It might sometimes be useful to be able to link to this.]

[In this post I generally state what I think before updating on other people's views – i.e., what's sometimes known as 'impressions' as opposed to 'beliefs' [https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty%23cubpmCn7XJE5FQYEq].]

Summary

* Transformative AI (TAI) – the prospect of AI having impacts at least as consequential as the Industrial Revolution – would plausibly (~40%) be our best lever for influencing the long-term future if it happened this century, which I consider unlikely (~20%) but worth betting on.
* The value of TAI depends not just on the technological options available to individual actors, but also on the incentives governing the strategic interdependence between actors. Policy could affect both the amount and quality of technical safety research and the 'rules of the game' under which interactions between actors will play out.

Why I'm interested in TAI as a lever to improve the long-run future

I expect my perspective to be typical of someone who has become interested in TAI through their engagement with the effective altruism (EA) community. In particular:

* My overarching interest is for the lives of as many moral patients as possible to go as well as possible, no matter where or when they live; and
* I think that in the world we find ourselves in – it could have been otherwise – this goal entails strong longtermism [https://forum.effectivealtruism.org/posts/qZyshHCNkjs3TvSem/longtermism], i.e. the claim that "the primary determinant of the value of our actions today is how those actions affect the very long-term future."

Less standard but not highly unusual (within EA) high-level views I hold …
34 · Max_Daniel · 1mo

What's the right narrative about global poverty and progress? Link dump of a recent debate.

The two opposing views are:

(a) "New optimism:" [1] This is broadly the view that, over the last couple of hundred years, the world has been getting significantly better, and that's great. [2] In particular, extreme poverty has declined dramatically, and most other welfare-relevant indicators have improved a lot. Often, these effects are largely attributed to economic growth.

* Proponents in this debate were originally Bill Gates, Steven Pinker, and Max Roser. But my loose impression is that the view is shared much more widely.
* In particular, it seems to be the orthodox view in EA; cf. e.g. Muehlhauser listing one of Pinker's books in his My worldview in 5 books [https://lukemuehlhauser.com/my-worldview-in-5-books/] post, saying that "Almost everything has gotten dramatically better for humans over the past few centuries, likely substantially due to the spread and application of reason, science, and humanism."

(b) Hickel's critique: Anthropologist Jason Hickel has criticized new optimism on two grounds:

1. Hickel has questioned the validity of some of the core data used by new optimists, claiming e.g. that "real data on poverty has only been collected since 1981. Anything before that is extremely sketchy, and to go back as far as 1820 is meaningless."
2. Hickel prefers to look at different indicators than the new optimists. For example, he has argued for different operationalizations of extreme poverty or inequality.

Link dump (not necessarily comprehensive)

If you only read two things, I'd recommend (1) Hasell's and Roser's article [https://ourworldindata.org/extreme-history-methods] explaining where the data on historic poverty comes from and (2) the take by economic historian Branko Milanovic [https://www.globalpolicyjournal.com/blog/11/02/2019/global-poverty-over-long-term-legitimate-issues].

By Hickel (i.e. against "new optimism"): …
15 · Wei_Dai · 2mo

A post that I wrote on LW that is also relevant to EA: What determines the balance between intelligence signaling and virtue signaling? [https://www.greaterwrong.com/posts/vA2Gd2PQjNk68ngFu/what-determines-the-balance-between-intelligence-signaling]
12 · Khorton · 1mo

Question to look into later: How has the EA community affected the charities it has donated to over the past decade?
7 · Ramiro · 1mo

Why don't we see more advice about (or mentions of) donating through a last will – like Effective Legacy [https://www.charityscience.com/create-a-last-will.html]? Is it too obvious? Or absurd? All the other cases I've seen of someone discussing charity and wills were about the dilemma "give now vs. invest and give post mortem". But we can expect that even GWWC pledgers save something for retirement or emergencies; so why not bequeath a part of it to the most effective charities, too? Besides, this may equally attract non-pledgers: even if you're not willing to sacrifice a portion of your consumption for the sake of the greater good, why not your retirement savings, in case you die before spending them all? Of course, I'm not saying this would be super-effective; but it might be a low-hanging fruit. Has anyone explored this path?

November 2019

Shortform [Beta]
20 · jpaddison · 3mo

The new Forum turns 1 year old today. 🎵Happy Birthday to us 🎶
16 · EdoArad · 2mo

AMF's cost of nets is decreasing over time due to economies of scale and competition between net manufacturers: https://www.againstmalaria.com/DollarsPerNet.aspx
8 · EdoArad · 2mo

How about an option to transfer karma directly to posts/comments? Perhaps the transfer could be public (part of the information in the karma of the comment). This may allow some interesting "trades", such as offering prizes for answers (as on Stack Exchange) or letting people display stronger support for a comment.

Damn... As stated, when people can pay to put karma in posts, there is a problematic "attack" against it. Left as an exercise. :) I still think that karma transfers between people and prizes on comments/posts could be very interesting.
7 · EdoArad · 2mo

Statisticians Without Borders [https://swb.wildapricot.org/about_us] is a volunteer Outreach Group of the American Statistical Association that provides pro bono services in statistics and data science. Their focus is mostly on developing countries. They have about 800 volunteers. Their Executive Committee consists of volunteers democratically elected from within the volunteer community every two years.
6 · aarongertler · 2mo

Quick PSA: If you have an ad-blocking extension turned on while you browse the Forum, it very likely means that your views aren't showing up in our Google Analytics data. That's not something we care too much about, but it does make our ideas about how many users the Forum has, and what they like to read, slightly less accurate. Consider turning off your adblocker for our domain if you'd like to do us a tiny favor.

October 2019

Shortform [Beta]
42 · Stefan_Schubert · 3mo

The Nobel Prize in Economics [https://www.nobelprize.org/prizes/economic-sciences/2019/summary/] was awarded to Abhijit Banerjee, Esther Duflo and Michael Kremer "for their experimental approach to alleviating global poverty".
14 · Stefan_Schubert · 3mo

Link [https://schwitzsplinters.blogspot.com/2019/10/philosophy-contest-write-philosophical.html]
13 · Stefan_Schubert · 3mo

Of possible interest regarding the efficiency of science: a paper [https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0223116&fbclid=IwAR0fvF3obK8i1hRd8sVKwYd5HAJGbnqbSeyrtEwhTU9xywIRFQb3py7jZiY] finds that scientists spend on average 52 hours per year formatting papers. (Times Higher Education write-up [https://www.timeshighereducation.com/news/academics-lose-aweek-ayear-formatting-journal-papers]; extensive excerpts here [https://www.facebook.com/stefan.schubert.3954/posts/1218205841713137] if you don't have access.)
13 · jpaddison · 4mo

Thus starts the most embarrassing post-mortem I've ever written.

The EA Forum went down for 5 minutes today. My sincere apologies to anyone whose Forum activity was interrupted. I was first alerted by Pingdom [https://www.pingdom.com/], which I am very glad we set up. I immediately knew what was wrong. I had just hit "Stop" on the (long unused and just archived) CEA Staff Forum, which we built as a test of the technology. Except I had actually hit stop on the EA Forum itself. I turned it back on, and after a long minute or two it was back up.

...

Lessons learned:

* I've seen sites where, after you press the big red button that says "Delete", you have to enter the name of the service / repository / etc. you want to delete. I like those, but hadn't thought of porting the idea to sites without that feature. I think I should install a TAP [https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps] so that whenever I hit a big red button, I confirm the name of the service I am stopping.
* The speed of the fix leaned heavily on the fact that Pingdom was set up. But it doesn't catch everything. In case it misses something, I've just changed things so that anyone can email me with "urgent" in the subject line and I will get notified on my phone, even if it is on silent. My email is jp at organizationwebsite [https://www.centreforeffectivealtruism.org].
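[The "type the exact name to confirm" pattern described above is easy to bolt onto any destructive script. Below is a minimal Node sketch of that guard; the function name, prompt wording, and the "cea-staff-forum" identifier are my own illustrative choices, not anything from CEA's actual tooling.]

```typescript
// Minimal "type the exact name to confirm" guard for destructive CLI
// actions, in the style of GitHub's repository-deletion dialog.
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

async function confirmDangerousAction(serviceName: string): Promise<boolean> {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  const answer = await rl.question(
    `You are about to STOP "${serviceName}". Type its exact name to confirm: `
  );
  rl.close();
  return answer.trim() === serviceName;
}

// Usage sketch: refuse to proceed unless the operator retypes the name.
// "cea-staff-forum" is a hypothetical service identifier.
(async () => {
  if (await confirmDangerousAction("cea-staff-forum")) {
    console.log("Confirmed -- stopping service...");
  } else {
    console.log("Name mismatch -- aborting.");
  }
})();
```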
7 · Ramiro · 3mo

Why don't we have an "Effective App"? See, e.g., Ribon [https://home.ribon.io/english/] – an app that gives you points ("ribons") for reading positive news (e.g. "handicapped walks again thanks to exoskeleton") sponsored by corporations; you then choose one of the TLYCS charities, and your points are converted into a donation. Ribon is a Brazilian for-profit; they claim to donate 70% [http://blog.ribon.io/2019/09/03/conheca-o-caminho-do-dinheiro-na-ribon/] of what they receive from sponsors, but I haven't found precise stats. It has skyrocketed [http://blog.ribon.io/2019/08/19/comprovante-de-doacoes-%ef%bd%9c-abril-e-maio-de-2019/] this year: from their reported impact, I estimate they have donated about US$33k to TLYCS – which is a lot by Brazilian standards. They intend to expand (they raised more than R$1 million – roughly US$250k – from investors [https://www.startse.com/noticia/startups/60773/startup-de-doacoes-ribon-abre-nova-captacao-apos-aporte-de-r-1-milhao] this year) and will soon launch an ICO. Perhaps an EA non-profit could do even more good?
