All Posts

Sorted by Magic (New & Upvoted)

Monday, July 13th 2020

richard_ngo: One use case of the EA Forum which we may not be focusing on enough: there are some very influential people who are aware of and somewhat interested in EA. Suppose one of those people checks in on the EA Forum every couple of months. Would they be able to find content which is interesting, relevant, and causes them to have a higher opinion of EA? Or if not, what other mechanisms might promote the best EA content to their attention? The "Forum Favourites" section partly plays this role, I guess. But because it's forum regulars who are most likely to upvote posts highly, I wonder whether there's some divergence between what's most valuable for them and what's most valuable for infrequent browsers.
avacyn: Wayne Hsiung, the co-founder of Direct Action Everywhere (DxE), is running for mayor of Berkeley. He's running on a left-leaning platform that doesn't explicitly discuss animals, but he will likely focus on animal-friendly policies. For example, he wants to create a "solar powered, pedestrian-only, and plant-based Green District." DxE has been fairly controversial in the animal advocacy world, but setting aside questions about their particular tactics, having someone so animal-friendly in government could be very impactful. I just donated $50 to the campaign, which is the maximum individual donation. Does anyone know much about Berkeley politics, and whether he has a shot at winning?

Sunday, July 12th 2020

Personal Blogposts
Mati_Roy: I had a friend post on Facebook (I can't recall who it was) and a friend in person (Haydn Thomas-Rose) tell me that maybe some or most antivaxxers are actually just afraid of needles. In that case, developing alternative vaccine delivery methods, like oral vaccines, might be pretty useful. Alternative hypotheses:

* antivaxxers mostly don't like that something stays in their body, and that's what differentiates vaccines from other medicine
* antivaxxers are suspicious that *everyone* needs vaccines, and that's what differentiates vaccines from other medicine
* antivaxxers are right

Of course, it's probably a combination of factors, but I wonder which are the major ones. Also, even if the hypothesis is true, I wouldn't expect people to know the source of their belief. I wonder if we could test this hypothesis short of developing an alternative method. Maybe not. Maybe you can't just tell one person that you have an oral vaccine and have them become pro-vaccine on the spot; they might instead need broader social validation and time to transition mentally.

Friday, July 10th 2020

Personal Blogposts
evelynciara: I think we need to be careful when we talk about AI and automation not to commit the lump of labor fallacy. When we say that a certain fraction of economically valuable work will be automated at any given time, or that this fraction will increase, we shouldn't implicitly assume that the total amount of work being done in the economy is constant. Historically, automation has increased the size of the economy, thereby creating more work to be done, whether by humans or by machines; we should expect the same to happen in the future. (Note that this doesn't exclude the possibility of increasingly general AI systems performing almost all economically valuable work [/posts/G9Zc3yaT2q2rZXBbL/will-agi-cause-mass-technological-unemployment]. This could very well happen even as the total amount of work available skyrockets.)
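The point above — that a rising automated *share* of work is compatible with the absolute amount of human work growing — can be made concrete with a toy model. All numbers below are illustrative assumptions, not figures from the post:

```python
# Toy model: the automated share of work rises steadily while the total
# economy grows, so absolute human work can still increase.
# All parameters are hypothetical.

total_work = 100.0      # units of economically valuable work, year 0
automated_share = 0.2   # fraction done by machines, year 0
growth = 0.03           # annual growth of the total economy
share_growth = 0.01     # annual rise in the automated share

for year in range(1, 51):
    total_work *= 1 + growth
    automated_share = min(automated_share + share_growth, 0.95)

human_work = total_work * (1 - automated_share)
print(round(total_work, 1), round(automated_share, 2), round(human_work, 1))
```

Under these made-up parameters, the automated share rises from 20% to 70% over 50 years, yet the work done by humans grows in absolute terms (from 80 units to roughly 130), because the economy more than quadruples. The lump-of-labor fallacy is exactly the mistake of holding `total_work` fixed.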
vaidehi_agarwalla: Mini Collection - Non-typical EA Movement Building

Basically, these are ways of spreading EA ideas and philosophies, or furthering concrete EA goals, that differ from the typical community building models that local groups use.

* EA for non-EA People: External Movement Building, by Danny Lipsitz
* Community vs Network, by David Nash
* Question: What values do EAs want to promote?
* Focusing on career and cause movement building, by David Nash

Suggestions welcome!
Mati_Roy: Every once in a while, I see someone write something like "X is neglected in the EA community". I dislike that. The qualifier "in the EA community" is almost always unnecessary, and a reflection of a narrow view of the world. Generally, we should just care about whether X is neglected overall.

Wednesday, July 8th 2020

BenMillwood: Though betting money is a useful way to make epistemics concrete, sometimes it introduces considerations that tease apart the bet from the outcome and probabilities you actually wanted to discuss. Here are some circumstances in which it can be much more difficult to get the outcomes you want from a bet:

* when the value of money changes depending on the different outcomes,
* when the likelihood of people being able or willing to pay out on bets changes under the different outcomes.

As an example, I saw someone claim that the US was facing civil war. Someone else thought this was extremely unlikely and offered to bet on it. You can't make bets on this! The value of the payout varies wildly depending on the exact scenario (are dollars lifesaving or worthless?), and more to the point, the last thing on anyone's mind would be internet bets with strangers. In general, you can't make bets about major catastrophes (leaving aside the question of whether you'd want to), and even with non-catastrophic geopolitical events, the bet you're making may not be the one you intended to make if the value of money depends on the result. A related idea is that you can't sell (or buy) insurance against scenarios in which insurance contracts don't pay out, including most civilizational catastrophes. This makes it harder to use traditional market methods to capture the potential gains from (say) averting nuclear war. (Not impossible, but harder!)
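The distortion described above can be sketched as a back-of-the-envelope expected-value calculation. All the probabilities and discount factors below are hypothetical, chosen only to show the mechanism:

```python
# Toy calculation: a bet on a catastrophe pays out exactly in the state
# of the world where dollars are worth least and settlement is least
# likely. All numbers are illustrative assumptions.

p_event = 0.05       # your probability that the catastrophe occurs
payout = 100.0       # nominal dollars you receive if you win

# State-dependent adjustments (hypothetical):
value_of_dollar = 0.1   # relative real value of a dollar after the catastrophe
p_settled = 0.2         # chance your counterparty actually pays out then

nominal_ev = p_event * payout
real_ev = p_event * payout * value_of_dollar * p_settled
print(nominal_ev, real_ev)
```

Under these assumptions the bet's nominal expected value is $5, but its real expected value is $0.10 — a 50x gap. So the odds at which the bet breaks even no longer reveal the probabilities the two parties actually hold, which is the post's point about bets (and insurance) on civilizational catastrophes.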

Tuesday, July 7th 2020

KR: Thought experiment for longtermism: if you were alive in 1920 trying to have the largest possible impact today, would the ideas you came up with, without the benefit of hindsight, still have an effect today? I find this a useful intuition pump in general. If someone says "X will happen in 50 years," I imagine looking at 2020 from 1970 and asking how many predictions of that sort, made then, would have turned out accurate. The world in 50 years will be at least as hard for us to imagine (hopefully more so, given exponential growth) as the world of today would have been from 1970. What did we know? What did we completely miss? What kinds of systematic mistakes might we be making?

Monday, July 6th 2020

EdoArad: Convergence (in economics) is the idea that poorer countries will grow faster than rich countries, and as a result their incomes will eventually converge. In my naive intuition, I had always imagined richer countries (or sub-communities within them) developing faster than lower-income countries through some form of accelerating endogenous growth. I would be very interested in reading someone's take on the relevance of these considerations to EA, as I notice my worldview depends heavily on my beliefs about convergence. It feels important both for global poverty and for longtermism: I'd expect a multipolar world if we have convergence and a singleton if we have strong divergence, and I think there can be convincing arguments here.
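The convergence idea can be sketched with a minimal catch-up model, in which a country grows faster the further it sits below the income frontier (a stylized toy, not a calibrated growth model; every parameter below is an assumption for illustration):

```python
# Minimal sketch of catch-up convergence: extra growth is proportional
# to a country's gap from the frontier, so poorer countries grow faster
# and the income ratio shrinks. All parameters are illustrative.

frontier = 60_000.0        # frontier income per capita
rich, poor = 50_000.0, 5_000.0
base_growth = 0.02         # annual growth at the frontier
catch_up = 0.03            # catch-up strength, scaled by the gap

for year in range(50):
    frontier *= 1 + base_growth
    rich *= 1 + base_growth + catch_up * (1 - rich / frontier)
    poor *= 1 + base_growth + catch_up * (1 - poor / frontier)

print(round(rich / poor, 2))  # ratio shrinks from the initial 10x
```

The divergence intuition in the post corresponds to flipping the sign of the gap term (growth *increasing* with income, as in some endogenous-growth stories), under which the ratio widens instead, which is roughly why beliefs about this one term matter for the multipolar-vs-singleton question.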
