FWIW, I found the interview with SBF to be quite fair, and imho it presented Sam in a neutral-to-positive light (though perhaps a bit quirky). Teddy's more recent reporting/tweets about Sam also strike me as both fair and neutral-to-positive.
Hmm, culturally YIMBYism seems much harder to do in suburbs/rural areas. I wouldn't be too surprised if the easiest ToC here is to pass YIMBY-energy policies on the state level, with most of the support coming from urbanites.
But sure, still probably worth trying.
I thought YIMBYs were generally pretty in favor of this already? (Though not generally as high a priority for them as housing.) My guess is it would be easier to push the already existing YIMBY movement to focus on energy more, as opposed to creating a new movement from scratch.
Not just EA funds – I think (almost?) all random, uninformed EA donations would be much better than donations to an index fund covering all charities on Earth.
"if one wants longtermism to get a few big wins to increase its movement building appeal, it would surprise me if the way to do this was through more earning to give, rather than by spending down longtermism's big pot of money and using some of its labor for direct work"
I agree – I think the practical implication is more "this consideration updates us towards funding/allocating labor towards direct work over explicit movement building" and less "this consideration updates us towards E2G over direct work/movement building".
"because of scope insensitivity, I don't think potential movement participants would be substantially more impressed by $2*N billions of GiveDirectly-equivalents of good per year vs just $N billions"
Agree (though potential EAs may be more likely to be impressed with that stuff than most people), but I think qualitative things that we could accomplish would be impressive. For instance, if we funded a cure for malaria (or cancer, or ...) I think that would be more impressive than if we funded some people trying to cure those diseases but none of the people we funded succeeded. I also think that people are more likely to be attracted to AI safety if it seems like we're making real headway on the problem.
I think you answered your own question? The index fund would just allocate in proportion to current donations, reducing both overhead for fund managers and the need to trust the managers' judgement (other than for deciding which charities do/don't qualify to begin with). I'd imagine the value of the index fund might increase as EA grows and the number of manager-directed funds increases (as many individual donors wouldn't know which manager-directed fund to give to, and the index fund would track donations as a whole, including those to manager-directed funds).
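To make the mechanics concrete, here's a minimal sketch of the proportional allocation rule (the org names and dollar totals are hypothetical, purely for illustration):

```python
# Minimal sketch of the proposed "donation index fund": split an incoming
# donation across orgs in proportion to the donations each org received
# over some trailing window (e.g., the past year).
# Org names and totals are hypothetical, for illustration only.

def index_allocation(donation: float, trailing_totals: dict) -> dict:
    """Allocate `donation` in proportion to each org's share of past donations."""
    total = sum(trailing_totals.values())
    return {org: donation * amt / total for org, amt in trailing_totals.items()}

last_year = {"Org A": 6_000_000, "Org B": 3_000_000, "Org C": 1_000_000}
print(index_allocation(1_000, last_year))
# {'Org A': 600.0, 'Org B': 300.0, 'Org C': 100.0}
```

The fund managers' only judgment call is which orgs qualify for inclusion; the weights themselves are mechanical.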
This looks good! One possible modification that I think would enhance the model would be an arrow from "direct work" or "good in the world" to "movement building" – I'd imagine that the movement will be much more successful in attracting new members if we're seen as doing valuable things in the world.
Presumably someone (or a group) would have to create a list (potentially after creating an explicit set of criteria), and then the list would be updated periodically (say, yearly).
Should there be an "EA Donation Index Fund" that allows people to simply "donate the market" (similar to how index funds tracking the S&P 500 allow for simply buying the market)? This fund could allocate donations to EA orgs in proportion to the total donations those orgs receive (from EA sources?) over the year (it would perhaps make sense for there to be a few such funds – such as one for EA as a whole, one for longtermism, one for global health and development, etc).
I see a few potential benefits:
• People who want to donate effectively (and especia...
FWIW, I don't think there's a cost in academia for looking a little bit different if doing so makes you look a bit better (at least if we're talking about within the US – other countries may be different). Yes, an unkempt, big bushy beard would presumably be a negative (though less so in academia than in other professions), but stylish hairstyles like Afro buns or cornrows might even be a slight positive.
Lysenkoism was used by central planners to attempt to improve Soviet agricultural output, and, unsurprisingly, exacerbated famines. This is just one example of how dumb Soviet central planners were on critical issues. I doubt the Soviet space program would have worked as well as it did if the thinking of their rocket scientists was at a similar level to that of those running their economy.
DSCC's goals are just to elect Democrats – they don't consider, for instance, how different Democrats differ on EA criteria such as biosecurity. Donating to particularly aligned candidates (especially in primaries) is probably higher value than donating to existing (non-EA) funds.
I agree more nuance in the headline would have been better (e.g., if it included the word "potentially" to say "There's potentially a role for small EA donors in campaign finance"), but note that's effectively what the body of the piece says, such as here: "consider that election campaign contributions might be a way in which you can have a substantial impact as a small donor" (emphasis added).
“Economics can be harder than rocket science: the Soviet Union was great at rocket science”
This is a good quote, but it seems a little unfair. The Soviets' rocket scientists were brilliant scientific thinkers, while their economic planners really were not. I don't think we have clear evidence one way or the other regarding how well central planning would work if the central planners were particularly smart people with good epistemic hygiene.
"Hey, I think I'm going to mingle some. [Optional: This was interesting/Thanks for telling me about XYZ, I'll look into it/Good luck with ABC/whatever makes sense given the context]"
Yeah, I think the community response to the NYT piece was counterproductive, and I've also been dismayed at how much people in the community feel the need to respond to smaller hit pieces, effectively signal boosting them, instead of just ignoring them. I generally think people shouldn't engage with public attacks unless they have training in comms (and even then, sometimes the best response is just ignoring).
Hmm, thinking personally, my tweets are definitely more off the cuff and don't live up to the same standard of rigor as my academic papers. I think this is reasonable, since that's what people expect from tweets vs academic papers, so I expect the audience will update differently based on them. Also, it's probably good for society/the marketplace of ideas for there to be different venues with different standards (e.g., op-eds vs news articles; preprints vs peer-reviewed papers, etc). The case here seems potentially* somewhat similar (let's say, hypot...
"the EPA has ranked us either number one or two of US companies in pollution reduction initiatives"
This kinda makes me laugh, because the only way to be the company that reduces its pollution the most is to be polluting a ton in the first place. This is like saying "I know I'm a hero, because in the past year I've reduced the annual number of people I kill more than anyone else".
Reminds me of Nixon's famous invocation of the third derivative:
When campaigning for a second term in office, U.S. President Richard Nixon announced that the rate of increase of inflation was decreasing, which has been noted as "the first time a sitting president used the third derivative to advance his case for reelection."
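To spell out the calculus behind the quip (my gloss, not part of the quote): if $p(t)$ is the price level, then inflation is roughly $p'(t)$, the "rate of increase of inflation" is $p''(t)$, and the claim that this rate was decreasing is a claim that

$$p'''(t) < 0,$$

i.e., prices can still be rising, and inflation can still be increasing, while the pace of that increase slows.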
I regret taking the pledge
I feel like you should be able to "unpledge" in that case, and further I don't think you should feel shame or face stigma for doing so. There are a few reasons I think this:
Here is the relevant version of the pledge, from December 2014:
I recognise that I can use part of my income to do a significant amount of good in the developing world. Since I can live well enough on a smaller income, I pledge that for the rest of my life or until the day I retire, I shall give at least ten percent of what I earn to whichever organisations can most effectively use it to help people in developing countries, now and in the years to come. I make this pledge freely, openly, and sincerely.
A large part of the point of the pledge is to bind your ...
"the additional risk to a healthy young person is probably a much smaller sacrifice than 10% of one's lifetime earnings"
FWIW, I'm also against people saying "EAs should give at least 10% of their income to charity" – this makes people who don't want to make that sort of commitment feel unwelcome, and my sense is that rhetoric along those lines has hurt movement growth.
Pedantic, but I'm somewhat uncomfortable with the rhetoric of whether EAs "should" sign up for this (as in, they have an obligation to do so, which they are failing to live up to if they don't), given the personal risks involved. (I think it's reasonable to have a discussion on the object-level question of whether signing up scores well by EA lights – I'm not objecting to that – though I don't personally have a formed opinion on this question either way.)
I think a better framing might be: projects that Open Phil and other funders would be inclined to fund at ~$X (for some large X, not necessarily $100M), and that have cost-effectiveness similar to, or better than, their current last dollar in the relevant causes.
I think I disagree and would prefer Linch's original idea; there may be things that are much more cost-effective than OPP's current last dollar (to the point that they'd provide >>$100M of value for <<$100M to OPP), but which can't absorb $X (or which OPP wouldn't pay $X for, due to other reasons).
"Michael Dickens has already done a bunch of work on this"
Can you link to this work?
Arguably yes. Early British abolitionists were clearly influenced by American abolitionists, and abolitionism in Britain (and to a lesser degree America) was a major factor in the success of abolitionism in other countries. The big uncertainties here are: 1) how deterministic vs stochastic the success of abolitionism was, and 2) even if it was very stochastic/we got "lucky", how important Lay in particular was for tipping success over the edge.
The other thing I'll say about this is to read Will MacAskill's book on longtermism (What We Owe the Future) when ...
Some more good news: it looks like the US is going to be spending $555B over the next 10 years to combat climate change. Hopefully a decent chunk of this will be spent somewhat effectively.
Benjamin Lay. Probably did more than anyone else to kick off the abolitionist movement. There's a not-too-crazy story under which if not for him, slavery might still be common throughout the world today. (And under the same world model, the further rights advances/moral circle expansion that followed abolitionism – e.g., women's rights, gay rights, animal rights, etc – likely wouldn't have occurred either.)
Was he causally responsible for British (etc.) abolitionism, and not just American?
Btw I started reading his pamphlet against slavery, and I really appreciate this intro:
Written for a General Service, by him that truly and sincerely desires the present and eternal Welfare and Happiness of all Mankind, all the World over, of all Colours, and Nations, as his own Soul; BENJAMIN LAY.
I think the update is less about attempting to become a multi-billionaire vs direct work, and more about attempting to become a multi-billionaire over other E2G work.
I think one large argument against what you're saying is that spending/direct work attracts more people to the movement (some of whom will do E2G), and might even have a higher ROI, just looking at the movement's financials, than investing/E2G (this argument comes from Owen here).
Also, since there are so few people now in a position to do direct work, it seems like the value of a marginal person doing so is quite high, and much higher than the value of the equivalent labor of a marginal person doing EA-funded work in the future once we've figured out how to scale up o...
I like Bostrom and Shulman's compromise proposal (below) – turn 99.99% of the reachable resources in the universe into hedonium, while leaving 0.01% for (post-)humanity to play with.
Some people at FHI have had random conversations about this, but I don't think any serious work has been done to address the question.
"If/When the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice."
It's true that setting up institutions earlier allows for more practice, and I suspect the act is probably good on the whole, but it's also worth considering potential negative aspects of setting up institutions earlier. For example:
"I often read that we should be wary of backlash in case anti immigrant parties get into power, but if that's stopping us pass immigration measures those parties are getting what they want anyway."
This assumes that the only negative aspect of anti-immigrant parties is their anti-immigrant stance. If they're also worse on other metrics, then the logic doesn't necessarily hold.
Hmm, I'm not sure if that's true. People really like animals, people find emerging technology/futurism interesting, and even some of the weirder ideas (e.g., philosophy of mind, aliens) are captivating to people (at least when dumbed down somewhat). Contrast these ideas with wonky political ideas like monetary policy or open borders, and I'd guess that EA issues come out ahead of neoliberal issues on interest.
Personal anecdote possibly relevant for 2): EA Global 2016 was my first EA event. Before going, I had lukewarm-ish feelings towards EA, due mostly to a combination of negative misconceptions and positive true-conceptions; I decided to go anyway somewhat on a whim, since it was right next to my hometown, and I noticed that Robin Hanson and Ed Boyden were speaking there (and I liked their academic work). The event was a huge positive update for me towards the movement, and I quickly became involved – and now I do direct EA work.
I'm not sure that a different ...
"War seems to be the only endeavor Americans feel good about"
As an American, I found this statement to be unnecessarily hostile. I know you're being hyperbolic, but I think the forum would be better if it didn't have language like this.
Also the cost of noise, and possibly outside pollution (though that can be addressed with HEPA filters & ozone filters)
"There is a part of me which finds the outcome (a 30 to 40% success rate) intuitively disappointing"
Not only do I somewhat disagree with this conclusion, but I don't think this is the right way to frame it. If we discard the "Very little information" group, then there's basically a three-way tie between "surprisingly successful", "unsurprisingly successful", and "surprisingly unsuccessful". If a similar number of grants are surprisingly successful and surprisingly unsuccessful, the main takeaway to me is good calibration about how successful funded grants are likely to be.
"I definitely don't think that a world without suffering would necessarily be a state of hedonic neutral, or result in meaninglessness"
Right, it wouldn't necessarily be neutral – my point was that your definition of Type III allows for a neutral world, not that it requires it. I think it makes more sense for the highest classification to be specifically for a very positive world, as opposed to something that could be anywhere from neutral to very positive.
If you expect your donation to be ~10x more valuable if one political party is in power, then it probably makes more sense to just hold* your money until they are in power. I suppose the exception here would be if you don't expect the opportunity to come up again (e.g., if it's about a specific politician being president, or one party having a supermajority), but I don't see a Biden presidency as presenting such a unique opportunity.
*presumably actually as an investment
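To make the hold-your-money arithmetic concrete (illustrative numbers, not figures from the post): let $m$ be the value multiplier when the favorable party is in power (here $\sim 10$), $q$ the probability they hold power at some point in your horizon, and $g$ the growth factor on the invested money over the wait. Normalizing the value of donating now to 1,

$$\text{EV(wait)} \approx g\,[\,q \cdot m + (1-q) \cdot 1\,],$$

so e.g. $q = 0.5$, $g = 1.2$, $m = 10$ gives $\text{EV(wait)} \approx 6.6$, and waiting dominates unless $q$ or $g$ is very small.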
Thank you jackva. Great points on this specific example.
In general, suppose we didn't think this was a special moment. Then essentially that would mean we think 'investing to give' also presents a good opportunity. If 'investing to give' is also 10x CCF under Trump, then indeed you would want to just wait and either give under Biden or invest to give. But if 'investing to give' is only 5x CCF, then we're in the scenario I discussed under 'More general context'. So, fair point – I have added a sentence to the main post to explicitly rule out 'investing to give' b...
We have two things going on here, beyond just the partisan switch (discussed in more detail in the report), that do make this a special moment unlikely to recur.
(1) Elevated importance: The importance of the 2020 election for climate policy was much elevated because of COVID-related stimulus spending; the difference between Trump and Biden is much starker than the difference between Trump and Clinton was in 2016 because of the much enlarged policy opportunity.
(2) Carbon lock-in: The leverage that US climate policy has is declining sharply as its main benefits in terms ...
So I like this idea, but I think the exclusively suffering-focused viewpoint is misguided. In particular:
"In a Type III Wisdom civilization, nothing and no one has to experience suffering at all, whether human, non-human animal, or sentient AI"
^this would be achieved if we had a "society" entirely of sentient AI that were always at hedonic neutral. Such lives would involve experiencing no sense of joy, wonder, meaning, friendship, or love – just totally apathetic sensing of the outside world and meaningless pursuit of activity. It's hard to imagine thi...
I'm not sure how well the analogy holds. With GPL, for-profit companies would lose their profits. With the AI Safety analog, they'd be able to keep 100% of their profits, so long as they followed XYZ safety protocols (which would be pushing them towards goals they want anyway – none of the major tech companies wants to cause human extinction).
This is correct.
So framing this in the inverse way – if you have a windfall of time from "life" getting in the way less, you spend that time mostly on the most important work, instead of things like extra meetings. This seems good. Perhaps it would be good to spend less of your time on things like meetings and more on things like research, but (I'd guess) this is true whether or not "life" is getting in the way more.
It seems like one solution would be to pay people more. I feel like some in EA are against this because they worry high pay will attract people who are just in it for the money – this is an argument for perhaps paying people ~20% less than they'd get in the private sector, not ~80% less (which seems to be what some EA positions pay relative to the skills they'd want for the hire).
Thank you for this post; I thought it was valuable. I'd just like to flag that regarding your recommendation, "we could do more to connect “near-term” issues like data privacy and algorithmic bias with “long-term” concerns" – I think this is good if done in the right way, but can also be bad if done in the wrong way. More specifically, insofar as near-term and long-term concerns are similar (e.g., lack of transparency in deep learning means that we can't tell if parole systems today are using proxies we don't want, and plausibly could mean that we won't kno...
Humans seem like (plausible) utility monsters compared to ants, and many religious people have a conception of god that would make Him a utility monster ("maybe you don't like prayer and following all these rules, but you can't even conceive of the - 'joy' doesn't even do it justice - how much grander it is to god if we follow these rules than even the best experiences in our whole lives!"). Anti-utility monster sentiments seem to largely be coming from a place where someone imagines a human that's pretty happy by human standards, and thinks the words "orders of magnitude happier than what any human feels", and then they notice their intuition doesn't track the words "orders of magnitude".
"
"
Because if (say) only 1/10^30 stars has a planet with just the right initial conditions to allow for the evolution of intelligent life, then that fully explains the Great Filter, and we don't need to posit that any of the try-try steps are hard (of course, they still could be).
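Back-of-the-envelope, with an assumed star count (not a figure from the thread): with on the order of $10^{22}$–$10^{24}$ stars in the observable universe, a per-star probability of $10^{-30}$ gives an expected number of independently arising intelligent civilizations of

$$E[N] \approx 10^{24} \times 10^{-30} = 10^{-6} \ll 1,$$

so the silence is fully accounted for before we even ask whether any of the later evolutionary steps is hard.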