MatthewDahlhausen

Comments

Does it make sense for EA’s to be more risk-seeking in earning to give?

Thanks for laying out the math here. Given the high variance we've seen in the community, it does suggest E2Gers should go for high-risk, high-reward choices.

There is one further consideration I think needs to be added to the higher-variance scenario: donations are now concentrated in fewer people's hands.

If all the E2G people donate to the same charity, then it makes sense to have higher variance in giving, as you laid out. However, if the E2G people give to different charities, donations are now skewed towards the preferred charity of the lucky entrepreneur.

One way to think of this is in terms of dollars donated per thought-hour. Assume each donor spends 10 hours thinking about where to donate, and that the lucky entrepreneur spends 20 hours deciding where to donate. In the lower-variance scenario, 10 donors give $10,000 each, so ($10,000 * 10) / (10 * 10 hours) = $1,000 donated per thought-hour. In the higher-variance scenario, 8 donors give $40,000 total over 10 hours each and the lucky entrepreneur gives $275,000 over 20 hours; weighting each pool's dollars-per-thought-hour by the dollars it directs gives (($40,000 / 80 hours) * $40,000 + ($275,000 / 20 hours) * $275,000) / $315,000 ≈ $12,067 donated per thought-hour. We've traded a 3.15x increase in donations for roughly 4x less thought per dollar (12,067 / (3.15 * 1,000) ≈ 3.8).
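
To make the arithmetic easy to check, here's a minimal back-of-envelope sketch in Python using the same illustrative numbers as above (the donor counts, hours, and dollar figures are the ones from the scenario, not independent estimates):

```python
# Back-of-envelope check of the dollars-per-thought-hour figures above.

# Lower-variance scenario: 10 donors, $10,000 each, 10 thought-hours each.
low_total = 10 * 10_000
low_hours = 10 * 10
low_rate = low_total / low_hours  # $1,000 donated per thought-hour

# Higher-variance scenario: 8 donors give $40,000 total (10 hours each),
# plus one lucky entrepreneur giving $275,000 after 20 hours of thought.
group_dollars, group_hours = 40_000, 8 * 10
entrepreneur_dollars, entrepreneur_hours = 275_000, 20
high_total = group_dollars + entrepreneur_dollars  # $315,000

# Dollar-weighted average of each pool's dollars-per-thought-hour.
high_rate = (
    (group_dollars / group_hours) * group_dollars
    + (entrepreneur_dollars / entrepreneur_hours) * entrepreneur_dollars
) / high_total

print(low_rate, round(high_rate))   # 1000.0 12067
print(high_total / low_total)       # 3.15x more money donated
print(round(high_rate / (high_total / low_total * low_rate), 1))  # ~3.8x less thought per dollar
```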

So while it's great to have more money to donate, it'd be nice for the E2G givers to pre-commit to a target charity, fund, donor lottery, or collective decision-making process, so as not to dilute the thoughtfulness behind donations. Another option is for the lucky entrepreneur to allocate donation decisions for a small portion of their giving to the other E2G people (say $5k each), which would gain back all the lost thoughtfulness.

Some potential lessons from Carrick’s Congressional bid

Thanks for clarifying. I agree with you that if the main reason you are supporting a candidate is their potential impact on long-term-future-oriented policy, then the opposing candidate doesn't matter much beyond a simple estimate of their electoral chances vs. your candidate's.

Some potential lessons from Carrick’s Congressional bid

Can you elaborate?

I expected this critique when I wrote that claim. I think I understand why someone would see the other candidate as being insignificant. Let me know if I'm presuming the wrong reasons here:

It seemed that the Flynn campaign message was all about pandemic preparedness. At least that's how it was marketed in EA spaces. And it's mostly true that there isn't anybody in Congress championing pandemic preparedness. If you are a single-issue voter on pandemic preparedness or AGI, I can see how the opposing candidate doesn't matter to you; your candidate will do more for the cause than any other candidate, regardless of party, since the others likely don't care or have an opinion on it. It's more of a binary. If you care about existential risks much more than anything else, this reasoning makes sense.

But if you care about other causes like animal welfare, local or global poverty, climate change, the health of democracy, etc., chances are the other candidate does have views on them. If they are a progressive Democratic candidate like Andrea Salinas, EA-aligned poverty alleviation, climate change action, and voting reform are significant parts of their platform. Also, one of the key issues in the U.S. presently is whether we are going to retain a semblance of a democracy or whether elections are going to be decided by super PACs and gerrymandered state legislatures. There is a clear partisan divide on support for EA-aligned voting reform and on bans of alternative voting methods. If you care about being able to influence elections through public appeals, maintaining a functioning democracy matters even if you are a single-issue voter. Given an equal chance of winning, would you rather the EA candidate run against someone like Andrea Salinas or Madison Cawthorn?

Some potential lessons from Carrick’s Congressional bid

I'm glad to see EAs running for political office explicitly as EAs. But I hope the attitude and approach the EA community took towards the Flynn campaign doesn't become the norm. I felt that the campaign was intrusive and pushy, and that the standard of care was much lower than what we expect for other causes and interventions.

Some points:

  • I got direct campaign emails from the Flynn campaign, even though I never signed up for them. Presumably some EA organization gave the Flynn campaign a list of emails, or they scraped it off some EA website. I would prefer EA organizations to keep contact information private and adopt an "opt-in" policy for sharing emails. I don't want to get spammed by people asking for money for causes or campaigns, especially if EA political campaigns become more frequent.
  • One of my local group co-organizers got a personal appeal from the Flynn campaign in the final days of the election, asking them to fly to Oregon to door-knock for the campaign because it was high expected value. Not only is it a troubling sign that the campaign did not already have a large local base of door-knockers, but the campaign didn't seem to consider the terrible optics of paying people to fly in from out of state to door-knock for a few days. This seems anti-democratic.
  • This primary was flooded with billionaire super PAC money. This is part of an ongoing trend of billionaires buying political power and is detested within the progressive community. It's undemocratic, and we should be cautious about engaging in politics through billionaire money, even if it is 'our' billionaire, and especially if the EA candidate is running in a progressive Democratic primary. Even if you think democracy is just an instrumental good, you should be worried about the capacity of billionaires to heavily influence elections.
  • The campaign language and the EA posts about it, including this one, center entirely on Flynn and not on the winner, Andrea Salinas, who is also an excellent candidate. The values and views of the candidates an EA candidate would displace should be a significant consideration in whether to support the campaign. It may be more effective to make EA a constituency for lawmakers, rather than just supporting EA candidates running against progressives.

Furthermore, I'm not sure the information value alone was worth the millions the EA community spent on this campaign. The 'lessons learned' listed in this forum post seem obvious. I googled "tips for running for congress" and in 10 minutes read through several resources that gave most of these same lessons. I expect a 30-minute call with a Democratic strategist, of which there are several in the EA movement, would have given the same lessons too, and probably a more accurate prediction of the election outcome than the prediction markets cited in this post. Flynn got roughly half the votes of the leading candidate, which is more of a blowout than the prediction markets suggested. I frequently see parts of the EA community think they've found some fascinating new insight (EA movement learns about 'X') when in fact they are just columbusing knowledge from other communities. It's as if some piece of knowledge must be blessed or learned directly by a well-known EA before it's accepted by the community at large. A little less hubris and a little more humility towards other knowledge domains would save quite a bit of effort and resources when learning about things like running for Congress.

EA and the current funding situation

Here's a prediction: In the not-too-distant future, someone who calls themselves an effective altruist is going to purchase a private plane or helicopter and justify it by saying the time it saves, and the amount of extra good they can do with that saved time, is worth the expense. The community is going to have a large contingent that disagrees and sees it as a wasteful extravagance, and a smaller but vocal contingent that will agree with the purchase as a worthwhile tradeoff, especially if that person is part of a sub-community within EA that is ok with more speculative expected value calculations. Instead of there being a clear, coordinated response disavowing the purchase as extravagant, the community is going to hesitate, argue about the extent to which it is good to feed utility monsters, and be muted in its outward response. But that's not going to stop the wider media from picking up the story. A fraction of the public will henceforth liken EAs to megachurch pastors with private jets who use do-gooder justifications for selfish purposes. And yes, you could construct some sort of hypothetical where someone needs a helicopter to more quickly fly between trolley levers to save a bunch of people. But the much more likely scenario is that someone wants a helicopter, is fine using an iffy, cursory justification for it, and the trolley brakes are working just fine.

Against immortality?

Humanity and society are weird. By some cosmic fluke involving brains and thumbs, we figured out how to mold the landscape to grow our food, and later on figured out how to access million-year-old energy deposits in the lithosphere.

We are less than two centuries out from the beginning of industrialized society and we have no clue how to balance energy and resource flows to sustain civilization beyond a few more centuries. And now some of us apes are thinking, "hey, how about we don't die?" as if the current weird state of things somehow represents some new normal of human existence.

There has been ample debate around "strong sustainability" vs. "weak sustainability", which centers on how much technological substitution can overcome increasing environmental pressures. People have been using specific, limited examples of weak sustainability holding true (see the debates around Limits to Growth) to argue against strong sustainability. It's one thing to argue that we can change planetary limits / carrying capacity, and another to say that those limits don't exist. Limits exist; that falls out of basic thermodynamics.

Pursuing life extension beyond a few centuries seems reckless without figuring out how to do strong sustainability first. With limits, resources are zero-sum beyond some geologic replenishment rate; longer lives trade off against other human, non-human animal, and plant life, or buy down the resources available to people in the future. I would expect longtermists to be especially cautious about how reckless life extension could be, given limits.

Avian influenza is causing farmers to kill millions of chickens

Vox's most recent Future Perfect newsletter linked to this piece of investigative journalism by the Intercept on the use of VSD: https://theintercept.com/2022/04/14/killing-chickens-bird-flu-vsd/ [EDIT: warning that the article includes a video that shows a chicken suffering as it dies]

An uncomfortable thought experiment for anti-speciesist non-vegans

"...seems morally okay as long as the clothes allow you to have more positive impact with your career."

Utilitarian calculations need to be justified beyond just piling up more things in the "positive" bin than the "negative" bin. An often-used thought experiment asks whether it is okay for a doctor to kill a healthy patient in a hospital and donate their organs to five needy patients so that they may live. While utilitarians might justify this the way you did, that justification looks unfounded if there is a recently deceased organ donor in the morgue at a nearby hospital who could provide all the same organs. How is killing the healthy patient justified then? Would we see the utilitarian doctor as still justified if they said, "It's annoying to have to drive over to the other hospital, fill out paperwork, get the organs, then drive back. It is still a net positive to kill the healthy patient here, and it's easier for me, so I'll just do that."?

Considering your analogy, it is easy to buy clothes that didn't require slave labor, and even if it weren't, it is hard to see how a specific set of slave-produced clothes would benefit your career more than the suffering they caused.

Bringing it back to animals, the equation isn't the negative of animal suffering against the positive of your career; it's the negative of animal suffering against the marginal career cost, if any, of switching to a vegetarian or vegan diet, which is much lower. You can understand why many would see the claim that the animal suffering is worth the marginal personal inconvenience it saves as dubious and particularly self-serving.

A review of Our Final Warning: Six Degrees of Climate Emergency by Mark Lynas

I came away with the same impression when I read it. Thanks for taking the time to highlight specific examples of misinterpretation and lack of nuance. And for running it by the original study authors.

After reading quite a bit of climate doomer literature like Six Degrees, I've become less interested in the extent of exaggeration and portrayed helplessness, and more interested in why people are telling the climate story this way. It seems counter-productive. It gives fodder to opponents of action to say the problem is exaggerated. And for the scrupulous, it creates noise and the possibility of over-correction or over-reaction. I'm worried the EA movement will develop a well-founded bias to dismiss or ignore studies of potentially serious climate impacts because of the extent of media exaggeration of scientific studies. Looking forward to your climate risk report, which I hope will mitigate some of the effects of bad climate science writing.
