
Larks


I don't understand what the title has to do with the body of the text. 'Meme' either means a unit of cultural information or a funny picture with text; EA is definitely not 'just' the latter, and it is the former to the same extent that environmentalism or any other movement is.

It's probably to do with the fact that being in finance (banker, consultant, whatever) is pretty much something any jackass can do. Pushing money around, dealing with people, risk assessment, you can pretty much just turn your brain off really, especially compared to technical fields. But banking is not only not a very useful job, it's also incredibly morally dubious to work for companies that do fuck all for the world aside from scam customers and invest the money in fossil fuel industries and terrorist organizations. Like OK, yeah, better you have the job than some schmuck who wouldn't donate anything and would spend the money on cars and luxury homes, but there are other jobs you can get that are not only useful, but comparative in their income.

This is probably the single most ignorant paragraph I have ever read about the financial industry.[1] The sorts of finance jobs EAs do for EtG are not easy; they are some of the most competitive and challenging jobs in the world. Nor is it the case that scamming, fossil fuel investment and investing in terrorist organizations (how would that even work? Does Al-Qaeda pay dividends?) is all they do. This guy is apparently aware that financial companies invest in fossil fuel companies - does he think there is some other industry that handles investment in all other types of firms, including green infrastructure? Finance plays a number of important roles in society, from facilitating transactions to matching savers and borrowers to allowing people to adjust their risk to vetting and due diligence to forecasting the future. These are valuable services and people voluntarily pay to use them. There are some valid criticisms of the industry, but this guy is just so ill-informed I seriously think your "friend" should start by reading a basic Wikipedia article on the subject.

... and medicine and STEM are only comparable in income to the extent that you can compare them and observe them to be lower.

 

  1. ^

    Or possibly the management consulting industry? Who knows, not the author, that's for sure.

Thanks very much for this detailed analysis and write-up, I really appreciate when people exhibit this level of self-evaluation.

  • For the far future to be a significant factor in moral decisions, it must lead to different decisions compared to those made when only considering the near future. If the same decisions are made in both cases, there is no need to consider the far future.
  • Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems like health crises, hunger, and conflict.

This seems a very strange view. If we knew the future would not last long - perhaps a black hole would swallow up humanity in 200 years - then the future would not be very vast, it would have less moral weight, and aiding it would be less demanding. Would this really leave longtermism more palatable to the critics?

One new thing to me in that thread was that the California Legislature apparently never overrides the governor's vetoes. I wonder why this is the case there and not elsewhere.

My understanding is a lot of that is just that consumers didn't want them. From the first source I found on this:

Safety-conscious car buyers could seek out—and pay extra for—a Ford with seatbelts and a padded dashboard, but very few did: only 2 percent of Ford buyers took the $27 seatbelt option.

This is not surprising to me given that, even after the installation of seatbelts became mandatory, it was decades until most Americans actually used them. Competition encouraged manufacturers to 'cut corners' on safety in this instance precisely because that was what consumers wanted them to do.

I think you probably want:

( CO2 averted * Social Cost of Carbon ) - Economic Costs

Your equation will give an infinite score to a policy which could avert 1 gram of CO2 for zero cost, even if it was totally non-scalable.
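To make the difference concrete, here is a minimal sketch contrasting the two scoring rules. All the numbers are hypothetical, including the $185/tonne figure used for the social cost of carbon; the point is only that a ratio-style score explodes as costs approach zero, while the net-benefit score stays proportional to total impact.

```python
SCC = 185  # assumed social cost of carbon, $ per tonne of CO2 (illustrative)

def ratio_score(co2_averted_tonnes, economic_cost):
    """Ratio-style score: blows up as cost -> 0, regardless of scale."""
    return co2_averted_tonnes / economic_cost if economic_cost > 0 else float("inf")

def net_benefit_score(co2_averted_tonnes, economic_cost):
    """(CO2 averted * Social Cost of Carbon) - Economic Costs, in dollars."""
    return co2_averted_tonnes * SCC - economic_cost

# A tiny, free, non-scalable policy vs a large, costly one (made-up figures):
tiny = (0.000001, 0.0)         # 1 gram averted at zero cost
big = (1_000_000, 50_000_000)  # 1M tonnes averted at $50M cost

print(ratio_score(*tiny))        # inf: the ratio rule ranks the tiny policy first
print(net_benefit_score(*tiny))  # ~0.000185 dollars: correctly negligible
print(net_benefit_score(*big))   # 135000000.0 dollars
```

Under the net-benefit rule the non-scalable freebie scores essentially zero, while the large policy's score reflects its actual social value.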

It seems strange to me to call a perfectly normal work week 'neglecting leisure'. Typically when people are thinking about the argument that altruistic considerations mean they should work more than normal people they are talking about unusually long work weeks.


I think experimentation with new approaches is good, so for that reason I'm a fan of this.

When I evaluate your actual arguments for this particular mechanism design though, they seem quite weak. This makes me worry that, if this mechanism turns out to be good, it will only be by chance, rather than because it was well designed to address a real problem.

To motivate the idea you set up a scenario with three donors, varying dramatically in their level of generosity:

  • Donors 1 and 3 both think animals matter a lot, but Donor 3 is skeptical of the existing charities. Donor 1 doesn’t have access to the information that makes Donor 3 skeptical. It’s unclear if Donor 3 is right, but aggregating their beliefs might better capture an accurate view of the animal welfare space.
  • Donor 2 knows a lot about their specific research area, but not other areas, so they just give within GCRs and not outside it. They’d be happy to get the expertise of Donors 1 and 3 to inform their giving.
  • All three are motivated by making the world better, and believe strongly that other people have good views about the world, access to different information, etc.

I struggle to see how this setup really justifies the introduction of your complicated donation pooling and voting system. The sort of situation you described already occurs in many places in the global economy - and within the EA movement - and we have standard methods of addressing it, for example:

  • Donor 3 could write an article or an email about their doubts.
  • Donor 1 could hire Donor 3 as a consultant.
  • Donor 1 could delegate decisions to Donor 3.
  • Donor 2 can just give to GCR, this seems fine, they are a small donor anyway.
  • They could all give to professionally managed donation funds like the EA funds.

What all of these have in common is they attempt to directly access the information people have, rather than just introducing it in a dilute form into a global average. The traditional approach can take a single expert with very unusual knowledge and give them major influence over large donors; your approach gives this expert no more influence than any other person.

This also comes up in your democracy point:

Equal Hands functions similarly to tax systems in democracies — we don’t expect people who pay more in taxes to have better views about who should be elected to spend that tax money.

The way modern democratic states work is decidedly not that everyone can determine where a fraction of the taxes go if they pay a minimum of tax. Rather, voters elect politicians, who then choose where the money is spent. Ideally voters choose good politicians, and these politicians consult good experts. 

One of the reasons for this is that it would be incredibly time-consuming for individual voters to make all these determinations. And this seems to be an issue with your proposal also - it simply is not a good use of people's time to be making donation decisions and filling in donation forms every month for very small amounts of money. Aggregation, whether through large donors (e.g. the donation lottery) or professional delegation (e.g. the EA funds), is the key to efficiency.

The most bizarre thing to me however is this argument (emphasis added):

Donating inherently has huge power differentials — the beliefs of donors who are wealthier inevitably exerts greater force on charities than those with fewer funds. But it seems unlikely that having more money would be correlated with having more accurate views about the world.

Perhaps I am misunderstanding, or you intended to make some weaker argument. But as it stands your premise here, which seems important to the entire endeavor, seems overwhelmingly likely to be false. 

There are many factors which are correlated both with having more money and having accurate views about the world, because they help with both: intelligence, education, diligence, emotional control, strong social networks, low levels of chronic stress, low levels of lead poisoning, low levels of childhood disease... And there are direct causal connections between money and accurate views, in both directions, because having accurate views about the world directly helps you make money (recognizing good opportunities for income, avoiding unnecessary costs, etc.) and having money helps you gain more accurate views about the world (access to information, a more well-educated social circle, etc.).

Even absent these general considerations, you can see it just by looking at the major donors we have in EA: they are generally not lottery winners or football players, they tend to be people who succeeded in entrepreneurship or investment, two fields which require accurate views about the world.

Maybe ask how he chooses which issues to focus on?

Good analysis, thanks for writing this up! It does seem that in general our political/regulatory system has little to no sensitivity to the dollar cost of fulfilling requirements and avoiding identifiable but small harms.
