When I started thinking about these issues last year, my thinking was pretty similar to what you said.
I thought about it and concluded that for the biggest risks, investors may have a selfish incentive to model and manage the impacts that their companies have on the wider world -- if only because the wider world includes the rest of their own portfolio!
It turns out I was not the first to think of this concept; its name is Universal Ownership. (I've described it on the forum here.)
Universal Ownership doesn't go far enough, in my view, but it's a step forward compared to where we are today, and it gives people an incentive to care about social impacts (or social "profits").
As I alluded to in a comment to KHorton's related post, I believe SoGive could grow to spend something like this much money.
SoGive's core idea is to provide EA-style analysis, but covering a much more comprehensive range of charities than those currently assessed by EA charity evaluators.
As mentioned there, benefits of this include:
Full disclosure: I founded SoGive.
This short comment is not sufficient to make the case for SoGive, so I should probably write up something more substantial.
I believe that in time EA research/analysis orgs both could and should spend > $100m pa.
There are many non-EA orgs whose staff largely sit at a desk, and who spend >$100m, and I believe an EA org could too.
Let's consider one example. Standard & Poor's (S&P) spent c.$3.8bn in 2020 (source: 2020 accounts). They produce ratings on companies, governments, etc. These ratings help answer the question: "if I lend this company money, will I get my money back?" Most major companies have a rating with S&P. (S&P also does other things, like indices; however, I'm sure the ratings business alone spends >$100m p.a.)
S&P for charities?
Currently, very few analytical orgs in the EA space aim for coverage of charities as broad as S&P's coverage of companies, governments, etc.
However, an org which did this would have significant benefits.
I find these arguments convincing enough that I founded an organisation (SoGive) to implement them.
At the margin, GiveWell is likely more cost-effective; however, I'd point to Ben's comments about cost-effectiveness x scale in a separate comment.
S&P for companies' impact?
Human activity, as measured by GDP (for all that measure's flaws), splits roughly into 60%(ish) for-profit companies, 30%(ish) governments, and a little from other things (like charities).
(Note: I haven't thought very carefully about whether "S&P for companies' impact" really is a high-impact project)
An EA-specific agency would have to be low-bono, offering major discounts to EA orgs - otherwise it would be indistinguishable from the countless existing for-profit agencies.
I think this needs justification. I'm currently aching for a tech agency I would trust, and I'm happy to pay market rates to get a decent agency to implement some EA projects.
If you told me you had such an agency, and it was peopled with EAs, that would be even better!
If you told me that you needed to pay your developers decent salaries, I could cope with paying a small premium.
Not sure how good the Robert Miles channel is for mums (mine might not be particularly interested in his channel!), but for communicating about AI risk, Robert Miles is (generally) good, and I second this recommendation.
Just a quick comment to say that SoGive would be well positioned to be another consultancy providing services like Rethink.
We have collaborated with Rethink before (see this research) and are in moderately frequent informal contact with them.
We have c.10 analysts who are a mixture of volunteers and staff -- mostly volunteers, as the organisation is funded solely by me, and there is a limit to what I can afford.
I'm open to the idea of us doing more of this sort of work, although it would need a discussion before we commit to anything, as we already have a separate strategic focus in mind.
Thanks for this, good question!
I agree with your point that investors have some blind spots, in particular that some areas of finance are not good at incorporating long term considerations.
So I think you're right, the ESG concept probably could achieve some impact by helping address that sort of blind spot.
I probably should have said something more like "To judge whether I, as someone working in ESG investing, am having material impact, we need to see if I'm actually having an influence on scenarios where there is a tension/trade-off." This is because ESG-related work is already working to address that blind spot.
Sorry I didn't spot your comment earlier. Yes, more than happy for this to be shared more widely. Feel free to use this link if you wish: https://effectiveesg.com/2021/05/24/esg-investing-needs-thoughtful-trade-offs/
Thanks very much for pointing out that error -- now corrected. I've looked at the answers which have been recorded, and they include one making points similar to your comment here, so I think it has been captured. Thank you very much!
I have now expanded the acronym when it's used in the first sentence.