Good point and good fact.
My sense, though, is that if you scratch most "expand the moral circle" statements you find a bit of implicit moral realism. There's generally an unspoken "...to be closer to its truly appropriate extent", along with an unspoken assumption that there will be a sensible basis for determining that extent. Maybe some people do mean the statement at face value, though. Could make for an interesting survey.
Love to see these reports!
I have two suggestions/requests for 'crosstabs' on this info (which is naturally organised by evaluator, because that's what the project is!):
Is anyone keeping tabs on where AI is actually being deployed in the wild? I feel like I mostly see big-picture stuff (though that could be a me problem), but there seems to be a proliferation of small actors doing weird things. Twitter / X seems to have a lot more AI content, and apparently YouTube comments do now as well (per a conversation I stumbled on while watching YouTube recreationally - language & content warnings: https://youtu.be/p068t9uc2pk?si=orES1UIoq5qTV5TH&t=2240)
I think this is a really compelling addition to EA portfolio theory. Two half-formed thoughts:
Does portfolio theory apply better at the individual level than the community level? Treating your own contributions (giving + career) as a portfolio makes a lot of sense if you're explicitly trying to hedge personal epistemic risk. I think this is a slightly different angle on one of Jeff's points: is this "k-level 2" aggregate portfolio a 'better' aggregation of everyone's information than the "k-level 1" portfolio that emerges from everyone individually optimising their own portfolios? You could probably look at this analytically... might put that on the to-do list.
At some point what matters is specific projects...? When I think about 'underfunded', I'm normally thinking there are good projects with high expected ROI that aren't being done, relative to some other cause area where the marginal project has a lower ROI. Maybe my point is something like - underfunding should be identified and accounted for at a different stage of the donation process, rather than when looking at the overall % breakdown of the portfolio. Maybe we're more private equity than index fund.
I wonder if there might be particularly strong regional effects to this - maybe Goa had quite a large dog population, quite a lot of rabies, or quite dense dog/human populations (affecting the incidence of rabies, bites, and transmission).
I think there could be room for further research to identify whether there are better-looking (sub-country) regions - though as Helene_K found, data availability would be a challenge.
Hey Alexander - thanks for the write-up! I found it useful as a local, and it seems valuable to be sharing/coordinating on this globally.
One thing that occurred to me would be to zoom in on the sectors of the economy that are exposed to AI. In Australia, exposure might be relatively more concentrated than elsewhere - specifically in education, which is one of our biggest exports (though I think it gets accounted for as domestic activity).
That could mean:
Can you add / are you comfortable adding anything on who "us" is and which orgs or what kinds of orgs are hesitant? Is your sense this is universal, or more localised (geographically, politically, cause area...)?