Journalist and media studies professor; virtual communities consultant; climate change, urban planning, and transit activist
I am looking for ways to promote awareness of the movement here in Canada - particularly in the sparsely populated Atlantic provinces where I am - and to meet and learn from others in the area with similar interests.
Happy to provide constructive feedback on public messaging or academic research in the field.
This does not address one possible use of alternative proteins: feeding them to domesticated carnivorous animals. Obviously many EA folks might prefer that we not eat such animals or keep them as pets, but if we do, it would be better if their food did not have an adverse climate impact or involve additional animal suffering (ideally both!). Alt proteins here would not need to match the taste of the foods they replace, be tasty to humans, or pass strict safety guidelines - they would just need to be minimally acceptable to (and digestible by/safe for) their 'target' animals. I recall that those breeding insects as food (not alt proteins, of course) are targeting this market. Any thoughts? Research? Evidence of success?
The EA movement has no single leader, but communication and recruitment are of course vital to its continuation, so there are mechanisms for senior figures to make their views known. It is not necessary for the movement to "take sides" in particular political battles, but the fact that Musk has funded EA work, is friends with key EA figures, and has taken actions (like the all-out attack on USAID) that run directly counter to mainstream EA thinking suggests to me that EA needs to make its concerns clear.
If a public figure or organization (political or not) is aligned with the EA movement in the public mind (because of donations, common positions or their stated adherence to EA principles) and does things that are not consistent with EA values, the movement needs to condemn those actions.
Framing this as taking a political stand is misleading and misguided. I happen to oppose Musk's politics, but that is not why I urge EA leadership to oppose him - it is the ethical lapses I expect EA to condemn. If a populist left-wing leader in the US scrapped USAID because it was an instrument of American imperialism and the money was needed at home to fund social programs, I'd argue EA should condemn that in a similar manner.
"it does matter that there is one credible environmental org aligned with Democrats (there are also Republican climate orgs, like ClearPath) that pushes for it, it can make the difference between this being entirely dismissed as fossil fuel or Manchin demand to being an option that has support from clearly climate-motivated actors. "... actually, this is just one more reason why what the CATF is doing is retrograde. Supporting and aiding development of CCS for, say, cement making is OK in my book and there is plenty of room for experiments there that are directly applicable to future need. This podcast is good on this point. The danger is that learning how to capture emissions from near end of life coal plants in the US may not tell us all that much that is useful to deploy CCS where it is needed.
This is the kind of thing I would like to see more of. I would not invest myself because all investments seem to be in individual projects - I would want to be able to invest in some fashion in a "basket" of companies and/or projects (ideally through a large, well-known investment company like Vanguard...)
I know your long-run goals are the least "binding", but I would encourage you to be a little more cautious and evidence-based in your approach to growth as an intervention. Economic growth clearly offers benefits overall in developing countries, but it would surely be safer to say your objective should be to study the relationship between economic growth and human development, and to work to understand the circumstances in which aid that enhances economic growth is more effective than alternative forms of aid.
While this is an important question to consider, it is by no means clear that we could reach any short-term consensus about how moral alignment should be implemented. In practical terms, if an AGI is longer-lived and/or more able to thrive and bring benefits to others of its kind than a human is, wouldn't it be moral to value its existence over that of a human being? Yet I think you would struggle to get AI scientists to embed that value choice into their code. Similarly, looking 'down' the scale, in decisions where the lives or wellbeing of humans had to be balanced against those of animals, I am not sure there would be much likelihood of broad agreement on the relative value to attach to each.
I would encourage further research and advocacy on this point but at best this will be a long, long process. And you might not be happy with the eventual outcome!
At the moment there are no established guidelines in this area that I am aware of in the existing, non-AI policy space (though I have not looked hard...), but if AI-related research and discussion did establish such guidelines, they might propagate out into the rest of the policy world and set a precedent.