Agreed. I was very excited, a few years ago, that a friend was able to talk to someone about having the local Tomchei Shabbos offer to pay for job certifications and similar training for people out of work and living off of those types of charity, in order to help them find jobs - as the Rambam says, this is the highest form of charity. So I think that concrete steps like this are absolutely possible, and worth pursuing if and when you can find them.
First, I'm not opposed to others drawing inspiration from my views! Second, every time there is a disaster, I try to remind people that disaster risk mitigation is 3-10x as effective as disaster response. If you really want to give, I'd say you could support work like IPRED rather than the Red Cross response teams. I don't know where specifically to give for that type of work, however, and I'd love for someone to do a deep dive on the most effective disaster risk mitigation opportunities.
So if there is some technology which makes invading easier than defending, or info-sec easier than hacking, it might not change the balance of power much, because each actor needs to do both. If offense and defense are complements instead of substitutes, then the balance between them isn't as important.
This seems reasonable as an explanation of many past data points, but it's not at all reassuring for bioweapons, which are a critical reason to be concerned about the offense-defense balance of future technologies, and one place where there really is a clear asymmetry. So to the extent that the reasoning explaining the past is correct, it seems to point to worrying more about AIxBio, rather than being reassured about it.
This seems reasonable, but I'm always nervous about any program that uses carbon offsets, because there are significant financial incentives to game the system and take credit for reductions rather than actually reducing emissions. It's better than doing nothing, but if you're interested, there's an easy way to set up recurring monthly donations to the fund Vox recommended, here: https://www.givingwhatwecan.org/en-US/charities/founders-pledge-climate-change-fund
This is a super interesting point, and I'm completely unsure what it should imply for what I actually do - especially since returns are uncertain, and prepaying at a discount under possible bankruptcy / extinction risk at an uncertain rate is hard to evaluate. All of which (probably unfortunately) means I'm just going to keep doing the naive thing I've done so far.
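To make the give-now-versus-later tradeoff concrete, here is a toy sketch. All the numbers and the `give_later_value` helper are illustrative assumptions of mine, not anyone's actual model: it grows a donation at some return, discounts it by the chance the money is never used (bankruptcy / extinction), and deflates it by the rate at which doing good gets more expensive.

```python
# Toy model: donate $1,000 now vs. invest it and donate in 10 years.
# All parameter values below are illustrative assumptions, not estimates.

def give_later_value(amount, years, annual_return, annual_risk, cost_growth):
    """Present-equivalent value of investing `amount` and giving after `years`."""
    grown = amount * (1 + annual_return) ** years   # investment growth
    survives = (1 - annual_risk) ** years           # chance the funds are ever used
    deflator = (1 + cost_growth) ** years           # opportunities get pricier
    return grown * survives / deflator

now = 1000.0
later = give_later_value(1000.0, years=10, annual_return=0.05,
                         annual_risk=0.01, cost_growth=0.03)
print(f"give now: {now:.0f}, give later (present-equivalent): {later:.0f}")
```

Small changes to any of the three rates flip which option wins, which is exactly why the uncertainty makes the decision hard.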
That's a really interesting question, but I don't invest my charitable giving, though I do tithe my investment income, once gains are realized. My personal best guess is that in non-extinction scenarios, humanity's wealth increases in the long-term, and opportunities to do good should in general become more expensive, so it's better to put money towards the present.
There's a fair amount of discussion in AI alignment about what outer alignment requires, and how it's not just pursuing the goals of a single person who is supposed to be in control. As a few examples, you might be interested in some of these: https://www.alignmentforum.org/posts/Cty2rSMut483QgBQ2/what-should-ai-owe-to-us-accountable-and-aligned-ai-systems
It was crossposted via the Alignment Forum.
I've found that if a funder or donor asks (and they are known in the community), most funders are happy to privately respond about whether they decided against funding someone, and often why - or at least to say that they think it is not a good idea and they are opposed, rather than just not interested.
I don't necessarily care about the concept of personal identity over time, but I think there's a very strong decision-making foundation for considering uncertainty about future states. In one framing, I buy insurance because in some future states it is very valuable, and in other future states it is not. I am effectively transferring money from one future version of myself to another. That's sticking with a numerical-identity view of myself, but it's critical to consider different futures despite not having a complex view of what makes me "the same person".
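The insurance framing can be made numerical. This is a toy sketch with made-up numbers (the wealth, loss, probability, and premium are all illustrative assumptions): with diminishing marginal utility, paying a premium that is worse than fair in expected dollars can still raise expected utility, precisely because it moves money from good future states to bad ones.

```python
# Toy sketch: insurance as a transfer between future states of myself.
# All numbers are illustrative assumptions.
import math

wealth = 100_000.0
loss = 50_000.0
p_loss = 0.01
premium = 600.0   # more than p_loss * loss = 500, so the insurer profits on average

def u(w):
    """Concave (log) utility: each extra dollar matters less as wealth grows."""
    return math.log(w)

# Uninsured: keep full wealth usually, eat the loss in the bad state.
eu_uninsured = (1 - p_loss) * u(wealth) + p_loss * u(wealth - loss)
# Insured: pay the premium in every state, loss is covered.
eu_insured = u(wealth - premium)

print(eu_insured > eu_uninsured)  # insurance wins despite negative expected value
```

The expected monetary value of the policy is negative, yet expected utility is higher with it, which is the state-transfer point in one line of arithmetic.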
But I think that if you embrace the view you present as obvious for contractualists - where we view future people fundamentally differently than present people, and do not allow consideration of different potential futures - you end up with some very confused notions about how to plan under uncertainty, and can never prioritize any type of investment that pays off primarily in even the intermediate-term future. For example, mitigating emissions for climate change should be ignored: we can do more good for current people by mitigating harms rather than preventing them, so we should emit more and ignore the fact that this will, with certainty, make the future worse, because those future people don't have much of a moral claim. And from a consequentialist viewpoint - which I think is relevant even if we're not accepting it as a guiding moral principle - we'd all be much, much worse off if this sort of reasoning had been embraced in the past.