Ok, and any advice for reaching out to trusted-but-less-prestigious experts? It seems unlikely that reaching out to e.g. Kevin Esvelt will generate a response!
Great post, I really appreciate an in-depth review of research on reducing sleep need.
I wrote some arguments for why reducing sleep is important here:
https://harsimony.wordpress.com/2021/02/05/why-sleep/
I also submitted a cause exploration application:
https://harsimony.wordpress.com/2022/07/14/cause-exploration-prize-application/
Your post includes substantially more research than mine, and I would encourage you to reformat it and submit it to OpenPhil's Cause Exploration Prize. I'm happy to help you with edits or to combine our efforts!
This kind of thing could be made more sophisticated by making fines proportional to the harm done, requiring more collateral for riskier projects, or setting up a system to short sell different projects. But simpler seems better, at least initially.
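To make the proportional version concrete, here is a minimal sketch (the function names, schedule, and numbers are all invented for illustration, not part of the original proposal):

```python
# Toy collateral/fine schedule; every name and number here is a placeholder.

def required_collateral(expected_harm: float, risk: float, base: float = 100.0) -> float:
    """Collateral posted up front, scaled by the project's estimated downside.

    expected_harm: estimated damage (in dollars) if the project goes badly.
    risk: probability in [0, 1] that the project turns out net-negative.
    """
    return base + risk * expected_harm

def fine(realized_harm: float, rate: float = 1.0) -> float:
    """Fine charged after the fact, proportional to the harm actually done."""
    return rate * realized_harm

# Riskier projects must lock up more collateral:
print(required_collateral(expected_harm=50_000, risk=0.10))  # 5100.0
print(required_collateral(expected_harm=50_000, risk=0.50))  # 25100.0
```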
Have you thought about whether it could work with a more free market, and not necessarily knowing all of the funders in advance?
Yeah, that's a harder case. Some ideas:
People undertaking projects could still post collateral on their own (or pre-commit to accepting a fine under certain conditions). This kind of thing could be made more sophisticated by making fines proportional to the harm done.
I was thinking of this. Small funders could then potentially buy insurance from large funders, allowing them to fund projects they deem net positive even when there's a small risk of a fine that would be too costly for them.
I proposed a simple solution to the problem: require people undertaking a project to post collateral up front, forfeited if the project turns out to be harmful.
This eliminates the "no downside" problem of retroactive funding and makes some net-negative projects unprofitable.
The amount of collateral can be chosen adaptively. Start with a small amount and ...
Crypto's inability to enforce debts or enact substantial punishments beyond slashing stakes is a huge limitation, and I would like it if we didn't have to swallow that (i.e., if we could just operate in the real world, with non-anonymous impact traders who can be held accountable for more assets than they'd be willing to lock in a contract).
Given enough of that, we would be able to implement this by just having an impact cert that's implicated in a catastrophe turn into debt/punishment, and we'd be able to make that disincentive a lot more proportional to the s...
Related: requiring some kind of insurance that pays out when a certificate becomes net-negative.
Suppose we somehow have accurate positive and negative valuations of certificates. We could have insurers sell put options on certificates and require them to maintain a portfolio with positive overall impact. (So an insurer needs to buy certificates of positive impact to offset the negative impact they've taken on.)
Ultimately what's at stake for the insurer is probably some collateral they've put down, so it's a similar proposal.
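As a rough sketch of how that portfolio constraint might be checked (the data structures and numbers here are hypothetical, not a real system):

```python
from dataclasses import dataclass

@dataclass
class Holding:
    name: str
    impact: float  # signed impact valuation; negative for downside exposure

def portfolio_allowed(holdings: list[Holding]) -> bool:
    """Insurer rule: negative exposure (e.g. puts sold on risky certificates)
    must be offset by positive-impact certificates the insurer holds."""
    return sum(h.impact for h in holdings) >= 0

book = [
    Holding("put sold on project A", -40.0),    # insurance liability taken on
    Holding("certificate of project B", 55.0),  # positive impact bought as offset
]
print(portfolio_allowed(book))  # True: net book impact is +15
```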
I make a slightly different anti-immortality case here:
https://harsimony.wordpress.com/2020/11/27/is-immortality-ethical/
Summary: At a steady state of population, extended lifespan means taking resources away from other potential people. Technology for extended life may not be ethical in this case. Because we are not in steady state, this does not argue against working on life extension technology today.
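The underlying arithmetic is simple. Assuming, purely for illustration, a fixed budget of person-years that resources can support:

```python
# Toy steady-state budget: made-up numbers, fixed total person-years.
TOTAL_PERSON_YEARS = 1_000_000

for lifespan in (80, 160, 800):
    print(lifespan, TOTAL_PERSON_YEARS // lifespan)
# 80 -> 12500 people, 160 -> 6250, 800 -> 1250:
# holding resources fixed, longer lives mean fewer distinct people get to exist.
```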
One reason people make this claim is that many models of economic growth depend on population growth. Like you noted, there are lots of other ways to grow the economy by making each individual more productive (lower poverty, more education, automating tasks, more focus on research, etc.).
But crucially, all of these measures have diminishing returns. Let's say that in the future everyone on earth has a PhD, is highly productive, and works in an important research field. In this case, the only way to continue growing the economy is through population growth, since every other lever will have been exhausted.
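One standard way to formalize this (my gloss, not something the original post invokes) is the semi-endogenous growth model of Jones (1995), in which the stock of ideas $A$ is produced by the research population $L_A$:

$$\dot{A} = \delta L_A^{\lambda} A^{\phi}, \qquad \phi < 1$$

Because $\phi < 1$ imposes diminishing returns on the existing idea stock, sustained growth in $A$ (and hence in per-capita income) ultimately requires sustained growth in $L_A$.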
Thanks for writing this. Great to see people encouraging a sustainable approach to EA!
"I want to tell you that taking care of yourself is what's best for impact. But is it?"
"I claim that this is true:"
These are all true, but (as Julia alludes to) not necessarily enough to establish that the conclusion we really want to believe is the correct one.
(Of course, we don't live in the most inconvenient possible world, so wanting to believe a conclusion is only some evidence against its veracity, not decisive evidence.)
I think another possible route around gambling restrictions to prediction markets is to ensure all proceeds go to charity, but the winners get to choose which charity to donate to. I wrote about this more here:
https://forum.effectivealtruism.org/posts/d43f6HCWawNSazZqb/charity-prediction-markets
I have noticed that few people hold the view that we can readily reduce AI risk. Either they are very pessimistic (they see no viable solutions, so reducing risk is hard) or they are optimistic (they assume AI will be aligned by default, so trying to improve the situation is superfluous).
Either way, this would argue against alignment research, since alignment work would not produce much change.
Strategically, it's best to assume that alignment work does reduce AI risk: doing too much alignment work is far less costly than doing too little and causing a catastrophe.
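A toy expected-value comparison makes the asymmetry explicit (all of these numbers are invented):

```python
# Made-up payoffs for the "do alignment work vs. skip it" decision.
p_matters = 0.5        # chance alignment work actually changes the outcome
cost_of_work = -1      # cost of doing "too much" alignment work
catastrophe = -1000    # cost of doing too little when it mattered

ev_do_work = cost_of_work          # pay the cost whether or not it mattered
ev_skip = p_matters * catastrophe  # fine if it didn't matter, disaster if it did

print(ev_do_work, ev_skip)  # -1 vs -500.0: the downside is wildly asymmetric
```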
Though I am not super familiar with the research, it seems that in general more indirect democracy functions better, because voters have little incentive to cast informed votes, whereas representatives are incentivized to make informed decisions on voters' behalf.
I think the book 10% Less Democracy can point you to relevant research on this topic. It was discussed briefly on MR here.
You may also want to check out Caplan's The Myth of the Rational Voter for research along similar lines.
Great post!
To reiterate what AppliedDivinityStudies said, I would love to hear more about proposed solutions to this problem. For example, what do you think of this paper on preventing supervolcanic eruptions?
Interventions that may prevent or mollify supervolcanic eruptions
Of course, EA funds can do all of these things, and I appreciate the work they are doing.
I think it is important to be explicit about the structure of EA funds, meta-charities, and charitable foundations: they typically involve pooling money from many donors and putting funding decisions in the hands of a few people. This is not a criticism! It makes a lot of sense to turn these decisions over to knowledgeable, committed specialists in the EA community. This approach likely improves the impact of people's donations over the counterfactual where people give ...
I agree that the EA funds (and meta-charities like GiveWell) are great opportunities to give and can help balance the flow of donations going to different charities. But I don't think these funds have entirely solved the collective action problem in charitable giving. Rather, they aggregate money from many donors and turn funding decisions over to a handful of experts. These experts are doing great work, and I really respect them, but it doesn't hurt to consider how we might do things even better!
If we really did have a system for small donors to coo...
Which of your writings (including things like blog posts) do you consider most important for making the world a better place? Assuming many people agreed to deeply consider your arguments on one topic, what would you have them read?
Wonderful idea, it looks great so far.
I appreciate that the list of charities one can donate to is relatively restricted, since this prevents people from publicly donating to highly political charities for signalling purposes.
I also like that there is a dashboard showing how your donations are being spent.
One thing I find a little strange is the "lives saved" total (whereas the "CO2 Reduced" total seems perfectly normal to me). I don't have a good reason for this; it's just a personal feeling. Perhaps instead show the total spent or the fraction spent on different cause areas, rather than asserting the overall impact of the donations?
Thanks for posting this, this seems like valuable work.
I'm particularly interested in using MLOSS to intentionally shape AI development. For example, could we identify key areas where releasing particular MLOSS can increase safety or extend the time to AGI?
Finding ways to guide AI development towards narrow and simple AI models can extend AI timelines, which is complementary to safety work:
https://www.lesswrong.com/posts/BEWdwySAgKgsyBzbC/satisf-ai-a-route-to-reducing-risks-from-ai
In your opinion, what traits of a particular piece of MLOSS determine whether it increases or decreases risk?