Hi, I'm thinking about a possibly new approach to AI safety. Call it AI monitoring and safe shutdown.
Safe shutdown riffs on the idea of the big red button, but adapts it for use in simpler systems. If there were a big red button, who gets to press it and how? This involves talking to law enforcement, legal and policy people. Big red buttons might be useful for non-learning systems: large autonomous drones and self-driving cars are two systems that might suffer from software failings and need to be shut down safely if possible (or precipitously, if the risks from a hard shutdown are less than those of continued operation).
The monitoring side of things asks what kind of registration and monitoring we should have for AIs and autonomous systems. Building on work on aircraft monitoring, what would the needs around autonomous systems be?
Is this a neglected/valuable cause area? If so, I'm at an early stage and could use other people to help out.
I found this report on adaptation, which suggests that adaptation with some forethought will be better than waiting for problems to get worse. It talks about things other than crops too. The headlines:
I've been thinking for a while that civilisational collapse scenarios undermine some of the common assumptions behind the expected value of movement building or saving for effective altruism. This has knock-on implications for when things are most hingey.
That said, I personally would be quite surprised if worldwide crop yields actually ended up decreasing by 10-30%. (Not an informed opinion, just vague intuitions about econ).
I hope they won't either, if we manage to develop the changes we need before we need them. Economics isn't magic, though.
But I wanted to point out that there will probably be costs to preventing the deaths from food shortages via adaptation. Are they bigger or smaller than the costs of mitigation by reducing CO2 output, or of geoengineering?
This case hasn't been made either way to my knowledge, and making it could help allocate resources effectively.
Are there any states that have committed to doing geoengineering, or even experimenting with geoengineering, if mitigation fails?
Having some publicly stated sufficient strategy would convince me that this was not a neglected area.
I'm expecting the richer nations to adapt more easily, so I'm expecting a swing away from food production in the less rich nations: poorer farmers would have a harder time adapting as their farms get less productive (and they would have less food to sell). Farmers with newly unproductive land would also struggle to buy food on the open market.
I'd be happy to be pointed to the people thinking about this and planning to fund solving this problem. Who will fund teaching subsistence rice farmers (of all nationalities) how to farm different crops they are not used to, and provide tools and processing equipment for the new crops? Most people interested in climate change I have met are still in the hopeful mitigation phase, and if they are thinking about adaptation it is about their own localities.
This might not be a pressing problem now, but it could be worth having charities in the space learning how to do this well (or how to help with migration if land becomes uninhabitable).
https://blogs.ei.columbia.edu/2018/07/25/climate-change-food-agriculture/ suggests that some rice-producing regions might have problems soon.
On 1): I can't read the full text of the Impact Lab report, but it seems they model only the link between heat and mortality, not the impact of heat on crop production and the knock-on health problems that would cause. E.g. http://dels.nas.edu/resources/static-assets/materials-based-on-reports/booklets/warming_world_final.pdf suggests that each degree of warming would reduce current crop yields by 5-15%. So 4 degrees of warming (the baseline according to https://climateactiontracker.org/global/temperatures/ ) would mean a 20-60% reduction in the world food supply.
If governments stick to their policies (which they have been notoriously bad at so far), the reduction would only be 10-30%. I'd expect even a 10% decrease to have massive knock-on effects on the nutrition and mortality of the world. I expect that is not included in the Impact Lab report because it is very hard for a single paper to encompass the entire scope of the climate crisis.
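As a quick sanity check on the arithmetic (my own illustration, using the 5-15% per degree figure cited above; whether per-degree losses add linearly or compound is an assumption of the sketch, not something the reports state):

```python
# Back-of-envelope: crop yield loss under warming, using the
# 5-15% loss per degree C figure cited above. Whether per-degree
# losses add linearly or compound is an assumption of this sketch.

def additive_loss(per_degree, degrees):
    """Simple linear addition of per-degree losses."""
    return per_degree * degrees

def compounded_loss(per_degree, degrees):
    """Each degree multiplies the remaining yield by (1 - per_degree)."""
    return 1 - (1 - per_degree) ** degrees

for per_degree in (0.05, 0.15):
    for degrees in (2, 4):
        add = additive_loss(per_degree, degrees)
        comp = compounded_loss(per_degree, degrees)
        print(f"{per_degree:.0%}/degree, {degrees} C warming: "
              f"additive {add:.0%}, compounded {comp:.0%}")
```

Compounding gives a slightly gentler worst case (about 48% rather than 60% at 15% per degree and 4 degrees), but either way the range is large enough that the knock-on effects argument goes through.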
Of course there could be a lot of changes to how and where we grow crops to avoid these problems. But making sure we manage this transition well, so that people in the global south can adopt crops appropriate to whatever their climate becomes, seems like something that could use detailed analysis. As far as I can tell it is neglected, and there may be simple things we can do to help. It is not mainstream climate change mitigation though, so it might fit your bill?
As currently defined, longtermists have two possible choices.
There are however other actions that may be more beneficial.
Let us look again at the definition of influential:
a time ti is more influential (from a longtermist perspective) than a time tj iff you would prefer to give an additional unit of resources, that has to be spent doing direct work (rather than investment), to a longtermist altruist living at ti rather than to a longtermist altruist living at tj.
While direct work is not formally defined, here it seems to refer mainly to near-term existential risk mitigation.
The most obvious implication, however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building.
What happens if the answer is neither option? What other levers do we have on the future? One is that we might be able to take actions that change the expected rate of return. Perhaps the expected rate of return on investments is very bad, but there are actions you can take to raise it to more normal levels. Or there is low-hanging fruit for vastly increasing the expected long-term rate of return on investments.
So let us introduce another type of influential (and another version of hingeness): influential-i, where a time is influential-i insofar as the most influential actions available at it are ones that attempt to affect the rate of return on investment.
a time ti is more influential-i (from a longtermist perspective) than a time tj iff you would prefer to give an additional unit of resources, that has to be spent doing direct work to alter investment rates (rather than normal investment itself or X-risk reduction), to a longtermist altruist living at ti rather than to a longtermist altruist living at tj.
So what would increase the rate of return on investment? New energy sources with a high energy return on energy invested could. For example, if you helped invent practical nuclear fusion, you would increase the amount of clean energy available to civilisation, giving future altruists more resources with which to solve problems.
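To see why the rate of return is such a powerful lever over long horizons, here is a toy compounding calculation (my own illustration; the rates and time frames are made up for the example):

```python
# Toy illustration (mine, not from the post): over long horizons,
# a small change in the rate of return can dominate a one-off
# addition of resources, because returns compound.

def future_value(principal, rate, years):
    """Value of `principal` compounded annually at `rate` for `years`."""
    return principal * (1 + rate) ** years

for years in (200, 1000):
    base = future_value(1.0, 0.020, years)     # 1 unit of resources at 2.0%
    boosted = future_value(1.0, 0.021, years)  # same unit, rate nudged to 2.1%
    doubled = future_value(2.0, 0.020, years)  # twice the resources at 2.0%
    print(f"{years} years: base {base:.3g}, "
          f"rate +0.1pp {boosted:.3g}, doubled principal {doubled:.3g}")
```

With these made-up numbers, nudging the rate up by 0.1 percentage points overtakes doubling your starting resources after roughly 700 years, which is why rate-changing actions matter on the time frames longtermists care about.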
Actions that stave off civilisational collapse would avoid vast decreases in the rate of return. Civilisational collapse should be a lot higher on the radar of long-term altruists than it currently is.
The long-termist community would do well to look at these options when thinking about the time frame of hundreds of years.
Let's say they only mail you as much protein as one full human genome.
This doesn't make sense. Do you mean proteome? There is no 1-to-1 mapping between genome and proteome. There are at least 20,000 different proteins in the human proteome, so 20,000 orders in a day might be quite noticeable (and tie up the expensive protein-producing machines). I don't know the size of the market, so I may be off about that.
I will be impressed if the AI manages to make a biological nanotech that is not immediately eaten up or accidentally sabotaged by the soup of hostile nanotech that we swim in all the time.
There is a lot of uranium in the sea only because there is a lot of sea. From the pages I have found, there are only about 3 micrograms of uranium per litre, and 0.72 percent of that is U-235. To get the U-235 for a single bomb (50 kg of material enriched to 80%, i.e. 40 kg of U-235) you would need to process roughly 1.9 km³ of sea water, about 1.9 × 10^12 litres.
This would be pretty noticeable if done in a short time scale (you might also have trouble with diluting the sea locally if you couldn't wait for diffusion to even out the concentrations globally).
To build 1 million nukes you would need roughly 1.9 million km³ of sea water, about half the volume of the Mediterranean (3.75 million km³).
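The arithmetic behind those figures, spelled out (using the numbers above; I'm ignoring enrichment losses and assuming perfect extraction, so this is a lower bound on the water needed):

```python
# Sanity-check the seawater-uranium numbers above.
# Assumptions: 3 micrograms of natural uranium per litre of seawater,
# 0.72% of which is U-235; one bomb = 50 kg of material enriched to
# 80% U-235; enrichment losses and extraction efficiency ignored.

U_PER_LITRE_G = 3e-6      # grams of natural uranium per litre
U235_FRACTION = 0.0072    # 0.72% of natural uranium is U-235
LITRES_PER_KM3 = 1e12     # 1 km^3 = 10^9 m^3 = 10^12 litres

u235_per_bomb_g = 50e3 * 0.80  # 40 kg of U-235 per bomb
u235_per_litre_g = U_PER_LITRE_G * U235_FRACTION

litres_per_bomb = u235_per_bomb_g / u235_per_litre_g
km3_per_bomb = litres_per_bomb / LITRES_PER_KM3
km3_per_million_bombs = km3_per_bomb * 1e6

print(f"Water per bomb: {km3_per_bomb:.2f} km^3")
print(f"Water for a million bombs: {km3_per_million_bombs:.2e} km^3")
```

About 1.9 km³ per bomb and 1.9 million km³ for a million bombs, against the Mediterranean's roughly 3.75 million km³, so still an enormous and very noticeable processing effort.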
There might be a further consideration: people might not start or fund impactful startups if there wasn't a good chance of attracting later investment. The initial investors (if not impact-oriented) might still be counting on impact-oriented people to buy out their investment. So while each individual impact investor is not doing much in isolation, collectively they are creating a market for things that might not get funded otherwise. How you account for that, I'm not sure.