
nathanhb

9 karma · Joined May 2021

Comments (10)

Cool, thanks. Sorry for sounding a bit hostile, I'm just really freaked out by my strongly held inside view that we have less than 10 years until some really critical tipping point stuff happens. I'm trying to be reasonable and rational about this, but sometimes I react emotionally to comments that seem to be arguing for a 'things will stay status quo for a good while, don't worry about the short term' view.

Calling my strongly held inside view 'fringe' doesn't carry much weight as an argument for me. Do you have actual evidence for your longer-than-10-years timelines view?

I hold the view that important scientific advancements tend to come disproportionately from the very smartest and most thoughtful people. My hope would be that students smart enough to be meaningfully helpful on the AGI alignment problem would be able to think through and form correct inside views on this.

If we've got maybe 2-3 years left before AGI, then 2 years before starting is indeed a large percentage of that remaining time. Even if we have more like 5-10... it may be better to just start trying to work directly on the problem as best you can than to let yourself get distracted by acquiring general background knowledge.

So here's a funny twist. I personally have been a longtermist since independently coming to the conclusion, around 30 years ago, that it was the correct way to conceptualize ethics. I realized that I cared about future people roughly as much as I cared about current people far away. After some thought, I settled on global health/poverty/rule-of-law as one of my major cause areas because I believe that bringing current people out of bad situations is good not only for them but also for the future people who will descend from them or be neighbors of their descendants, etc. Also, because society as a whole sees these people suffering, thinks and talks about them, and adjusts its ethical decision-making accordingly. I think the common knowledge that we are part of a worldwide society which allows children to starve or suffer from cheaply curable diseases negatively influences our perception of how good our society COULD be. My other cause areas, like existential risk mitigation and planning for sustainable exponential growth, are also important, but.... Suppose we succeed at those two and fail at the first. I don't want a galaxy-spanning civilization which allows a substantial portion of its subjects to suffer hugely from preventable problems, the way we currently allow our fellow humans to suffer. That wouldn't be worse than no galaxy-spanning civilization at all, but it would be a lot less good than one which takes reasonable care of its members.

Sounds like good work. Trying to get the right information to the right people who can pass that on to decision makers seems useful. 

 

Two things I would like you to consider having some cached thoughts on, ready to offer to the right people:

1. Food security under adverse circumstances (e.g. highlights from the work of AllFed, such as seaweed farming).

2. Global cooling work (e.g. Silver Lining, doubling up with reef protection by spraying seawater over reefs), and the combined carbon-sequestration and soil-quality benefits of biochar as an agricultural practice. Biochar is neat because it's advantageous to the individual farmer as well as good for the world, so the local incentive structure is aligned.

Ok, these are all pretty simplified, and I think you'd need to understand a bit more background to move the conversation on from these points, but not bad. Except for the 'why not merge with AI' response. That one responds as if 'merge' meant physically merge, which is not what that argument means. The argument is about merging minds with the AI: linking brains and computers together in some fashion (e.g. Neuralink) such that there is high-bandwidth information flow, and thus being able to build an AGI system which contains a human mind.

Here's a better argument against that: human values do not permeate the entire human brain; it is possible to have a human without a sense of morality. You cannot guarantee that a system of minds including a human mind (whether running on biological tissue or computer hardware) would in fact be aligned just because the human portion was aligned pre-merge. It is a strange and novel enough entity that it needs study, and potentially alignment, just like any other novel AGI prototype.

I know this is the conclusion of a report, so it's too late to suggest an addition now, but I think that in the future it would be very much worth looking into Polis, the political dispute-resolution framework described in a recent 80,000 Hours podcast episode interviewing Audrey Tang.

I've been a big fan of your work for many years now, and I'm really glad you're taking a stab at explaining Longtermism! I remember being in school many years ago, before the EA movement was a thing, and trying to explain my intuitions around Longtermism to others and finding it difficult to communicate. I feel like we really need some introductory material that builds these intuitions at something like a 4th-grade reading level, so it's approachable by a wider audience and by kids.

Hey, so, on a similar but slightly different topic: when I was in grad school I focused on substantial cognitive enhancement of adults through genetic modification, because I thought it was potentially a very valuable cause area, especially if it could be used to selectively enhance technological progress on key issues like AGI alignment and better governance, and of course on better forms of cognitive enhancement, which could then spiral positively. Eventually, reluctantly, I came to the conclusion that while I am highly confident it is possible, it just isn't feasible to implement (given legal restrictions on experimentation, funding constraints, etc.) before AGI. And if it can't be done before AGI, well... why bother? AGI risks and benefits just utterly overwhelm the issue. I think pharmacological enhancement has much smaller potential payoffs (a few IQ points rather than many), but it does seem sufficiently more tractable in terms of timelines that it's at least relevant to consider.

As a former neuroscientist, I am very much in favor of brain plastination, particularly 'soft' plastination (e.g. the CLARITY technique). I think there's a huge amount of long-term cost savings and safety improvement in making the brain's information stable at room temperature and easily readable multiple times over without destruction. The value of soft plastination is that you can still infuse antibody labels and the like into the tissue and study it non-destructively.

The only reason I am not encouraging people to donate to this as a cause area is that it seems so small in impact compared to x-risk causes. The people who will live in the next couple hundred years are potentially dwarfed by the people of the far future, so... let's just try to get to the far future without being too self-obsessed.

That being said, I do hope to preserve my brain and the brains of my spouse and children and their children, etc. It's just a small consideration, all things considered.
