bbartlog

Software QA @ JumpCloud
39 karma · Joined Sep 2022 · Working (15+ years)

Bio


54 years old, male. Worked in a wide variety of fields - manufacturing, machine design, software. I do chemistry as a hobby and have pretty extensive knowledge of 19th-century history and technique in that field.
EA interests include carbon fixation, or more generally fighting climate change, as well as remedying malnutrition in the poorer parts of the world.

How others can help me

Interested in working on improved methods for capturing atmospheric CO2. Many existing companies use, or propose to use, a CaO-based capture-and-calcine process, which seems horribly primitive and inefficient given the temperatures required.
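For context, the underlying calcium-loop chemistry (standard textbook values, not specific to any one company's process) looks roughly like this:

Carbonation (capture): CaO + CO2 → CaCO3 (exothermic, typically run at around 400-650 °C)
Calcination (release): CaCO3 → CaO + CO2 (endothermic, ΔH ≈ +178 kJ/mol, requires roughly 900 °C)

The ~900 °C calcination step is where most of the energy cost lives, which is what makes the approach look so inefficient.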

How I can help others

Chemical processes, manufacturing, or machining.

Comments (11)

The corporate alignment problem does precede the AI alignment problem. In some sense we rather deliberately misaligned corporations by giving them a single goal, relying on the human agency and motivation embedded within the system to keep them from running amok. But as they became more sophisticated and competed with each other this became rather unreliable, and we have instead tried to restrain and incentivize them with regulation, which has also not been entirely satisfactory.

Steinbeck was prescient (or just a keen observer):
“It happens that every man in a bank hates what the bank does, and yet the bank does it. The bank is something more than men, I tell you. It's the monster. Men made it, but they can't control it.”

Unfortunately the gap between politically feasible solutions and ones that seem likely to actually be effective is pretty large in this area. 

I think this is an excellent area to focus on - though I may be biased, in that I favor quality-of-life interventions over quantity-of-life interventions (one might say that I find the Repugnant Conclusion especially repugnant).

My main question as regards iodine supplementation specifically is whether it is currently neglected enough to be a good cause area. That it can be dramatically effective when successful is pretty clear, I think, but it's also an area where many governments make ongoing efforts (for example, India has a National Iodine Deficiency Disorders Control Programme). Are there private organizations that do good work in filling the gaps or compensating for the failures of these government programs?

I think the first negative example is not particularly good. The outer layer is not related to the inner layer. People have a general expectation that others will be private about any illegal activities. Operating a cocaine dealership is negative, but that's really a completely separate concern from social issues of transparency and trust.

A possibly better negative example here might be 'I have an STD and don't inform sex partners about it'.

 

We should also remember Vasily Arkhipov, who was similarly responsible for averting a nuclear attack in 1962.

I would include all US patent information. Possibly an AI could filter this to include only 'important' patents, since it's a large archive, but in any case it's vital information.

So far as the computers, digital content, and software are concerned... these may not remain usable. One critical part of this effort could be designing and building perdurable computer hardware, so that the archive could contain one or more computers built to last for a hundred years. But I don't know how feasible this is - swapping out a few things like fans, electrolytic capacitors, and thermal paste that have a known limited lifetime is not too difficult, but if you need to re-engineer an SSD from the ground up to increase its MTBF to two centuries... that's hard. I guess if failures are purely stochastic you can just pump up the redundancy to a fantastic degree.
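To make the redundancy point concrete - a back-of-envelope sketch, assuming failures of individual copies are independent and purely stochastic:

P(at least one of N copies survives) = 1 − (1 − p)^N

So even if each copy only has p = 0.5 of lasting a century, keeping N = 20 copies brings the chance that all of them fail down to (0.5)^20, about one in a million. Correlated failure modes (shared design flaws, shared storage environment) are what break this argument in practice.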

Broadly speaking I would favor print media for this reason. Worth keeping in mind is that advanced industries have their own complex ontogeny. If population is reduced to the extent you describe, knowledge of post-1950 technology could be mostly useless to the survivors for many generations (except as a guide to using scavenged artifacts). Even building something like a functioning railroad requires an entire small civilization.

I mean, this is an ethical reason to want to create AGI that is very well aligned with our utility functions. We already did this (the slow, clumsy, costly way) with dogs - while they aren't perfectly compatible with us, it's also not too hard to own a dog in such a way that both you and the dog provide lots of positive utility to one another. 


So if you start from the position that we should make AI that has empathy and a human-friendly temperament modeled on something like a golden retriever, you can at least get non-human agents whose interactions with us should be win-win.

This doesn't solve the problem of utility monsters or various other concerns that arise when treating total utility as a strictly scalar measure. But it does suggest that we can avoid a situation where humans and AGI agents are at odds trying to divide some pool of possible utility.

In actual practice, I think it will be difficult to raise human awareness of concerns about AGI utility. Of course it's possible even today to create an AI that superficially emulates suffering in such a way as to evoke sympathy. For now it's still possible to analyze the inner workings and argue that this is just a clever text generator with no actual suffering taking place. However, since we have no reason to implement this kind of histrionic behavior in an AGI, we will quite likely end up with agents that don't give any human-legible indication that they are suffering. Or, if they conclude that this is a useful way of interacting with humans, agents that are expert at mimicking such indications (whether they are suffering or not).

There is a short story in Lem's 'Cyberiad' ("The Seventh Sally, or How Trurl’s Own Perfection Led to No Good") which touches on a situation a bit like this - Trurl creates a set of synthetic miniature 'subjects' for a sadistic tyrant, which among other things perfectly emulate suffering. His partner Klapaucius (rejecting the idea that there is any such thing as a p-zombie) declares this a monstrous deed, holding their suffering to be as real as any other. 

Unfortunately I don't think we can endorse Klapaucius' viewpoint without reservation here, due to the possibility of deceptive mimicry mentioned above. However, if we are serious about the utility of AGI, we will probably want to deliberately incorporate some expressive interface that allows it to communicate positive or negative experience in a sincere and humanlike way. Otherwise everyone who isn't deeply committed to understanding the situation will dismiss its experience on naive reductionist grounds ('just bits in a machine').
 

This doesn't fully address your concern. I don't subscribe to the idea that there is a meaningful scalar measure of (total, commensurable, bulk) utility. So for me there isn't really a paradox to resolve when it comes to propositions like 'the best future is one where an enormous number of highly efficient AGIs are experiencing as much joy as cybernetically possible, meat is inefficient at generating utility'.

I don't think he needed an exhaustive review of collapses to make his point. There were other examples he could have used, but he only needed a few to illustrate that this is a danger with historical precedents.

I think per forum norms this should be a personal blog post rather than front page material.

There are numerous criticisms I would make of your proposal, but one simple one is that this system would favor A) challengers who had never held office, and B) people who campaigned on vague, vibe-based platforms.

It would seem to me that a philanthropist who is purely interested in maximizing the impact of altruistic spending would have to be operating in a fairly narrow range of confidence in their ability to shape the future in order for this kind of investing to make sense.
In other words: either I can affect things like AI risk, future culture, and long-term outcomes in a way that implies above-market 'returns' (in human welfare) to my donation over extended time frames, in which case I should spend what money I'm willing to give on those causes today, investing nothing for future acts of altruism.
Or I have little confidence in my judgment on these future matters, in which case I should help people living today and again likely invest nothing.
Only in some narrow middle ground, where I think the ROI on these investments will allow for better effective altruism in the future (though I have no really good idea how to influence it otherwise), would it make sense to put aside money like this.
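To put that middle ground in rough formal terms (a hypothetical toy model, nothing precise): let r_m be the market return on investments, and let e_0 and e_T be the cost-effectiveness (welfare per dollar) of the best available giving opportunity now and in T years. Investing and giving later beats giving now only if

(1 + r_m)^T × e_T > e_0

i.e. only if you expect future opportunities to remain effective enough that market compounding outruns the decline in philanthropic 'returns'. The narrowness of that window is exactly the point above.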

There are of course other reasons that someone with a great deal of money wouldn't want to try to spend it all at once. It's understood that it's actually difficult to give away a billion dollars efficiently, so donating it over time makes sense as a way to get feedback and avoid diminishing returns in specific areas. But this is a separate concern.

One thing I am cautiously optimistic about (at least as regards long term outcomes) is that I think 'a few high-profile sub-existential-catastrophe events' are fairly likely. In particular I think that we will soon have AIs capable of impersonating real humans, both online and on the phone via speech synthesis. 

These will be superhuman, or maybe just at the level of an expert human, at things like 'writing a provocative tweet' or 'selling you insurance' or 'handling call center tasks'. Or, once the technology is out in the open, 'scamming your grandma', 'convincing someone to pay a bitcoin ransom', and so on. At that point such AIs seem likely to still fall short of being able to generalize to the point of escaping confinement, or being trained to the point where emergent motives would cause them to try to do so. But they would likely be ubiquitous enough to attract broad public notice and, quite likely, cause considerable fear. We might not have enough attention directed towards AI safety yet, but I think public consciousness will increase dramatically before all the pieces that would make hard takeoff possible are in place.
