All of Lee_Sharkey's Comments + Replies

Yes. Thanks. Link has been amended. Author was in fact Luke Muehlhauser, so labeling it 'WEF' is only partially accurate.

I agree that more of both is needed. Both need to be instantiated in actual code, though. And both are useless if researchers don't care to implement them.

I admit I would benefit from some clarification on your point - are you arguing that the article assumes a bug-free AI won't cause AI accidents? Did this impression arise from Amodei et al.'s definition: "unintended and harmful behavior that may emerge from poor design of real-world AI systems"? Poor design of real-world AI systems isn't limited to bugs, but I can see why this might have caused confusion.

0
kbog
7y
I'm not - I'm saying that when you phrase it as accidents then it creates flawed perceptions about the nature and scope of the problem. An accident sounds like a one-time event that a system causes in the course of its performance; AI risk is about systems whose performance itself is fundamentally destructive. Accidents are aberrations from normal system behavior; the core idea of AI risk is that any known specification of system behavior, when followed comprehensively by advanced AI, is not going to work.

I don't think it's an implausible risk, but I also don't think that it's one that should prevent the goal of a better framing.

"AI accidents brings to my mind trying to prevent robots crashing into things. 90% of robotics work could be classed as AI accident prevention because they are always crashing into things.

"It is not just funding confusion that might be a problem. If I'm reading a journal on AI safety or taking a class on AI safety what should I expect? Robot mishaps or the alignment problem? How will we make sure the next generation of people can find the worthwhile papers/courses?"

I take the point. This is a potential outcome, and I see the apprehension, but I think it's ... (read more)

0
WillPearson
7y
I would do some research into how well sciences that have suffered brand dilution do. As far as I understand it, research institutions have high incentives to:

1. Find funding
2. Pump out tractable, digestible papers

See this kind of article for other worries about this kind of thing. You have to frame things with that in mind, and give incentives so that people do the hard stuff and can be recognized for doing the hard stuff. Nanotech is a classic case of a diluted research path; if you have contacts, maybe try to talk to Erik Drexler - he is interested in AI safety, so he might be interested in how AI safety research is framed.

Fair enough, I'm not wedded to 'motivation' (I see animals as having motivation as well, so it's not strictly human). It doesn't seem to cover phototaxis, which seems like the simplest thing we want to worry about. So that is an argument against 'motivation'. I'm worded out at the moment. I'll see if my brain thinks of anything better in a bit.

"permanent loss of control of a hostile AI system" - This seems especially facilitative of the science-fiction interpretation to me.

I agree with the rest.

I think this proposition could do with some refinement. AI safety should be a superset of both AGI safety and narrow-AI safety. Then we don't run into problematic sentences like "AI safety may not help much with AGI Safety", which contradicts how we currently use 'AI safety'.

To address the point on these terms, then:

I don't think AI safety runs the risk of being so attractive that misallocation becomes a big problem. Even if we consider risk of funding misallocation as significant, 'AI risk' seems like a worse term for permitting conflation of w... (read more)

0
WillPearson
7y
I agree it is worth reconsidering the terms! The AGI/narrow-AI distinction is beside the point a bit; I'm happy to drop it. I also have an AI/IA bugbear, so I'm used to not liking how things are talked about.

Part of the trouble is we have lost the marketing war before it even began: every vaguely advanced technology we have currently is marketing itself as AI, and that leaves no space for anything else.

'AI accidents' brings to my mind trying to prevent robots crashing into things. 90% of robotics work could be classed as AI accident prevention because they are always crashing into things.

It is not just funding confusion that might be a problem. If I'm reading a journal on AI safety or taking a class on AI safety, what should I expect? Robot mishaps or the alignment problem? How will we make sure the next generation of people can find the worthwhile papers/courses? 'AI risk' is not perfect, but at least it is not that.

Perhaps we should take a hard left and say that we are looking at studying Artificial Intelligence Motivation? People know that an incorrectly motivated person is bad and that figuring out how to motivate AIs might be important. It covers the alignment problem and the control problem. Most AI doesn't look like it has any form of motivation and is harder to rebrand as such, so it is easier to steer funding to the right people and tell people what research to read.

It doesn't cover my IA gripe, which briefly is: AI makes people think of separate entities with their own goals/moral worth. I think we want to avoid that as much as possible. General intelligence augmentation requires its own motivation work, but one such that the motivation of the human is inherited by the computer that human is augmenting. I think that my best hope is that AGI work might move in that direction.

What do you have in mind? If these problems can't be fixed with better programming, how will they be fixed?

0
kbog
7y
Better decision theory, which is much of what MIRI does, and better guiding philosophy.

Hi Carrick,

Thanks for your thoughts on this. I found this really helpful and I think 80,000 Hours could maybe consider linking to it on the AI policy guide.

Disentanglement research feels like a valid concept, and it's great to see it laid out here. But given how much weight pivots on the idea and how much uncertainty surrounds identifying these skills, disentanglement research seems to be a subject that is itself asking for further disentanglement! Perhaps it could be a trial question for any prospective disentanglers out there.

You've given examples... (read more)

1
Kathy_Forth
7y
For five years, my favorite subject to read about was talent. Unlike developmental psychologists, I did not spend most of my learning time on learning disabilities. I also did a lot of intuition calibration which helps me detect various neurological differences in people. Thus, I have a rare area of knowledge and an unusual skill which may be useful for assisting with figuring out what types of people have a particular kind of potential, what they're like, what's correlated with their talent(s), what they might need, and how to find and identify them. If any fellow EAs can put this to use, feel free to message me.

Hey kbog, Thanks for this. I think this is well argued. If I may, I'd like to pick some holes. I'm not sure if they are sufficient to swing the argument the other way, but I don't think they're trivial either.

I'm going to use 'autonomy in weapons systems' in favour of 'LAWs' for reasons argued here (see Takeaway 1).

As far as I can tell, almost all the considerations you give concern inter-state conflict. The intra-state consequences are not explored, and I think they deserve to be. Fully autonomous weapons systems potentially obviate the need for a mutually benefic... (read more)

3
kbog
7y
Hmm, everything that I mentioned applies to interstate conflict, but it doesn't all apply only to interstate conflict. Intrastate conflicts might be murkier and harder to analyze, and I think they are something to be looked at, but I'm not sure how much it would modify the main points. The assumptions of the expected utility theory of conflict do get invalidated.

Well, firstly, I am of the opinion that most instances of violent resistance against governments in history were unjustified, and that a general reduction in revolutionary violence would do more good than harm. Peaceful resistance is more effective at political change than violent resistance anyway (https://www.psychologytoday.com/blog/sex-murder-and-the-meaning-life/201404/violent-versus-nonviolent-revolutions-which-way-wins). You could argue that governments will become more oppressive and less responsive to peaceful resistance if they have better security against hypothetical revolutions, though I don't have a large expectation for this to happen, at least in the first world.

Second, this doesn't have much to do with autonomous weapons in particular. It applies to all methods by which the government can suppress dissent, all military and police equipment.

Third, lethal force is a small and rare part of suppressing protests and dissent as long as full-fledged rebellion doesn't break out. Modern riot police are equipped with nonlethal weapons; we can expect that any country with the ability to deploy robots would have professional capabilities for riot control and the deployment of nonlethal weapons. And crowd control is based more on psychology and appearances than application of kinetic force.

Finally, even when violent rebellion does break out, nonstate actors such as terrorists and rebels are outgunned anyway. Governments trying to pacify rebellions need to work with the local population, gather intelligence, and assert their legitimacy in the eyes of the populace. Lethal autonomous weapons are ter...

Not sure if it's just me but the board_setup.jpg wouldn't load. I'm not sure why, so I'm not expecting a fix, just FYI. Cards look fun though!

Hey Denkenberger, thanks for your comment. I too tend to weight the future heavily and I think there are some reasons to believe that DPR could have nontrivial benefits with this set of preferences. This was in fact why, as Michael mentions above:

"FWIW, I think the mental health impact of DPR is about 80% of it's value, but when I asked Lee the same question (before telling him my view) I think he said it was about 30% (we were potentially using different moral philosophies)." because I think DPR's effects on the far future could be the source

... (read more)

(note: Lee wrote the pain section but we both did editing, so I'm unsure whether to use 'I' or 'we' here)

I align myself with Michael's comment.

Really enjoying the Oxford Prioritisation Project!

One of my favourite comments from the Anonymous EA comments was the wish that EAs would post "little 5-hour research overviews of the best causes within almost-random cause areas and preliminary bad suggested donation targets." (http://effective-altruism.com/ea/16g/anonymous_comments/)

I expect average OPP posts take over 5 hours, and 5 hours might be an underestimate of the amount of time it would take for a useful overview without prior subject knowledge. But both that comment and the OPP seem to... (read more)

I'd second that - it's not the most wieldy text editor. Not sure how easy it would be to remedy. Going into the HTML gets you what you want in the end, but it's undue effort.

Hi Tom,

Great to hear that it's been suggested. By the looks of it, it may be an area better suited to an Open Philanthropy Project-style approach, being primarily a question of policy and having a sparser evidence base and difficulties in defining impact. I styled my analysis around OPP's approach (with some obvious shortcomings on my part).

I could have done better in the analysis to distinguish between the various types of pain. As you say, they are not trivial distinctions, especially when it comes to treatment with opioids.

I'd be interested to hear your take on the impact of pain control on the nature of medicine and the doctor-patient dynamic. What trends are you concerned about hastening exactly?

0
Elizabeth
7y
I'm concerned in almost the opposite direction: that having the doctor as gatekeeper to something the patient legitimately needs, with the threat of taking it away if the patient doesn't look sick enough, corrupts the doctor-patient relationship and the healing process.
1
tomstocker
7y
The shift from patient as recipient of medicine from a clinician with authority (old-style developed world, and much of e.g. Africa) to patient as consumer. There are good and bad things about this transition. Pain, pain control, and patient perceptions are just under-studied as a nexus. Not a reason not to go ahead, just my biggest worry with this stuff. (I personally don't think risk of death / side effects is much of a worry at all when we're talking about opioid availability in inpatient settings.)

Thanks for those links. It's troubling to hear about some of the promotional techniques described, though I can't say it's surprising.

While US regulations were developed decades before their equivalents in many developing countries, that's not necessarily a mark of quality. In the article I refer to less desirable idiosyncrasies of the US health system (e.g. aspects of the consumer-based model; pain as a fifth vital sign), which have exacerbated the crisis there and will not necessarily exist in some developing countries. Yet, while I hesitate to paint... (read more)

Hi Austen,

Just to clarify, I'm not trying to promote or demote the cause. I'm aware that the cause is of interest to some EAs, and as someone in a good position to inform them, I thought something like this would help them make their own judgement :) I'm just sharing info and trying to be impartial.

Sorry if my comments gave the impression that I thought it was low priority and financially inefficient. To reiterate, I've withheld strong judgement on its priority, and I said I haven't looked into its financial efficiency compared with other interventions... (read more)

Hi Austen,

Thanks for all your interest!

I would have to disagree on your point about corporate influence. Pharma has been implicated heavily in the current opioid epidemic in the States and elsewhere. See the John Oliver exposé for a light introduction (link above). In this area, if anything, there is even more reason to be wary of pharma influence because the product is so addictive when misused. Pharma does do some positive work - I'm aware of a BMS-funded training hospice in Romania (Casa Sperantei). I've only heard good things about it.

You've hit on ... (read more)

1
Austen_Forrester
7y
I'm a little confused as to why you are trying to promote a cause that you think is low priority and financially inefficient. Anyhow, I don't find your anti-corporate stance convincing. Lack of corporate involvement (i.e. to distribute analgesics) is the missing link preventing some countries from having functional palliative care, according to Dr. Foley. It's important to work with all stakeholders for progress in any space. The affordable anti-retroviral movement made progress by working with pharma. The risks of working with industry in the public's interest can be minimized with appropriate controls. Access to properly regulated mobile phone, internet, and financial services has greatly helped the poor and requires corporate involvement. Unfortunately, these are underutilized because SJWs like to maintain their purity and reject corporate involvement. I hope your palliative care movement doesn't suffer from the same self-defeating ideology.

Hi Elizabeth,

I focus on opioid medications for the same reasons that I don't focus on cannabinoids:

  • There isn't strong expert consensus on the effectiveness of cannabinoids. This may change as the search for alternative drugs, particularly for chronic pain, intensifies. While there are some areas that will likely see their use increase (you justly highlight neuropathic pain), my understanding is that current evidence doesn't reliably indicate their effectiveness for severe pain. All this said, there are good reasons to believe they are understudied, both

... (read more)
3
tomstocker
7y
I'm really happy to see this article - I mentioned it to GiveWell a while ago but they weren't interested. For me this hits what I see as the moral priority more than a lot of the other projects and options on the go.

Simple, complex, and neuropathic pains respond differently to different analgesics. Opioids are very effective for simple pain over the short term, e.g. surgeries, broken bones etc. Neuropathic and complex pain don't have good equivalents for pain relief, and patients are stuck with cannabinoids, anti-epileptics and anti-depressants (or ketamine, ironically, if it weren't so restricted in the developed world for its noted impact on organ function). Not a reason not to back access to opioids in the developing world.

The least well explored part IMO is the impact of pain control on the nature of medicine and doctor-patient interaction etc., because the West may have fallen into a trap that it would be a shame to hasten in the developing world.

Thanks Austen!

Yes, it's actually very large. So large, in fact, that it seems to be taken for granted by many people in those countries with low access.

I've withheld strong judgement on whether it should be a cause area that other EAs should act on. I think it could be a particularly attractive area for EAs with certain ethical preferences.

Before funding programmes such as PPSG's, further analyses of the cause and the programme(s) are warranted. I'd be open to suggestions on how to carry those out from anyone with experience, or I'd be happy to discuss the matter with anyone interested in taking it forward themselves.

This link has expired, unfortunately. Is there anything CEA/the forum could do to collate existing translations?

Indeed. And essay competitions are not like examinations; plagiarism only needs to be detected in potential winners, and detection can be achieved by googling fragments of the essays.