I agree that more of both is needed. Both need to be instantiated in actual code, though. And both are useless if researchers don't care to implement them.
I admit I would benefit from some clarification on your point - are you arguing that the article assumes a bug-free AI won't cause AI accidents? Did this arise from Amodei et al.'s definition: "unintended and harmful behavior that may emerge from poor design of real-world AI systems"? Poor design of real-world AI systems isn't limited to bugs, but I can see why this might have caused confusion.
I don't think it's an implausible risk, but I also don't think that it's one that should prevent the goal of a better framing.
"AI accidents" brings to mind trying to prevent robots from crashing into things. 90% of robotics work could be classed as AI accident prevention, because robots are always crashing into things.
It is not just funding confusion that might be a problem. If I'm reading a journal on AI safety or taking a class on AI safety, what should I expect? Robot mishaps, or the alignment problem? How will we make sure the next generation of people can find the worthwhile papers and courses?
I take the point. This is a potential outcome, and I see the apprehension, but I think it's ...
"permanent loss of control of a hostile AI system" - This seems especially facilitative of the science-fiction interpretation to me.
I agree with the rest.
Hi Carrick,
Thanks for your thoughts on this. I found it really helpful, and I think 80,000 Hours could consider linking to it from the AI policy guide.
Disentanglement research feels like a valid concept, and it's great to see it laid out here. But given how much weight pivots on the idea, and how much uncertainty surrounds identifying the skills it requires, disentanglement research seems to be a subject that is itself asking for further disentanglement! Perhaps it could serve as a trial question for any prospective disentanglers out there.
You've given examples...
Hey kbog, Thanks for this. I think this is well argued. If I may, I'd like to pick some holes. I'm not sure if they are sufficient to swing the argument the other way, but I don't think they're trivial either.
I'm going to use "autonomy in weapons systems" rather than "LAWs", for reasons argued here (see Takeaway 1).
As far as I can tell, almost all the considerations you give concern inter-state conflict. The intra-state consequences are not explored, and I think they deserve to be. Fully autonomous weapons systems potentially obviate the need for a mutually benefic...
Not sure if it's just me, but board_setup.jpg wouldn't load. I'm not sure why, so I'm not expecting a fix - just FYI. The cards look fun though!
Hey Denkenberger, thanks for your comment. I too tend to weight the future heavily and I think there are some reasons to believe that DPR could have nontrivial benefits with this set of preferences. This was in fact why, as Michael mentions above:
..."FWIW, I think the mental health impact of DPR is about 80% of its value, but when I asked Lee the same question (before telling him my view) I think he said it was about 30% (we were potentially using different moral philosophies)." because I think DPR's effects on the far future could be the source
(note: Lee wrote the pain section but we both did editing, so I'm unsure whether to use 'I' or 'we' here)
I align myself with Michael's comment.
Really enjoying the Oxford Prioritisation Project!
One of my favourite comments from the Anonymous EA comments was the wish that EAs would post "little 5-hour research overviews of the best causes within almost-random cause areas and preliminary bad suggested donation targets." (http://effective-altruism.com/ea/16g/anonymous_comments/)
I expect average OPP posts take over 5 hours, and 5 hours might be an underestimate of the amount of time it would take for a useful overview without prior subject knowledge. But both that comment and the OPP seem to...
I'd second that - it's not the most wieldy text editor. Not sure how easy it would be to remedy. Going into the HTML gets you what you want in the end, but it's undue effort.
Hi Tom,
Great to hear that it's been suggested. By the looks of it, it may be an area better suited to an Open Philanthropy Project-style approach, being primarily a question of policy, with a sparser evidence base and difficulties in defining impact. I styled my analysis on OPP's approach (with some obvious shortcomings on my part).
I could have done better in the analysis to distinguish between the various types of pain. As you say, they are not trivial distinctions, especially when it comes to treatment with opioids.
I'd be interested to hear your take on the impact of pain control on the nature of medicine and the doctor-patient dynamic. What trends are you concerned about hastening exactly?
Thanks for those links. It's troubling to hear about some of the promotional techniques described, though I can't say it's surprising.
While US regulations were developed decades before their equivalents in many developing countries, that is not necessarily a mark of quality. In the article I refer to less desirable idiosyncrasies of the US health system (e.g. aspects of the consumer-based model; pain as a fifth vital sign), which have exacerbated the crisis there and will not necessarily exist in some developing countries. Yet, while I hesitate to paint...
Hi Austen,
Just to clarify, I'm not trying to promote or demote the cause. I'm aware that the cause is of interest to some EAs, and as someone in a good position to inform them, I thought something like this would help them make their own judgement :) I'm just sharing info and trying to be impartial.
Sorry if my comments gave the impression that I thought it was low priority and financially inefficient. To reiterate, I've withheld strong judgement on its priority, and I said I haven't looked into its financial efficiency compared with other interventions....
Hi Austen,
Thanks for all your interest!
I would have to disagree on your point about corporate influence. Pharma has been heavily implicated in the current opioid epidemic in the States and elsewhere - see the John Oliver exposé for a light introduction (link above). In this area, if anything, there is even more reason to be wary of pharma influence, because the product is so addictive when misused. Pharma does do some positive work - I'm aware of a BMS-funded training hospice in Romania (Casa Sperantei), and I've only heard good things about it.
You've hit on ...
Hi Elizabeth,
I focus on opioid medications for the same reasons that I don't focus on cannabinoids:
There isn't strong expert consensus on the effectiveness of cannabinoids. This may change as the search for alternative drugs, particularly for chronic pain, intensifies. While there are some areas that will likely see their use increase (you justly highlight neuropathic pain), my understanding is that current evidence doesn't reliably indicate their effectiveness for severe pain. All this said, there are good reasons to believe they are understudied, both
Thanks Austen!
Yes, it's actually very large. So large, in fact, that it seems to be taken for granted by many people in those countries with low access.
I've withheld strong judgement on whether it should be a cause area that other EAs should act on. I think it could be a particularly attractive area for EAs with certain ethical preferences.
Before funding programmes such as PPSG's, further analyses of the cause and the programme(s) are warranted. I'd be open to suggestions on how to carry those out from anyone with experience, or I'd be happy to discuss the matter with anyone interested in taking it forward themselves.
Unfortunately, this link has expired. Is there anything CEA or the forum could do to collate existing translations?
Indeed. And essay competitions are not like examinations: plagiarism only needs to be detected in potential winners, and that can be achieved by googling fragments of the essays.
Yes, thanks - the link has been amended. The author was in fact Luke Muehlhauser, so labelling it 'WEF' is only partially accurate.