
Algo_Law

336 karma · Newcastle upon Tyne, UK

Bio

BSc (Hons) Computer Science with Artificial Intelligence

Master of Laws (LLM) Space Law

PhD Candidate (Law) - Title: "AI and Machine Learning Nascent Visual Biometrics in Police Intelligence and Criminal Evidence – Impacts on Reliability and Fairness"

I currently work in the criminal justice system of England & Wales alongside researching my PhD. My academic history in AI and in Law has resulted in an avid interest in all things AI Law (especially criminal law and human rights law) and its value to Longtermist principles. If you ever want to chat about the topic at all, please feel free to pop me a message :)

Comments (50)

"I think the second view is basically correct for policy in general, although I don't have a strong view yet of how it applies to AI governance specifically. One thing that's become clear to me as I've gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that's possible in those settings."


This is a problem I've often spoken about, and one I'm currently writing an essay on for this forum, based on some research I co-authored.

People wildly underestimate how hard it is not only to pass governance, but to make sure it is abided by, and to balance the various stakeholders involved. The AI Governance field has a massive sociological, socio-legal, and even ops-experience gap, which means a lot of very good policy and governance ideas die in their infancy because no-one who wrote them has any idea how to enact them feasibly. My PhD is on the governance end of this and I do a bunch of work within government AI policy, and I see a lot of very good governance pitches go splat against the complex, ever-shifting beast that is the human organisation, purely because the researchers never thought to consult a sociologist or incorporate any socio-legal research methods.

Good post, thank you.

"or other such nonsense that advocates never taking on risks even when the benefits clearly dominate"

An important point to note here: the people who suffer the risks and the people who reap the benefits are very rarely the same group. Deciding to use an unsafe AI system (whether presently or in the far future) on the basis of a risk/benefit analysis goes wrong so often because one man's risk is another man's benefit.

Example: the risk of lung damage from traditional coal mining, weighed against the industrial value of the coal, is a very different risk/reward analysis for the miner and the mine owner. Same with AI.

This would be an interesting approach to generating charitable donations. I would caution, though, that it seems to me (a non-USA person, so take this with a grain of salt) to skirt a little close to some laws in this area, so I'd definitely check that out first. One man's charitable fundraising could be another man's false representation!

Still though, interesting thought :)

This is helpful. My entire career revolves around "conceal, but don't mislead" and even I'm still learning where lines are. Thank you for this post.

This was a great event which I followed very closely indeed. It generated so much interesting exploration of different areas and I learned so much about the world I live in.

I just want to add that the idea of having 'good-faith' submission prizes was a fantastic addition, and really helped level the playing field for people who otherwise might not have been able to contribute. I heard from a couple of people that they may not have been able to submit without them. I'd love to see more of these in similar contests in future.

I understand there are some people in the early stages of exploring this, though for the life of me I can't remember who. The Law and Longtermism Slack channel, which is run by the Legal Priorities Project, may be a good starting point, as I understand some people have found collaborators for this there before.

You raise some fair points, but there are others I would disagree with. I would say that just because there isn't a popular argument that AGI risk affects underprivileged people the most doesn't make it not true. I can't think of a transformative technology in human history that didn't impact people more the lower down the social strata you go, and AI thus far has not only followed this trend but greatly exacerbated it. Current AI harms are overwhelmingly targeted towards these groups. I can't think of any reason why much more powerful AI such as AGI would buck this trend. Obviously if we only focused on existential risk this might not be the case, but even a marginally misaligned AGI would amplify current AI harms, particularly in suffering ethics cases.

Good points. An important thing to bear in mind, though, is that once again well-roundedness, volunteer work, hobbies, etc. are all related to factors apart from motivation/ability. Generally, people from wealthier backgrounds have many more of these on their resume than people from poorer backgrounds, because they could afford to take part in the hobbies, could afford to work for free, etc. Lots of supposedly 'academic filtering' is actually just socioeconomic filtering with extra steps.

Great post! I'm gonna throw out two spicy takes.

Firstly, I don't think it's so much that people don't care about AI Safety; I think it's largely that who cares about a threat is highly related to who it affects. Natural disasters etc. affect everyone relatively (though not exactly) equally, whereas AI harms overwhelmingly affect the underprivileged and vulnerable: people who are vastly underrepresented in both EA and in wider STEM/academia, who are less able to collate and utilise resources, and who are less able to raise alarms. As a result, AI Safety is a field where many of the current and future threats are hidden from view.

Secondly, AGI Safety as a field tends to isolate itself from other areas of AI Safety as if the two aren't massively related, and goes off on a heavily theoretical angle as a result. As a consequence, AGI/ASI Safety folk are seen by both the public and people within AI as living in something of a fantasy world of their own making compared to lots of other areas of AI risk. I don't personally agree with this, but it's something I hear a lot in AI research.

This would be very helpful. It's often confusing for the applicant, as they have no idea what to change or work on. For me, I've been rejected by every EA fellowship I've ever applied for (woop woop, highscore) but I don't know how to improve. Twice, orgs have legit emailed me out of the blue saying they like my blog/forum content and asking me to apply, and then rejected me. I have no idea what stage I failed at. Was my application poorly written? Were my research suggestions poor? Is it a CV issue? Am I over- or under-qualified? Who knows. Certainly not me. So I'm sat here, still shooting off applications, with no idea which part is letting me down. I sometimes get "your application was really strong but unfortunately..." in the rejection email, but I'm never sure whether that's being nice or actual feedback.

They say it's expensive or time-consuming to give feedback, and that's a fair comment, but compared to the possible upside I think it's a sound investment. I've collaborated with a bunch of really talented EAs who gave up applying to fellowships because of this. Deadlines are often extended because orgs want more applications, but maybe they'd get more applications if their attrition rates were lowered by giving people (especially early-career people) an idea of what areas they need to work on.

You can ask third parties to review your applications, but really only the orgs themselves know why something was rejected.
