Sorry for the late reply; we read "What We Owe The Future" by William MacAskill. Thank you for the recommendation.
We are at a critical point as we stand: either the Board yields to the pleas/threats of the workers, or we have inexperienced actors at the helm of the driving force in AI. What do you think organizations like EA can do in this regard? Should we just sit and watch, or should we treat the threat as non-existent? To me, having this sort of people managing the AI space is a ticking time bomb.
I think what matters here is having a kill switch, or some set of parameters like [if <situation> occurs, kill], or some other limit on the purview of what a particular model can undertake. If we keep churning out models trained in a general way, there is a high probability of one running riot some day; but if there are limits on what they can do (which, unfortunately, would undermine the reason we deploy AI in the first place), we can keep this existential risk at bay. As it stands now, we need something urgent. Or perhaps it's our paranoia running riot... Perhaps not.
Hi everyone, I am Adebayo Mubarak, and I am currently undertaking a law degree. I have taken the Intro to EA, In-Depth, and The Precipice Reading programs, and I am currently taking Legal Topics in EA, which will come to an end next week. I am also the coordinator of my university reading group (Bayero University, Kano) and the Lead Facilitator for EA Kano Hub, Nigeria.
I am open to opportunities to further widen my knowledge of the core principles and cause areas in EA. And lastly, I am a freelance writer.