Robi Rahman

296 karma · Boston, MA, USA · Joined Aug 2021

Bio

Data science graduate student. Member of Effective Altruism DC and Harvard EA.

Comments (57)

Very happy that the Latin American community is getting a conference. I'd love to attend!

even the most invested male partner I could ever find would not be able to deliver the kind of childcare or domestic skills that women are capable of delivering

As a pro-natalist myself, I'm really curious about this remark. What aspect of childcare are men not capable of delivering? Is it just that they generally don't know domestic skills, or is there something you think we can't learn as well as women?

A paperclip maximiser and a pencil maximiser cannot “agree to disagree”. One of them will get to tile the universe with their chosen stationery implement, and one of them will be destroyed. They are mortal enemies of each other, and both of them are mortal enemies of the stapler maximiser, and the eraser maximiser, and so on. Even a different paperclip maximiser is the enemy, if their designs are different. The plastic paperclipper and the metal paperclipper must, sooner or later, battle to the death.

The inevitable result of a world with lots of different malevolent AGIs is a bare-knuckle, vicious battle royale to the death between every intelligent entity. In the end, only one goal can win.

Are you familiar with the concept of values handshakes? An AI programmed to maximize red paperclips and an AI programmed to maximize blue paperclips, each knowing that the other would prefer to destroy it, might instead agree on some intermediate goal based on their relative power and initial utility functions: e.g., they agree to maximize purple paperclips together, or to tile the universe with 70% red paperclips and 30% blue paperclips.
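To make the handshake arithmetic concrete, here is a minimal Python sketch. Everything in it is an illustrative assumption: the toy world is just a dict of paperclip counts, the 70/30 power split is taken from the example above, and none of the names refer to any real library or proposal.

```python
# Toy "values handshake": two agents with conflicting utility functions
# merge them, weighted by relative power, rather than fight a war that
# is worse in expectation for at least one of them. Purely illustrative.

def merged_utility(u_a, u_b, power_a):
    """Compromise utility: each agent's original utility weighted by
    its share of total power (power_a in [0, 1])."""
    def u_merged(world):
        return power_a * u_a(world) + (1 - power_a) * u_b(world)
    return u_merged

# A world is a dict of paperclip counts by color.
def u_red(world):
    return world.get("red", 0)

def u_blue(world):
    return world.get("blue", 0)

u_deal = merged_utility(u_red, u_blue, power_a=0.7)

war_red_wins = {"red": 100}            # winner-take-all outcome
handshake = {"red": 70, "blue": 30}    # negotiated 70/30 tiling

# With a 70% chance of winning the war, red's expected paperclips (70)
# already equal its guaranteed share under the deal, before subtracting
# the war's risk and destruction costs.
print(u_red(handshake), 0.7 * u_red(war_red_wins))     # 70 70.0
print(u_blue(handshake), 0.3 * u_blue({"blue": 100}))  # 30 30.0
print(u_deal(handshake))                               # 58.0
```

Once fighting destroys any resources at all, the negotiated split strictly dominates war for both agents, which is the intuition behind the handshake.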

I'm one of the AI researchers worried about fast takeoff. Yes, it's probably incorrect to pick any particular sudden-death scenario and say that's how it'll happen, but you can provide some guesses and a better illustration of one or more possibilities. For example, have you read Valuable Humans In Transit? https://qntm.org/transit

This profile by 80k is pretty bad in that it glosses over all the intermediate steps and reduces everything to "But one day, every single person in the world suddenly dies."

Universal Paperclips is slightly better about this, showing the process of the AI gaining our trust before betraying us, but the key power-grab step is still reduced to just "release the hypnodrones".

There are other places that have fleshed out the details of how misaligned power-seeking might play out, such as Holden Karnofsky's post AI Could Defeat All Of Us Combined.

I heard CEA offered them $10k and they refused to sell it.

In what ways are EAGx events weirder?

I'm slapping myself on the forehead for not thinking of this earlier, especially after seeing what happened to ea.org. We should do this for other cause areas too. And some funder should give you a retroactive grant for this or buy the domains from you.

You think there's an x-risk more urgent than AI? What could it be? Nanotech isn't going to be invented within 20 years; there aren't any asteroids about to hit the earth; climate tail risks only come into effect next century; deadly pandemics and supervolcanic eruptions are inevitable on long timescales but aren't common enough to be the top source of risk in the time until AGI is invented. The only way anything is riskier than AI within 50 years is if you expect something like a major war leading to the use of enough nuclear or biological weapons that everyone dies, and I really doubt that's more than 10% likely in the next half century.

  1. No argument about AGI risk that I've seen claims that it affects the underprivileged most. In fact, the arguments emphasize that every single one of us is vulnerable to AI and that AI takeover would be a catastrophe for all of humanity. There is no story in which misaligned AI hurts only poor or vulnerable people.

You're misunderstanding something about why many people are unconcerned about AGI risk despite being sympathetic to various aspects of AI ethics. No one concerned with AGI x-risk argues that it will disproportionately harm the underprivileged. But current AI harms come from things like discriminatory criminal-sentencing algorithms, so a lot of the AI ethics discourse involves fairness and privilege, and people focused on those issues don't fully appreciate that misaligned AGI 1) hurts everyone, and 2) is a real thing that very well might happen within 20 years, not just some imaginary sci-fi story made up by overprivileged white nerds.

There is some discourse around technological unemployment putting low-skilled employees out of work, but this is a niche political argument that I've mostly heard from proponents of UBI. I think it's less critical than x-risk, and if artificial intelligence gains the ability to do diverse tasks as well as humans can, I'll be just as unemployed as a computer programmer as anyone else is as a coal miner.
