
supesanon

-21 karma · Joined Nov 2022

Comments (19)

Very nice post! I'm not sure if you have looked into this, but market considerations aside, given that people in EA believe the claims about AI risk and short timelines, are charities in EA spending money in proportion to how seriously the EA community seems to take short AI timelines and AI x-risk? For example, you cited some reports from Open Philanthropy, like Bio Anchors, from which you extracted some of the probabilities used in your calculation. Do you think Open Phil's spending is in line with the expected timelines suggested by Bio Anchors?

To be clear, I was making an analogy about what the claims look like, not saying that they are written explicitly that way. I see implicit claims of omnipotence and omniscience of a superintelligent AI from the very first [link](https://intelligence.org/2015/07/24/four-background-claims/) in the curriculum. Claims 2-4 at that link are just beliefs, not testable hypotheses that can be proven or disproven through scientific inquiry.

There have been loads of arguments offered on the forum and through other sources like books, articles on other websites, podcasts, interviews, papers etc. So I don't think that what's lacking is arguments or evidence. I think the issue is the mentality some people in EA have when it comes to AI. Are people who are waiting for others to bring them arguments to convince them of something really interested in getting different perspectives? Why not just go look for differing perspectives yourself? This is a known human characteristic: if someone really wants to believe in something, they can believe it even to their own detriment and will not seek out information that may contradict their beliefs (I was fascinated by the tales of COVID patients denying that COVID exists even while dying from it in an ICU).

I witnessed this lack of curiosity in my own cohort that completed AGISF. We had more questions than answers at the end of the course and never really settled anything during our meetings other than minor definitions here and there. Despite that, some of the folks in my cohort went on to work or try to work on AI safety and solicit funding, without either learning more about AI itself (some of them didn't have much of a technical background) or trying to clarify their confusion about the arguments. I also know another fellow from the same run of AGISF who got funding as an AI safety researcher while knowing very little about how AI actually works. They are all very nice, amicable people, but despite all the conversations I've had with them, they don't seem open to the idea of changing their beliefs, even when there are a lot of holes in their positions and you directly point those holes out to them. In what other contexts are people not open to the idea of changing their beliefs, other than religious or other superstitious ones? Well, the other case I can think of is when holding a certain belief is tied to having an income, reputation or something else that is valuable to a person. This is why the conflict of interest at the source of funding pushing a certain belief is so pernicious: it really can affect beliefs downstream.

Yup, very sure. AGI Safety Fundamentals by Cambridge.

I took that course and gave EA the benefit of the doubt. I was exposed to arguments about AI safety before I knew much about AI, and it was very confusing stuff with a lot that didn't add up, but I still gave the EA take the benefit of the doubt, since I didn't know much about AI and thought there was something I just didn't understand. I then spent a lot of time actually learning about AI and trying to understand what experts in the field think AI can actually do and what lines of research they are pursuing. Suffice it to say that the material on AGI safety didn't hold up well after this process.

The AI x-risk concerns seem very quasi-religious. The story is that man will create an omnipresent, omniscient and omnipotent being. Such beings are known as God in religious contexts. More moderate claims hold that a being, or a multiplicity of beings, possessing at least one of these characteristics will be created, which is more akin to the gods of polytheistic religions. This being will then rain down fire and brimstone on humanity for the original sin of being imperfect, manifested in the specification of an imperfect goal. It's very similar to religious creation stories, with the role of creator reversed, but the outcome is the same: Armageddon. Given that the current prophecy seems to indicate that the apocalypse will come by 2030, there is an opportunity for a research study to be done on EA similar to the one done on the Seekers. Given that this looks very much like a religious belief, I doubt there is any type of argumentation that will convince the devout adherents of the ideology that their beliefs are not credible. There will also be a selection bias towards people who are prone to this kind of ideological belief, similar to how some people are just prone to conspiracy theories like QAnon, though AI x-risk is a lot more sophisticated. At least the people who believe in mainstream religions are upfront that their beliefs are based on faith. The AI x-risk devotees also base their beliefs on faith, but it's couched in incomprehensible rationality newspeak, philosophy, absurd extrapolations and theoretical mathematical abstractions that cannot be realized in practical physical systems, to give the illusion that it's more than that.

I think they would have to believe there is a risk, but they are actually just trying to figure out how to make headway on basic issues. The point of my comment was not to argue about AI risk, since I think that is a waste of time: those who believe in it seem to hold it more as an ideological/religious belief, and I don't think there is any amount of argumentation or evidence that can convince them (there is also a lot of material online where top researchers are interviewed and talk about some of these issues, for anyone actually interested in what the state of AI is outside the EA bubble). My intention was just to point out that there is a conflict of interest in this particular domain that is having a lot of influence in the community, and I doubt there will be much done about it.

There is a difference. ML engineers actually have to follow up their claims by making products that work and earn revenue, or by successfully convincing a VC to keep funding their ventures. The source of funding and the ones appealing for the funding have different interests. In this regard, ML engineers have more of an incentive to oversell the capabilities of their products than to downplay them. It's still possible for someone to burn their money funding something that won't pan out, and this is the risk investors have to take (I don't know of any top VCs as bullish on AI capabilities, on as aggressive timelines, as EA folks). In the case of AI safety, some of the folks in charge of the funding are also the loudest advocates for the cause, as well as some of the leading researchers. The source of funding and the ones utilizing the funding are commingled in a way that creates a conflict of interest that seems considerably more problematic than what I've noticed in other cause areas. But if such serious conflicts do exist elsewhere, then those too are a problem and not an excuse to ignore conflicts of interest.
