I believe your assessment is correct, and I fear that EA hasn't done due diligence on AI Safety, especially seeing how much effort and money is being spent on it.
I think there is a severe lack of writing on the side of "AI Safety is ineffective". A lot of basic arguments haven't been written down, including some quite low-hanging fruit.
As per my initial comment, I'd compare it to pre-WWII Netherlands banning government registration of religion. It could have saved tens of thousands of people from deportation and murder.
For a more extreme hypothesis, Ariel Conn at FLI has voiced the omnipresent Western fear of resurgent ethnic cleansing, citing how easily facial recognition can identify people's race - but has that ever been the main obstacle to genocide? Moreover, the idea of thoughtless machines dutifully carrying out a campaign of mass murder takes a rather lopsided view of the history of ethnic cleansing and genocide, in which the presence of humans in the loop has less often mitigated the death and suffering than caused or exacerbated it through human passions, gri...
I don't have any specific instances in mind.
Regarding your accounting of cases, that was roughly my recollection as well. But while the posts might not address the second concern directly, I don't think that the two concerns are separable. The actual mechanisms and results might largely overlap.
Regarding the second concern you mention specifically, I would not expect those complaints to be written down by any users. Most people on any forum are lurkers, or at the very least they will lurk a bit to get a feel for what the community is like and wha...
Are there any plans to evaluate the current karma system? Both the OP and multiple comments expressed worries about the announced scoring system, and in the present day we regularly see people complain about voting behaviour. It would be worth knowing if the concerns from a year ago turn out to have been correct.
Related to this, I have a feature request. Would it be possible to break down scores in a more transparent way, for example by number of upvotes and downvotes? The current system gives very little insight to authors about how much people like their...
Are there particular instances of complaints related to voting behavior that you can recall?
I remember seeing a couple of cases over the last ~8 months where users were concerned about low-information downvotes (people downvoting without explaining what they didn't like). I don't remember seeing any instances of concern around other aspects of the current system (for example, complaints about high-karma users dominating the perception of posts by strong-voting too frequently). However, I could easily be forgetting or missing comments along those...
Thank you so much for posting this. It is nice to see others in our community willing to call it like it is.
I was talking with a colleague the other day about an AI organization that claims:
AGI is probably coming in the next 20 years.
Many of the reasons we have for believing this are secret.
They're secret because if we told people about those reasons, they'd learn things that would let them make an AGI even sooner than they would otherwise.
To be fair to MIRI (who I'm guessing are the organization in question), this lie is industry standard e...
This seems like selective presentation of the evidence. You haven't talked about AlphaZero or generative adversarial networks, for instance.
Not just in how any data-based algorithm engineering is 80% data cleaning while everyone pretends the power is in having clever algorithms
80% by what metric? Is your claim that Facebook could find your face in a photo using logistic regression if it had enough clean data? (If so, can you show me a peer-reviewed paper supporting this claim?)
Presumably you are saying something like: "80% of the human labor w...
This is mostly a problem with an example you use; I'm not sure whether it points to an underlying issue with your premise:
You link to the exponential growth of transistor density. But that growth is really restricted to just that: transistor density. Growing your number of transistors doesn't necessarily grow your capability to compute the things you care about, both from a theoretical perspective (potential fundamental limits in the theory of computation) and from a practical one (our general inability to write code that makes use of much ci...
These are some issues that actively frustrate me to the point of driving me away from this site.
Sure it is, but I know a lot more about myself than I do about other people. I can either make a good guess about the impact on myself or a worse guess about the impact on others. It's a bias/variance trade-off of sorts.
I'd say the two are valuable in different ways, not that one is necessarily better than the other.
Any technology comes with its own rights struggles. Universal access to super-longevity, the issue of allowing births versus exploding overpopulation if everyone were to live many times longer, em rights, just to name a few. New tech will hardly have any positive effect if these social issues resolve in the wrong way.
Can you make a case as to why the two have enough notability separately to deserve their own separate Wikipedia pages?
The original book was well received and got significant amounts of attention (e.g. an excerpt ran in the NYT, Peter was on the Colbert Report to talk about it, etc.). It was also highly influential, and has contributed to the way a lot of EAs (including Cari Tuna) think about giving. I’m not sure how many languages it’s been translated into, but it’s a pretty good number.
The organization has also received attention from a variety of major media outlets and has moved a considerable amount of money to effective charities (~$5.25 million in 2018 and expected
...Regarding 1), if I were to guess which events of the past 100 years made the most positive impact on my life today, I'd say those are the defeat of the Nazis, the long peace, trans rights and women's rights. Each of those carries a major socio-political dimension, and the last two arguably didn't require any technological progress.
I very much think that socio-political reform and institutional change are more important for positive long-term change than technology. Would you say that my view is not empirically grounded?
it reflects a sentiment that effective altruism is not about one thing, about having the right politics, about saying the right things, about adopting groupthink, or any of the many other things we associate with ideology.
Can you expand a bit on this statement? I don't see how you can accuse only other ideologies of being full of groupthink and of having the right politics, when most posts on the EA forum that don't agree with the ideological tenets listed in the OP tend to get heavily downvoted. When I personally try to advocate against th...
I don't see how you can accuse only other ideologies of being full of groupthink and of having the right politics, when most posts on the EA forum that don't agree with the ideological tenets listed in the OP tend to get heavily downvoted.
This post of yours is at +28. The most upvoted comment is a request to see more stuff from you. If EA were an ideology, I would expect to see your post at zero or a negative score.
There's no shortage of subreddits where stuff that goes against community beliefs rarely scores above 0. I would guess most su...
It is most apparent in this piece of the review:
He also points out that Tanzanian natives using their traditional farming practices were more productive than European colonists using scientific farming. I’ve had to listen to so many people talk about how “we must respect native people’s different ways of knowing” and “native agriculturalists have a profound respect for the earth that goes beyond logocentric Western ideals” and nobody had ever bothered to tell me before that they actually produced more crops per acre, at least some of the time. That w...
For a different take on the consequences of being "rational", I would highly recommend James C. Scott's book Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. The SSC summary of the book is pretty good, but when Scott Alexander gives his own opinion on the book he seems to have missed its point entirely.
Thank you for your response.
Yes, that is what I meant. If you could convince me that AGI Safety were solvable with increased funding, and only solvable with increased funding, that would go a long way in convincing me of it being an effective cause.
In response to your question of giving up: If AGI were a long way off from being built, then helping others now would still be a useful thing to do, regardless of which of the scenarios you describe comes to pass. Sure, extinction would be bad, but at least from some person-affecting viewpoints I'd say extinction is not worse than existing animal agriculture.
Let me try to rephrase this part, as I consider it to be the main part of my argument and it doesn't look like I managed to convey what I intended to:
AI Safety would be a worthy cause if a superintelligence were powerful and dangerous enough to be an issue but not so powerful and dangerous as to be uncontrollable.
The most popular cause evaluation framework within EA seems to be Importance/Neglectedness/Tractability. AI Safety enthusiasts tell a convincing story on importance and neglectedness being good and make an effort at arguing that tractability ...
Thank you for this nice summary of the argument in favour of AI Safety as a cause. I am not convinced, but I appreciate your write-up. As you asked for counterarguments, I'll try to describe some of my gripes with the AI Safety field. Some have to do with how there seems to be little awareness of results in adjacent fields, making me doubt if any of it would stand up to scrutiny from people more knowledgeable in those areas. There are also a number of issues I have with the argument itself.
Where does it end? Well, eventually, at the theoretical limi...
ahead of their time, in the sense that if they hadn't been made by their particular discoverer, they wouldn't have been found for a long time afterwards?
This definition is surprisingly weak, and in fact includes some scientific results that were way past their time. One striking example is Morley's trisector theorem, which is an elegant fact in Euclidean 2d geometry which had been overlooked for 2000 years. If not for Morley, this fact might have remained unknown for millennia longer.
1. The mechanics of cryptographic attack and defense are more complicated than you might imagine. This is because (a) there is a huge difference between the attack capabilities of nation states and those of other malign actors. Even if the NSA, with its highly-skilled staff and big budget, is able to crack your everyday TLS traffic, that doesn't mean your bank transactions aren't safe against petty internet criminals. And (b) state secrets typically need to be safe against the computers of 20+ years in the future, as you don't want enemy states to...
I remember EA-aligned vegan YouTuber Unnatural Vegan making a video about this argument last week in response to a recent Vox article. She argues that the meat industry is very elastic, but I don't think she cites any specific sources. Since she normally does cite her sources, I suspect those numbers are hard to come by.
3b justifies 3a, as does the fact that I have a much easier time paying attention to a live talk. With video, there is too much temptation to play it at 1.5x speed and aim for an approximate understanding. Though I guess watching the video together with other people also helps.
As for 3b, in my experience asking questions adds a lot of value, both for yourself as well as for other audience members. The fact that you have a question is a strong indication that the question is good and that other people are wondering the same thing.
I like your list. Here is my conference advice, contradicting some of yours, based mostly on my experience with academic conferences:
1. Focus on making friends. Of course it would be good to have productive discussions and make useful connections, but it is most important to know some friendly faces and feel comfortable. For me it works best to talk about unrelated things like hobbies, not about work or EA or anything like that.
2. Listening to talks is exhausting, so don't force yourself to attend too many of them. It is fine to pick just the 2-3 most...
The issue is that FLOPS cannot accurately represent computing power across different computing architectures, in particular between a single CPU and a computing cluster. As an example, let's compare 1 computer of 100 MFLOPS with a cluster of 1000 computers of 1 MFLOPS each. The latter option has 10 times as many FLOPS, but there is a wide variety of computational problems on which the former will always be much faster. This means that FLOPS don't meaningfully tell you which option is better; it will always depend on how well the problem you want...
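To make the example concrete, here is a toy calculation (my own illustration, not a benchmark) of a fully serial workload, where each operation depends on the previous result and so cannot be spread across nodes:

```python
# Toy illustration: a chain of dependent operations runs at the speed of
# a single processor, no matter how many nodes the cluster has.

SERIAL_OPS = 100_000_000       # 1e8 floating-point ops, each depending on the last

SINGLE_MACHINE_MFLOPS = 100    # one 100 MFLOPS computer
CLUSTER_NODE_MFLOPS = 1        # a cluster of 1000 nodes of 1 MFLOPS each

# Total FLOPS: single machine = 100 MFLOPS, cluster = 1000 MFLOPS (10x more).
# But on a serial dependency chain only one cluster node can work at a time.
t_single = SERIAL_OPS / (SINGLE_MACHINE_MFLOPS * 1e6)   # seconds
t_cluster = SERIAL_OPS / (CLUSTER_NODE_MFLOPS * 1e6)    # seconds

print(t_single)   # 1.0   -> the "weaker" single machine finishes in one second
print(t_cluster)  # 100.0 -> the cluster with 10x the FLOPS takes 100 seconds
```

On an embarrassingly parallel problem the numbers flip, which is exactly why the raw FLOPS count by itself decides nothing.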
I don't think that 11% figure is correct. It depends on how long you would stay at the company if you got the job, and on how long you would be unemployed if the offer were rescinded.
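As a sketch of what I mean, here is a toy expected-value model; every number in it is my own assumption, purely for illustration:

```python
def negotiation_ev(p_rescind, annual_raise, tenure_years,
                   monthly_salary, months_unemployed):
    """Toy expected value of negotiating a job offer, in dollars.

    With probability (1 - p_rescind) you keep the job and earn the
    negotiated raise over your whole tenure; with probability p_rescind
    the offer is withdrawn and you lose some months of salary.
    """
    gain = (1 - p_rescind) * annual_raise * tenure_years
    loss = p_rescind * monthly_salary * months_unemployed
    return gain - loss

# Hypothetical numbers: 1% rescind risk, a $5k/year raise over a
# 3-year tenure, $8k/month salary, 2 months to find a new job.
print(negotiation_ev(0.01, 5_000, 3, 8_000, 2))
```

The tenure and unemployment-duration terms dominate the result, which is why I don't believe a single percentage threshold can be right for everyone.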
Without commenting on your wider message, I want to pick on two specific factual claims that you are making.
AlphaZero went from a bundle of blank learning algorithms to stronger than the best human chess players in history...in less than two hours.
Training time of the final program is a deeply misleading metric, as these programs have been through endless reruns and tests to get the setup right. I think it is most honest to count total engineering time.
I know people are wary of Kurzweil, but he does seem to be on fairly solid ground here.
Extrapolating FLO...
The EA forum doesn't seem like an obvious best choice. Just because it is related to EA does not make it effective, especially considering the existence of discussion software like Reddit, Discourse, and phpBB.
I'd say it mostly depends on what kind of skills and career capital you are aiming for. There are a number of important (scientific) software packages with either zero or one maintainers, which could be useful to work on either upstream or downstream.
Personally, I am presently just doing (easy) fixes for bugs that I run into myself. But I a...
I used to think pretty much exactly the argument you're describing, so I don't think I will change my mind by discussing this with you in detail.
On the other hand, the last sentence of your comment makes me feel that you're equating my not agreeing with you with my not understanding probability. (I'm talking about my own feelings here, irrespective of what you intended to say.) So, I don't think I will change your mind by discussing this with you in detail.
I don't feel motivated to go back and forth on this thread, because I t...
On the other hand, the last sentence of your comment makes me feel that you're equating my not agreeing with you with my not understanding probability. (I'm talking about my own feelings here, irrespective of what you intended to say.)
Well, OK. But in my last sentence, I wasn't talking about the use of information terminology to refer to probabilities. I'm saying I don't think you have an intuitive grasp of just how mind-bogglingly unlikely a probability like 2^(-30) is. There are other arguments to be made on the math here, b...
Thank you for your response and helpful feedback.
I'm not making any predictions about future cars in the language section. "Self-driving cars" and "pre-driven cars" are the exact same things. I think I'm grasping at a point closer to Clarke's third law, which also doesn't give any obvious falsifiable predictions. My only prediction is that thinking about "self-driving cars" leads to more wrong predictions than thinking about "pre-driven cars".
I changed the sentence you mention to "If you want...
My troubles with this method are twofold.
1. SHA256 is a hashing algorithm. Its security is well-vetted for certain kinds of applications and certain kinds of attacks, but "randomly distribute the first 10 hex digits" is not one of those applications. The post does not include so much as a graph of the distribution of what past drawing results would have been under this method, so CEA hasn't really justified why the result would be uniformly distributed.
2. The least-significant digits in the IRIS data are probably fungible by adversaries....
I'd like to see some justification for using this approach over the myriad more responsible ways of generating random draws.
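For what it's worth, the kind of sanity check I have in mind for point 1 is cheap to run. A minimal sketch (my own code, not CEA's, using a hypothetical 10-ticket lottery):

```python
import hashlib

N_TICKETS = 10       # hypothetical lottery size
N_TRIALS = 100_000   # simulated draws

# Hash many distinct inputs, take the first 10 hex digits as the post
# describes, and reduce them to a winning ticket number.
counts = [0] * N_TICKETS
for i in range(N_TRIALS):
    digest = hashlib.sha256(str(i).encode()).hexdigest()
    winner = int(digest[:10], 16) % N_TICKETS
    counts[winner] += 1

# Under a uniform draw each ticket should win ~N_TRIALS / N_TICKETS times;
# a serious analysis would apply a chi-squared test instead of eyeballing.
print(counts)
```

Note that this only probes statistical uniformity under honest inputs; it says nothing about my second concern, adversaries who can nudge the least-significant digits of the source data.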
Fighting human rights violations around the globe.