Flodorner

Flodorner's Comments

EA considerations regarding increasing political polarization

Claims that people are "unabashed racists and sexists" should at least be backed up with actual examples. As it stands, I cannot know whether you have good reasons for that belief that I don't see (at least not in all of the cases), or whether we have the same information but fundamentally disagree about what constitutes "unabashed racism".

I agree with the feeling that the post undersells concerns about the right wing, but I don't think you will convince anybody with nothing but a weakly supported claim that the concern about the left is overblown. I also agree that "both sides are equal" is rarely true, but again, just claiming that does not show anyone that the side you prefer is better (see the comment where someone essentially argues the same for the other side; imagine I had not thought about this topic before: how am I supposed to choose which of you two to listen to?).

"If you would like to avoid being deplatformed or called out, perhaps the best advice is to simply not make bigoted statements. That certainly seems easier than fleeing to another country." The author seems to be arguing that it might make sense to be prepared to flee the country if things become a lot worse than deplatforming. While I think that the likelihood of this happening is fairly small (although this course of action would be equally advisable if things got a lot worse on the right wing), they are clearly not advocating to leave the country in order to avoid being "called out".

Lastly, I sincerely hope that all of the downvotes are for failing to comply with the commenting guidelines ("Aim to explain, not persuade", "Try to be clear, on-topic, and kind" and "Approach disagreements with curiosity") and not because of your opinions.

EA considerations regarding increasing political polarization

"While Trump’s policies are in some ways more moderate than the traditional Republican platform". I do not find this claim self-evident (potentially due to biased media reporting affecting my views) and find it strange that no source or evidence for it is provided, especially given the commendable general amount of links and sources in the text.

Relatedly, I noticed a gut feeling that the text seems more charitable to the right-wing perspective than to the left (specific "evidence" includes the statement from the previous paragraph, the use of the word "mob", the use of concrete examples for the wrongdoings of the left while mostly talking about hypotheticals for the right, and the focus on the Cultural Revolution without providing arguments why parallels to previous right-wing takeovers [especially ones against the backdrop of a perceived left-wing threat] are not equally apt). The recommendation of Eastern Europe as a good destination for migration points in a similar direction, given recent drifts towards right-wing authoritarianism in states like Poland and Hungary.

I would be curious whether others (especially people whose political instincts don't kick in when thinking about the discussion around deplatforming) share this impression, to get a better sense of how much politics distorts how I viscerally weigh evidence.

I also wonder whether pieces that can easily be read as explicitly anti-left (if I, who am quite sceptical of deplatforming but might not see it as a huge threat, can read them that way, imagine someone further to the left), rather than as mostly orthogonal to politics (with the occasional statement that can be misconstrued as right-wing), might make it even easier for EA to "get labelled as right-wing or counter-revolutionary and lose status among left-wing academia and media outlets". If that were the case, one would have to carefully weigh the likelihood that such texts prevent extreme political outcomes against the added risk of getting caught in the crossfire. (Of course, there are also second-order effects, like potential self-censorship, that might very well play a relevant role.)

Similar considerations apply to mass-downvoting comments that push back against texts like this [in a way that most likely violates community norms but is unlikely to be trolling] without anyone explaining why.

EA considerations regarding increasing political polarization

If you go by GDP per capita, most of Europe is behind the US but ahead of most of Asia: https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)_per_capita (growth rates in Asia are higher, though, so this might change at some point in the future).

In terms of the Human Development Index (https://en.wikipedia.org/wiki/List_of_countries_by_Human_Development_Index), which seems like a better measure of "success" than GDP alone, some countries (including large ones like Germany and the UK) score above the US, but others score lower. Most of Asia (except for Singapore, Hong Kong and Japan) scores lower.

As for the military aspect, it depends on what you mean by "failed". Europe is clearly not as militarily capable as the US, but it also seems quite questionable whether spending as much as the US on military capabilities is a good choice, especially for allies of the US that possess nuclear deterrence themselves or are strongly connected with countries that do.

Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics

While I am unsure how good an idea it is to map out more plausible scenarios for existential risk from pathogens, I agree with the sentiment that the top-level post seems to focus too narrowly on a specific scenario.

Biases in our estimates of Scale, Neglectedness and Solvability?

Re bonus section: Note that we are (hopefully) taking expectations over our estimates of importance, neglectedness and tractability, so that general correlations between the factors across causes do not necessarily cause a problem. However, it seems quite plausible that our estimation errors are often correlated, because of things like the halo effect.
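To make the correlated-errors point concrete, here is a minimal toy simulation (all numbers and the correlation value are made up, not estimates of anything real): it compares the average product of noisy importance/neglectedness/tractability estimates when their log-errors are independent versus positively correlated, as they might be under a halo effect.

```python
# Toy Monte Carlo: correlated multiplicative errors on the three ITN factor
# estimates inflate the expected product more than independent errors do.
import numpy as np

rng = np.random.default_rng(0)

true_factors = np.array([10.0, 2.0, 0.5])   # hypothetical "true" I, N, T values
true_product = true_factors.prod()

def mean_estimated_product(rho, sigma=0.5, n=200_000):
    """Average product of lognormally perturbed estimates with pairwise
    log-error correlation rho and log-error standard deviation sigma."""
    cov = sigma**2 * np.array([[1.0, rho, rho],
                               [rho, 1.0, rho],
                               [rho, rho, 1.0]])
    log_err = rng.multivariate_normal(np.zeros(3), cov, size=n)
    estimates = true_factors * np.exp(log_err)
    return estimates.prod(axis=1).mean()

print("true product:             ", true_product)
print("mean product, rho = 0.0:  ", mean_estimated_product(0.0))
print("mean product, rho = 0.7:  ", mean_estimated_product(0.7))
# Multiplying noisy estimates is already upward-biased even with independent
# errors; positive correlation (e.g. from a halo effect) makes it worse.
```

The absolute numbers mean nothing here; the point is only that the gap between the true product and the average estimated product widens as the error correlation increases, which is what I have in mind when saying correlated estimation errors could still cause a problem.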

Edit: I do not fully endorse this comment any more, but I still believe that the way we model the estimation procedure matters here. Will edit again once I am less confused.

Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED)

Maybe a good understanding of quantum computing and how it could be leveraged in different paradigms of ML would, to some extent, help with forecasting AI timelines as well as dominant paradigms?

If that were true, knowledge about quantum computing, while not necessarily helpful for any single agenda, would help with correctly prioritizing between different agendas.

Three Biases That Made Me Believe in AI Risk

"The combination of these vastly different expressions of scale together with anchoring makes that we should expect people to over-estimate the probability of unlikely risks and hence to over-estimate the expected utility of x-risk prevention measures. "

I am not entirely sure whether I understand this point. Is the argument that the anchoring effect would cause an overestimation because the "perceived distance" from an anchor grows faster per added zero than per increase of one in the exponent?

Critique of Superintelligence Part 2

Directly relevant quotes from the articles for easier reference:

Paul Christiano:

"This story seems consistent with the historical record. Things are usually preceded by worse versions, even in cases where there are weak reasons to expect a discontinuous jump. The best counterexample is probably nuclear weapons. But in that case there were several very strong reasons for discontinuity: physics has an inherent gap between chemical and nuclear energy density, nuclear chain reactions require a large minimum scale, and the dynamics of war are very sensitive to energy density."

"I’m not aware of many historical examples of this phenomenon (and no really good examples)—to the extent that there have been “key insights” needed to make something important work, the first version of the insight has almost always either been discovered long before it was needed, or discovered in a preliminary and weak version which is then iteratively improved over a long time period. "

"Over the course of training, ML systems typically go quite quickly from “really lame” to “really awesome”—over the timescale of days, not months or years.

But the training curve seems almost irrelevant to takeoff speeds. The question is: how much better is your AGI than the AGI that you were able to train 6 months ago?"

AI Impacts:

"Discontinuities larger than around ten years of past progress in one advance seem to be rare in technological progress on natural and desirable metrics. We have verified around five examples, and know of several other likely cases, though have not completed this investigation. "

"Supposing that AlphaZero did represent discontinuity on playing multiple games using the same system, there remains a question of whether that is a metric of sufficient interest to anyone that effort has been put into it. We have not investigated this.

Whether or not this case represents a large discontinuity, if it is the only one among recent progress on a large number of fronts, it is not clear that this raises the expectation of discontinuities in AI very much, and in particular does not seem to suggest discontinuity should be expected in any other specific place."

"We have not investigated the claims this argument is premised on, or examined other AI progress especially closely for discontinuities."

Critique of Superintelligence Part 2

Another point against the content overhang argument: while more data is definitely useful, it is not clear whether raw data about a world without a particular agent in it will be as useful to that agent as data obtained from its own interaction with the world (or the interaction of sufficiently similar agents). Depending on the actual implementation of a possible superintelligence, this raw data might be marginally helpful, but far from being the most relevant bottleneck.

"Bostrom is simply making an assumption that such rapid rates of progress could occur. His intelligence spectrum argument can only ever show that the relative distance in intelligence space is small; it is silent with respect to likely development timespans. "

It is not completely silent. I would expect any meaningful measure of distance in intelligence space to at least somewhat correlate with the timespan necessary to bridge that distance. So while the argument is not decisive regarding timespans, it also seems far from saying nothing.

"As such it seems patently absurd to argue that developments of this magnitude could be made on the timespan of days or weeks. We simply see no examples of anything like this from history, and Bostrom cannot argue that the existence of superintelligence would make historical parallels irrelevant, since we are precisely talking about the development of superintelligence in the context of it not already being in existence. "

Note that the argument from historical parallels is extremely sensitive to the choice of reference class. It seems like there has not been "anything like this" in science or engineering (although progress there seems to have been quite discontinuous, though not self-reinforcing, by some metrics at times) or in the evolution of general intelligence (here it would be interesting to explore whether the evolution of human intelligence happened a lot faster than an outside observer would have expected from looking at the evolution of other animals, since hours and weeks seem like a somewhat anthropocentric frame of reference). But narrow AI has recently gone from sub- to superhuman level within quite short timespans quite a few times (this is once again very sensitive to framing, so take it more as a point about the complexity of arguments from historical parallels than as a direct argument for fast take-offs being likely).

"not consistent either with the slow but steady rate of progress in artificial intelligence research over the past 60 years"

Could you elaborate? I'm not extremely familiar with the history of artificial intelligence, but my impression was that progress was quite jumpy at times rather than slow and steady.

Critique of Superintelligence Part 1

Thanks for writing this!

I think you are pointing out some important imprecisions, but some of your arguments don't seem as conclusive as you present them to be:

"Bostrom therefore faces a dilemma. If intelligence is a mix of a wide range of distinct abilities as in Intelligence(1), there is no reason to think it can be ‘increased’ in the rapidly self-reinforcing way Bostrom speaks about (in mathematical terms, there is no single variable  which we can differentiate and plug into the differential equation, as Bostrom does in his example on pages 75-76). "

Those variables could be reinforcing each other, as one could argue they did in the evolution of human intelligence (in mathematical terms, a linear vector-valued differential equation exhibits a runaway dynamic similar to the one-dimensional case, as long as all eigenvalues have positive real part).
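To spell out that mathematical aside, a rough sketch in my own notation (not Bostrom's): treat the distinct abilities as a vector and their mutual reinforcement as a linear coupling.

```latex
% I(t): vector of distinct ability levels; A: matrix of cross-reinforcement rates.
\[
  \frac{dI}{dt} = A\,I(t)
  \quad\Longrightarrow\quad
  I(t) = e^{At}\,I(0).
\]
% If all eigenvalues of A have positive real part, \(\lVert I(t)\rVert\) grows
% exponentially for generic initial conditions, mirroring the one-dimensional
% runaway dI/dt = aI with a > 0 in Bostrom's example on pages 75-76.
```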

"This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tilling the universe with paperclips. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tilling the universe with paperclips."

Why does it seem unlikely? Also, do you mean unlikely as in "agents emerging in a world similar to ours as it is now probably won't have this property", or as in "given that someone figured out how to construct a great variety of superintelligent agents, she would still have trouble constructing an agent with this property"?
