kbog

We truly do live in interesting times

Comments

On AI Weapons

Hi Tommaso,

If I think about the poor record the International Criminal Court has of bringing war criminals to justice, and the fact that the use of cluster bombs in Laos or Agent Orange in Vietnam did not lead to major trials, I am skeptical about whether anyone would be held accountable for crimes committed by LAWs.

But the issue here is whether responsibility and accountability are handled worse with LAWs than with ordinary killing. You need a reason to be more skeptical about accountability for crimes committed with LAWs than for crimes committed without them. That there is so little accountability for crimes committed without LAWs even suggests that we have nothing to lose.

What evidence do we have that international lawmaking follows suit when a lethal technology is developed, as the writer assumes it will?

I don't think I make such an assumption? Please remind me (it's been a while since I wrote the essay); you may be thinking of the part where I assume that countries will figure out safety and accountability for their own purposes. They will figure out how to hold people accountable for bad robot weapons just as they hold people accountable for bad equipment and bad human soldiers, for their own purposes and without reference to international law.

However, in order for the comparison to make more sense I would argue that the different examples should be weighted according to the number of victims. 

I would agree if we had a larger sample of large wars; otherwise the figure gets dominated by the Iran-Iraq War, which is doubly worrying because of the wide range of casualty estimates for that conflict. You could exclude it and take a weighted average of the other wars, as in the sketch below. Either way, it seems like civilians are still just a significant minority of victims on average.
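To make the weighting concrete, here is a minimal sketch; the per-war figures are hypothetical placeholders, not the estimates from the essay.

```python
# Casualty-weighted vs. unweighted average of civilian share across wars.
# All numbers below are made up for illustration.
wars = [
    # (total deaths, civilian share of deaths)
    (1_000_000, 0.25),  # one large war can dominate the weighted figure
    (500_000, 0.30),
    (200_000, 0.45),
]

unweighted = sum(share for _, share in wars) / len(wars)
weighted = sum(total * share for total, share in wars) / sum(total for total, _ in wars)

print(f"unweighted civilian share: {unweighted:.2f}")  # 0.33
print(f"weighted civilian share:   {weighted:.2f}")    # 0.29
```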

Intuitively to me, the case for LAWs increasing the chance of overseas conflicts such as the Iraq invasion is a very relevant one, because of the magnitude of civilian deaths.

Yes, this would be similar to what I say about the 1991 Gulf War: the conventional war was relatively small but had large indirect costs, mostly falling on civilians. Then, "One issue with this line of reasoning is that it must also be applied to alternative practices besides warfare..." For Iraq in particular, while the 2003 invasion certainly did destabilize it, I also think it's a mistake to assume that things would have been decent otherwise (imagine Iraq turning out like Syria in the Arab Spring; Saddam had already committed democide once, and he could have done it again if Iraqis had acted on their grievances with his regime).

From what the text says, I do not see why the conclusion is that banning LAWs would have a neutral effect on the likelihood of overseas wars, given that the text admits that it is an actual concern.

My 'conclusion' paragraph states it accurately, with the clarification of 'conventional conflicts' versus 'overseas counterinsurgency and counterterrorism'.

I think the considerations about counterinsurgency operations being positive for the population are at the very least biased toward favoring Western intervention.

Well, the critic of AI weapons needs to show that such interventions are negative for the population. My position in this essay was that it's unclear whether they are good or bad. Yes, I didn't give comprehensive arguments in this essay. But since then I've written about these wars in my policy platform where you can see me seriously argue my views, and there I take a more positive stance (my views have shifted a bit in the last year or so). 

The considerations about China and the world order in this section seem simplistic and rely on many assumptions. 

Once more, I've got you covered! See my more recent essay here about the pros and cons (predominantly cons) of Chinese international power. (Yes, it's high time that I rewrote and updated this article.)

Why EA groups should not use “Effective Altruism” in their name.

But the answers to a survey like that wouldn't be easy to interpret. We should give the same message under different organization names to group A and group B and see which group is then more likely to endorse the EA movement or commit to taking a concrete altruistic action; a minimal sketch of that comparison follows below.
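As a sketch of how that comparison could be analyzed, here is a simple two-proportion test; the counts and the endorsement outcome are hypothetical, and a real survey would need more careful design.

```python
# Compare endorsement rates between two branding conditions
# (hypothetical counts, purely for illustration).
import math
from scipy.stats import norm

endorsed = (62, 45)  # respondents endorsing EA under name A vs. name B
shown = (500, 500)   # respondents shown each name

p1, p2 = endorsed[0] / shown[0], endorsed[1] / shown[1]
pooled = sum(endorsed) / sum(shown)
se = math.sqrt(pooled * (1 - pooled) * (1 / shown[0] + 1 / shown[1]))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))  # two-sided test

print(f"endorsement: {p1:.1%} vs {p2:.1%}; z = {z:.2f}, p = {p_value:.3f}")
```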

Objectives of longtermist policy making

No, I agree on 2! I'm just saying that even from a longtermist perspective, it may not be as important and tractable as improving institutions in orthogonal ways.

Objectives of longtermist policy making

I think it's really not clear that reforming institutions to be more longtermist has an outsized long run impact compared to many other axes of institutional reform.

We know what constitutes good outcomes in the short run, so if we can design institutions to produce better short run outcomes, that will be beneficial in the long run insofar as those institutions endure into the long run. Institutional changes are inherently long-run.

A love letter to civilian OSINT, and possibilities as a tool in EA

I saw OSINT results frequently during the Second Karabakh War (October 2020). The OSINT evidence of war crimes from that conflict has been adequately recognized, and you can find info on that elsewhere. Beyond that, it seems to me that certain things would have gone better if certain locals had been more aware of what OSINT was revealing about the military status of the conflict, as a substitute for government claims and as a supplement to local RUMINT (rumor intelligence). False or uncertain perceptions about the state of a war can be deadly. But there is a language barrier and an online/offline barrier, so it is hard to get that intelligence seen and believed by the people who need it.

Beyond that, OSINT might be used to actually influence the military course of conflicts, if you can make a serious judgment call about which side deserves help, although this partisan effort wouldn't really fit the spirit of "civilian" OSINT. Presumably the US and Russia already know the location of each other's missile silos, but if you look for things that are less important, or part of a conflict between minor groups who lack good intelligence services, then you might produce useful intelligence. For a paramount example of dual-use risk: during this war, someone geolocated Armenia's Iskander missile base and shared it on Twitter, and it seems unlikely to me that anyone in Azerbaijan had found it already. I certainly don't think that was responsible of him, and Azerbaijan did not strike the base anyway, but it suggests that there is real potential to influence conflicts. You also might feed that intelligence to the preferred party secretly rather than openly, though that definitely violates the spirit of civilian OSINT.

Regardless, OSINT may indeed shine when it is rushed in the context of an active military conflict where time is of the essence, errors notwithstanding. Everyone likes to make fun of Reddit for the Boston Bomber incident, but to me it seems like the exception that tests the rule. While there were a few OSINT conclusions during the war that struck me as dubious, I never saw evidence that someone's geolocation later turned out to be wrong.

Also, I don't know if structure and (formal) training are important. Again, you can pick on those Redditors, but lots of other independent open source geeks have been producing reliable results. Imposing a structure takes away some of the advantages of OSINT. That's not to say that groups like Bellingcat don't also do good work, of course.

To me, OSINT seems like a crowded field due to the number of people who do it as a hobby. So I doubt that the marginal person makes much difference. But since I haven't seriously tried to do it, I'm not sure. 

Why EA groups should not use “Effective Altruism” in their name.

There is a lot of guesswork involved here. How much would it cost for someone like CEA to run a survey to find out how popular perception differs depending on these kinds of names? It would be useful to many of us who are considering branding for EA projects.

Super-exponential growth implies that accelerating growth is unimportant in the long run

Updates to this: 

A Nordhaus paper argues that we don't appear to be approaching a singularity. I haven't read it; I'd like to see someone find the crux of the differences with Roodman.

The blog 'Outside View' offers some counterarguments to my view:

Thus, the challenge of building long term historical GDP data means we should be quite skeptical about turning around and using that data to predict future growth trends. All we're really doing is extrapolating the backwards estimates of some economists forwards. The error bars will be very large.

Well, Roodman tests for this in his paper (see section 5.2) and finds that systematic moderate overestimation or underestimation only changes the expected explosion date by +/- 4 years.

I guess things could change more if the older values are systematically misestimated differently from more recent values? If very old estimates are all underestimates but recent estimates are not, then that could push the projection further out. Also, maybe he should test for more extreme magnitudes of misestimation. But given how little his other tests changed the results, I doubt this one would make much difference either. A rough version of that kind of check is sketched below.
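Here is a crude sketch of the robustness check in question: refit a simple deterministic hyperbolic model (not Roodman's stochastic one) after scaling only the older data points, and see how much the fitted singularity date moves. The GWP figures are rough placeholders, not Roodman's series.

```python
# Sensitivity of a fitted finite-time singularity to systematic
# misestimation of only the older data points.
import numpy as np
from scipy.optimize import curve_fit

years = np.array([-10000., -5000., -1000., 1., 1000., 1500., 1800., 1900., 1950., 2000.])
gwp = np.array([1.6, 4.3, 18., 31., 35., 58., 175., 1100., 4000., 41000.])  # placeholder values

def log_hyperbolic(t, log_c, t_star, k):
    # log of y(t) = c * (t_star - t)^(-k), which blows up at t = t_star
    return log_c - k * np.log(t_star - t)

def fitted_singularity(y):
    popt, _ = curve_fit(log_hyperbolic, years, np.log(y),
                        p0=[0.0, 2100.0, 1.5],
                        bounds=([-np.inf, 2001.0, 0.01], [np.inf, 3000.0, 10.0]))
    return popt[1]  # fitted t_star

base = fitted_singularity(gwp)
adjusted = gwp.copy()
adjusted[years < 1500] *= 1.2  # suppose all pre-1500 estimates run 20% low
print(f"fitted singularity year: {base:.0f} -> {fitted_singularity(adjusted):.0f}")
```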

But if it's possible, or even intuitive, that specific institutions fundamentally changed how economic growth occurred in the past, then it may be a mistake to model global productivity as a continuous system dating back thousands of years. In fact, if you take a look at population growth, a data set that is also long-lived and fast-growing, you see that the growth rate fundamentally changed over time. Given the magnitude of systemic economic changes of the past few centuries, modeling the global economy as continuous from 10,000 BCE to now may not give us good predictions. The outside view becomes less useful at this distance.

Fair, but at the same time, this undercuts the argument that we should prioritize economic growth as something that will yield social dividends indefinitely into the future. If our society has fundamentally transformed so that marginal economic growth in 1000 BC makes little difference to our lives, then it seems likely that marginal economic growth today will make little difference to our descendants in 2500 AD.

It's possible that we've undergone discontinuous shifts in the past but will not in the future. Just seems unlikely.
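The population claim quoted above is easy to check with rough figures. A quick sketch, using approximate world-population estimates:

```python
# Average annual world-population growth rate by era, from rough
# historical estimates; the rate itself clearly shifts across regimes.
pop = {-10000: 4e6, 1: 190e6, 1800: 1e9, 1950: 2.5e9, 2000: 6.1e9}

years = sorted(pop)
for t0, t1 in zip(years, years[1:]):
    rate = (pop[t1] / pop[t0]) ** (1 / (t1 - t0)) - 1
    print(f"{t0:>7} to {t1:>5}: {rate:.3%} per year")
```

A single constant-rate (exponential) model fit to the whole series would badly misdescribe every era, which is the blog's point about treating the data as one continuous system.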

Objectives of longtermist policy making

I'm skeptical of this framework because in reality part 2 seems optional: we don't need to reshape the political system to be more longtermist in order to make progress. For instance, those Open Phil recommendations like land use reform can be promoted through conventional forms of lobbying and coalition building.

In fact, a vibrant and policy-engaged EA community that focuses on understandable short- and medium-term problems can itself become a fairly effective long-run institution, thus reducing the need identified in part 1.

Additionally, while substantively defining a good society for the future may be difficult, we also have the option of defining it procedurally. The simplest example is that we can promote things like democracy or other mechanisms that tend to produce good outcomes. Or we can increase levels of compassion and rationality so that the architects of future societies will act better. This is sort of what you describe in part 2, but I'd emphasize that we can make political institutions that are generically better rather than specifically more longtermist.

This is not to say that anything in this post is a bad idea, just that there are more options for meeting longtermist goals.
