All of kbog's Comments + Replies

Critique of OpenPhil's macroeconomic policy advocacy

I don't quite understand what your view is in your section on macro advocacy and in particular what you think is the relevance of that Weyl quote.

To be clear, I think this episode really shouldn't be taken as a lesson against technocracy. The technocrats were on the right side of this one - sure, the Fed was too loose in '21, but if it had been controlled by politicians it probably would have been even worse. The size of the stimulus was also a textbook expression of populism.

Of course you could also argue that Fed tightness prior to 2021 was a failure of t... (read more)

Hauke Hillebrandt:
Sorry for being unclear—the main point was that the blind spots are excessively techno (not necessarily technocratic but technophilanthropic), a la Autor's China Shock literature, where technocrats overemphasized 'gains from trade' (reform), which everyone benefits from because we have slightly lower consumer prices on average, but the lowest income decile lost a lot, and then you got populist backlash and Trump. Analogously, OpenPhil and Econtwitter overemphasized 'we're irrationally afraid of higher inflation; unemployment is really bad; let's change central bank policy'. In contrast to the above, this might have benefited the lowest income decile (at least until 2021) and was well-intentioned, but it was still very top-down and theory-driven, with very few feedback loops. We might see unintended consequences like Democrats losing elections, because 150m Americans have lower wages now (and perhaps unrest in poorer countries?).

But generally, the distinction doesn't matter here, as civil society and philanthropy are part of the policy-making ecosystem, and there's no principled argument that they shouldn't be held to the same utilitarian standard as policy-makers and everyone else, especially if they affect such large levers. There's nothing sacred about philanthropic vs. government dollars; anything else would be deontological libertarianism. I took out Weyl's name because several reviewers said people would be triggered by it. Maybe I should have reworded it and taken out the citation to avoid ad hominems and let the point stand on its own.
Why I'm concerned about Giving Green

As Giving Green is still recommending donations to TSM in spite of what seems to be the majority opinion here, I'd like to highlight a recent letter to the White House cosigned by TSM (among dozens of other groups). The letter argues that the United States should be less "antagonistic" towards China in order to focus on cooperating on climate change.

In reality, the United States and China have already agreed to cooperate on climate change. So TSM et al are not proposing any obvious change in US-China climate policy. Apparently they want us to be more gene... (read more)

Giving Green no longer recommends TSM, although the reasons prompting the withdrawal of the recommendation appear to be unrelated to the incidents described above:
On AI Weapons

Sorry, I worded that poorly - my point was the lack of comprehensive weighing of pros and cons, as opposed to analyzing just 1 or 2 particular problems (e.g. swarm terrorism risk).

How well did EA-funded biorisk organisations do on Covid?

Hm, certainly the vaccine rollout was in hindsight the second most important thing after success or failure at initial lockdown and containment.

It does seem to have been neglected by preparation efforts and EA funding before the pandemic, but that's understandable considering how much of a surprise this mRNA stuff was.

I think research into novel vaccine platforms like mRNA is a top priority. It's neglected in the sense that way more resources should be going into it, but my impression[1] is also that the USG makes up a decent proportion of funding for early-stage research into that kind of thing. So that's a sense in which the U.S.'s preparedness was probably good relative to other countries, though not in an absolute sense. Here's an article I skimmed about the importance of government (mostly NIH) funding for the development of mRNA vaccines.

Fwiw, I think it's probably not the case that the mRNA stuff was that much of a surprise. This 2018 CHS report had self-amplifying mRNA vaccines as one of ~15 technologies to address GCBRs.

[1] Though I'm rusty since I haven't worked directly on biorisk for five years and was never an expert.
How well did EA-funded biorisk organisations do on Covid?
Prevention definitely helps. (It is a semantic question if you want to count prevention as a type of preparation or not)

I don't think most people would consider prevention a type of preparation. EA-funded biorisk efforts presumably did not consider it that way. And more to the point, I do not want to lump prevention together with preparation because I am making an argument about preparation that is separate from prevention. So it's not about just semantics, but precision on which efforts did well or poorly.

The idea that preparation (henceforth e
... (read more)
I think it actually is common to include prevention under the umbrella of pandemic preparedness. For example, here's the Council on Foreign Relations' independent Task Force on Improving Pandemic Preparedness: "Based on the painful lessons of the current pandemic, the Task Force makes recommendations for improving U.S. and global capacities to deliver each of the three fundamentals of pandemic preparedness: prevention, detection, and response." Another example: []. So it might be helpful to specify what you're referring to by preparation.
How well did EA-funded biorisk organisations do on Covid?

I moved my comment to an answer after learning that the index was directly funded by an Open Phil grant. You'd do better to repost your reply to me there. Sorry about the confusion.

How well did EA-funded biorisk organisations do on Covid?
Answer by kbog, Jun 09, 2021

The Global Health Security Index looks like a misfire. This isn't directly about performance during the pandemic, but Nuclear Threat Initiative, funded by Open Phil for this purpose (h/t HowieL for pointing this out) and collaborating with the Johns Hopkins Center for Health Security, made the 2019 Global Health Security Index which seems invalidated by COVID-19 outcomes and may have encouraged actors to take the wrong moves. This ThinkGlobalHealth article describes how its ratings did not predict good performance against the virus. The article relies... (read more)

It seems fair to call avoiding travel restrictions a dubious measure in hindsight, but circa 2019 it strikes me as a reasonable metric to put under "compliance with international norms". There was an expert consensus that travel restrictions weren't a good pandemic response tool (see my other comment), and not implementing them is indeed part of complying with the WHO IHRs. I am not totally sure that compliance with international norms is a good measure of national health security! However, according to the Think Global Health article you linked on Twitter, even the WHO Joint External Evaluations weren't well correlated with COVID-19 deaths. (Those evaluations are how prevention/detection/response capacity is measured in the Global Health Security Index, which then adds measures on health system, compliance with norms, and risk landscape.)
Hello, thank you for the interesting thoughts. The comments on the GHS Index are useful and insightful. Your analysis of COVID preparation on Twitter is really interesting; well done for doing that. I have not yet looked at your analysis spreadsheet but will try to do that soon. To touch on a point you made about preparation, I think we can take a bit more of a nuanced approach to thinking about when preparation works, rather than just saying "effective pandemic response is not about preparation". Some thoughts from me on this (not just focused on pandemics):

* Prevention definitely helps. (It is a semantic question whether you want to count prevention as a type of preparation or not.) The world is awash with very clear examples of disaster prevention, whether it is engineering safe bridges, or flood prevention, or nuclear safety, or preventing pathogens escaping labs, etc.
* The idea that preparation (henceforth excluding prevention) helps is conventional wisdom, and I would want to see good evidence against it to stop believing it.
* Obviously preparation helps in small cases; talk to a paramedic rushing to treat someone, or a fireman. I have not looked into it, but I get the impression that it helps in medium cases too, e.g. rapid response teams responding to terror attacks in the UK/France seem useful, although I am not an expert. On pandemics specifically, the quick containment of SARS seems to be a success story (although I have not looked at how much of a role preparation played, it does seem to be part of the story). There are not that many extreme COVID-level cases to look at, but it would be odd if preparation didn't help in extreme cases too.
* The specific wording of the claim in the linked article headline feels clickbait-y. When you actually read the article it actu

"effective pandemic response is not about preparation"

FYI - my impression is that pandemic preparedness is often defined broadly enough to include things like research into defensive technology (e.g. mRNA vaccines). It does seem like those investments were important for the response.

Which non-EA-funded organisations did well on Covid?
Answer by kbog, Jun 08, 2021

EAs have voted in various elections in the United States. This study adjusted for various factors and found that Republican Party power at the state level was associated with modestly higher amounts of death from COVID-19. Since the majority of EA voters have picked the Democratic Party, this can be taken as something of a vindication. Of course, there are many other issues for deciding your vote besides pandemics, and that study might be wrong. It's not even peer reviewed.

The difference might be entirely explained by politically motivated difference... (read more)

Thank you :-)
How well did EA-funded biorisk organisations do on Covid?

Edit: I've reposted this comment as an answer, and am self-downvoting this.

[This comment is no longer endorsed by its author]
[Edit – moved comment to answer above at suggestion of kbog]
Note that Open Phil funded this project.
Help me find the crux between EA/XR and Progress Studies

OK, sorry for misunderstanding.

I make an argument here that marginal long run growth is dramatically less important than marginal x-risk. I'm not fully confident in it. But the crux could be what I highlight - whether society is on an endless track of exponential growth, or on the cusp of a fantastical but fundamentally limited successor stage. Put more precisely, the crux of the importance of x-risk is how good the future will be, whereas the crux of the importance of progress is whether differential growth today will mean much for the far future.

I w... (read more)

Help me find the crux between EA/XR and Progress Studies

"EA/XR" is a rather confusing term. Which do you want to talk about, EA or x-risk studies?

It is a mistake to consider EA and progress studies as equivalent or mutually exclusive. Progress studies is strictly an academic discipline. EA involves building a movement and making sacrifices for the sake of others. And progress studies can be a part of that, like x-risk.

Some people in EA who focus on x-risk may have differences of opinion with those in the field of progress studies.

First, PS is almost anything but an academic discipline (even though that's the context in which it was originally proposed). The term is a bit of a misnomer; I think more in terms of there being (right now) a progress community/movement. I agree these things aren't mutually exclusive, but there seems to be a tension or difference of opinion (or at least difference of emphasis/priority) between folks in the “progress studies” community, and those in the “longtermist EA” camp who worry about x-risk (sorry if I'm not using the terms with perfect precision). That's what I'm getting at and trying to understand.
Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal

I think I don't really buy your conceptual logic as the mitigation obstruction argument is about the degree to which particular solutions will be over or underestimated relative to their actual value, not about how absolutely good/cheap/fast/etc they are. When considered through that lens, it's not clear (at least to me) what to make of distinctions between big actions and small actions or easy actions and hard actions.

Geoengineering is cheap but Halstead argues that it's not such a bargain as was suggested by earlier estimates.

Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal
I fear that we need to do geoengineering right away or we will be locked into never undoing the warming. The problem is that a few countries like Russia benefit massively from warming; once they see that warming and take advantage of the newly opened land, they will see any attempt to artificially lower temperatures as an attack they will respond to with force. And they have enough fossil fuels to maintain the warm temperatures even if everyone else stops carbon emissions (which they can easily scuttle).

Deleted my previous comment - I have some lingering doubts and don't think the international system will totally fail, but some problems along these lines seem plausible to me.

Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal

I'm not sure if immediacy of the problem really would lead to a better response: maybe it would lead to a shift from prevention to adaptation, from innovation to degrowth, and from international cooperation to ecofascism. Immediacy could clarify who will be the minority of winners from global warming, whereas distance makes it easier to say that we are all in this together.

At the very least, geoengineering does make the future more complicated, in that on top of the traditional combination of atmospheric uncertainties and emission uncertainties, we ha... (read more)

Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal

Hm, I suppose I don't have reason to be confident here. But as I understand it:

Stratospheric aerosol injection removes a certain wattage of solar radiation per square meter.

The additional greenhouse effect from human emissions constitutes only a tiny part of our overall temperature balance, shifting us from, say, 289 K to 291 K. SAI, by contrast, acts on nearly the entire energy input from the Sun (excepting that which is absorbed above the stratosphere). So maybe SAI could be slightly more effective in terms of watts per square meter or CO2 tonnes offset under a high-emissions scenario, but it will be a very small difference.
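The orders of magnitude here can be sanity-checked with a standard zero-dimensional energy-balance model. This is my own back-of-envelope sketch, not from the original comment; the solar constant, albedo, and CO2-doubling forcing are textbook round numbers, not figures from the thread:

```python
# Zero-dimensional energy balance: sigma * T^4 = absorbed solar flux.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W/m^2 (assumed round value)
ALBEDO = 0.3      # planetary albedo (assumed round value)

absorbed = S0 * (1 - ALBEDO) / 4       # ~238 W/m^2 averaged over the sphere
T_eff = (absorbed / SIGMA) ** 0.25     # effective emission temperature, ~255 K

dF = 3.7                               # forcing from one CO2 doubling, W/m^2
# Linearizing sigma*T^4 gives the no-feedback response: dT = (T/4) * (dF/F)
dT = T_eff / 4 * dF / absorbed         # ~1 K before feedbacks

print(f"absorbed solar flux: {absorbed:.0f} W/m^2")
print(f"effective temperature: {T_eff:.0f} K")
print(f"no-feedback warming from CO2 doubling: {dT:.1f} K")
```

The CO2-doubling forcing (~3.7 W/m^2) is around 1.5% of the absorbed solar flux (~238 W/m^2), consistent with the claim that the anthropogenic greenhouse addition is a small perturbation on the overall balance that SAI acts upon.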

Would like to see an expert chime in here.

On AI Weapons

Hi Tommaso,

If I think about the poor record the International Criminal Court has of bringing war criminals to justice, and the fact that the use of cluster bombs in Laos or Agent Orange in Vietnam did not lead to major trials, I am skeptical about whether anyone would be held accountable for crimes committed by LAWs.

But the issue here is whether responsibility and accountability is handled worse with LAWs as compared with normal killing. You need a reason to be more skeptical for crimes committed by LAWs than you are for crimes not committed by LAWs. T... (read more)

Why EA groups should not use “Effective Altruism” in their name.

But the answers to a survey like that wouldn't be easy to interpret. We should give the same message under different organization names to group A and group B and see which group is then more likely to endorse the EA movement or commit to taking a concrete altruistic action.

Yes, that might lead to better data, but it also requires more time to set up and a larger sample size. I'd leave it up to whoever does this to decide how much time they want to invest and which method to choose.
Objectives of longtermist policy making

No I agree on 2!  I'm just saying even from a longtermist perspective, it may not be as important and tractable as improving institutions in orthogonal ways.

Objectives of longtermist policy making

I think it's really not clear that reforming institutions to be more longtermist has an outsized long run impact compared to many other axes of institutional reform.

We know what constitutes good outcomes in the short run, so if we can design institutions to produce better short run outcomes, that will be beneficial in the long run insofar as those institutions endure into the long run. Institutional changes are inherently long-run.

The part of the article that you are referring to is in part inspired by John and MacAskill's paper "Longtermist Institutional Reform", where they propose reforms built to tackle political short-termism. The case for this relies on two assumptions:

1. Long-term consequences have an outsized moral importance, despite the uncertainty of long-term effects.
2. Because of this, political decision-making should be designed to optimize for long-term outcomes.

Greaves and MacAskill have written a paper arguing for assumption 1: "Because of the vast number of expected people in the future, it is quite plausible that for options that are appropriately chosen from a sufficiently large choice set, effects on the very long future dominate ex ante evaluations, even after taking into account the fact that further-future effects tend to be the most uncertain…" We seem to agree on this assumption, but disagree on assumption 2. If I understand your argument against assumption 2, it assumes that there are no tradeoffs between optimizing for short-run outcomes and long-run outcomes. This assumption seems clearly false to us, and is implied to be false in "Longtermist Institutional Reform". Consider fiscal policies, for example: in the short run it could be beneficial to take all the savings in pension funds and spend them to boost the economy, but in the long run this is predictably harmful because many people will not be able to afford to retire.
A love letter to civilian OSINT, and possibilities as a tool in EA

I saw OSINT results frequently during the Second Karabakh War (October 2020). The OSINT evidence of war crimes from that conflict has been adequately recognized and you can find info on that elsewhere. Beyond that, it seems to me that certain things would have gone better if certain locals had been more aware of what OSINT was revealing about the military status of the conflict, as a substitute for government claims and as a supplement to local RUMINT (rumor intelligence). False or uncertain perceptions about the state of a war can be deadly. But there is a... (read more)

Why EA groups should not use “Effective Altruism” in their name.

There is a lot of guesswork involved here. How much would it cost for someone, like the CEA, to run a survey to find out how popular perception differs depending on these kinds of names? It would be useful to many of us who are considering branding for EA projects. 

I'd guess people's perception depends a lot on the culture, so it might make sense to do surveys just for your university, your city or your country, but not globally. Such surveys are easy to do via polls in Facebook groups. Just select a group that resembles your target audience (e.g. scholarship networks etc) and do a quick poll on "We're thinking of starting a new organisation. What do you associate with these names?"
Super-exponential growth implies that accelerating growth is unimportant in the long run

Updates to this: 

A Nordhaus paper argues that we don't appear to be approaching a singularity. I haven't read it. Would like to see someone find the crux of the differences with Roodman.

Blog 'Outside View' with some counterarguments to my view:

Thus, the challenge of building long term historical GDP data means we should be quite skeptical about turning around and using that data to predict future growth trends. All we're really doing is extrapolating the backwards estimates of some economists forwards. The error bars will be very large.

Well, Roodman tests... (read more)

Objectives of longtermist policy making

I'm skeptical of this framework because in reality part 2 seems optional - we don't need to reshape the political system to be more longtermist in order to make progress. For instance, those Open Phil recommendations like land use reform can be promoted through conventional forms of lobbying and coalition building.

In fact, a vibrant and policy-engaged EA community that focuses on understandable short and medium term problems can itself become a fairly effective long-run institution, thus reducing the needs in part 1.

Additionally, while substantively defining ... (read more)

Thank you for your feedback, kbog.

First, we certainly agree that there are other options that have a limited influence on the future; however, for this article we wanted to cover only areas with a potential for outsized impact on the future. That is the reason we have confined ourselves to so few categories.

Second, there may be categories of interventions that are not addressed in our framework but that are as important for improving the future as the interventions we list. If so, we welcome discussion on this topic, and hope that the framework can encourage productive discussion to identify such "intervention X"s.

Third, I'm a bit confused about how we would focus on "processes that produce good outcomes" without first defining what we mean by good outcomes, and how to measure them.

Fourth, your point on taking the "individual more in focus" by emphasizing rationality and altruism improvement is a great suggestion. Admittedly, this may indeed be a potential lever to improve the future that we haven't sufficiently covered in our post, as we were mostly concerned with improving institutions.

Lastly, as for improving political institutions more broadly, see our part on progress.
Religious Texts and EA: What Can We Learn and What Can We Inform?

These lectures on historical analysis of the New Testament are neat and might be of interest to you. They give good context for understanding the contemporaneous interpretation of scripture.

EA and the Possible Decline of the US: Very Rough Thoughts

The issue with these interventions suggested for preventing collapse is that they generally have much more pressing impacts besides this. For instance, of course approval voting is great, but its impacts on other political issues (both ordinary political problems, and other tail scenarios like dictatorship) are much more significant. More generally, stuff that makes America politically healthier reduces the probability that it will collapse, and the converse is almost always true. So not only is the collapse possibility relatively unimportant, it's mostly ... (read more)

I'm not sure this is true, though, if I'm right that certain Collapse scenarios are GCRs, because they could cause or entail great-power war or nuclear war.
Why I'm concerned about Giving Green

There are more problems with The Sunrise Movement (TSM) which don't seem to have been raised yet in this discussion.

... (read more)
Wow, thank you! I especially appreciate the handbook, it expresses a lot of my thoughts much better than I could have. It also made me realise that I didn't express the point you make in the very first section although it's kind of critical to my feeling that there's so much opportunity here - ie that politics is sort of unique in that it calls for mass engagement, and there are so many opportunities to be involved just as a citizen (or group of citizens) without necessarily making it your profession or becoming some kind of expert. Which is not generally often true in other spheres (eg charity) in my opinion.
Two Nice Experiments on Democracy and Altruism

the environmental success of democracies relative to autocracies

I want to read this but the link doesn't work

Thanks for pointing this out. It should work now.
[Crosspost] Relativistic Colonization

If it is to gather resources en route, it must accelerate those resources to its own speed. Or alternatively, it must slow down to a halt, pick up resources and then continue. This requires a huge expenditure of energy, which will slow down the probe.

Bussard ramjets might be viable. But I'm skeptical that they could be faster than the propulsion ideas in the Sandberg/Armstrong paper. Anyway, you seem to be talking about spacecraft that will consume planets, not Bussard ramjets.

Going from 0.99c to 0.999c requires an extraordinary amount of additional energy ... (read more)
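The scale of that energy cost can be made concrete with the Lorentz factor. A quick sketch of my own (standard special relativity; the only inputs are the two speeds quoted above):

```python
import math

def gamma(beta: float) -> float:
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

# Relativistic kinetic energy per unit rest mass, in units of c^2: (gamma - 1)
ke_99 = gamma(0.99) - 1     # ~6.1
ke_999 = gamma(0.999) - 1   # ~21.4

print(f"gamma(0.99c)  = {gamma(0.99):.2f}")
print(f"gamma(0.999c) = {gamma(0.999):.2f}")
print(f"kinetic energy ratio = {ke_999 / ke_99:.1f}x")
```

Going from 0.99c to 0.999c multiplies the kinetic energy per unit of payload mass by roughly 3.5, before even counting the energy spent accelerating propellant.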

1. It's true that making use of resources while matching the probe's speed requires a huge expenditure of energy, by the transformation law of energy-momentum if for no other reason. If the remaining energy is insufficient then the probe won't be able to go any faster. Even if there's no more efficient way to extract resources than full deceleration/re-acceleration, I expect this could be done infrequently enough that the probe still maintains an average speed of >0.9c. In that case the main competitive pressure among probes would be minimizing the number of stop-overs.
2. The highest speed considered in the Armstrong/Sandberg paper is 0.99c, which is high enough for my qualitative picture to be relevant. Re-skimming the paper, I don't see an explicitly stated reason why the limit is there, although I note that any higher speed won't affect their conclusion about the Fermi paradox and potential past colonizers visible from Earth. The most significant technological reasons for this limit I see them address are the energy costs of deceleration and damage from collisions with dust particles, and neither seems to entirely exclude faster speeds.
3. Yes, at such high speeds optimizing lateral motion becomes very important, and the locations of concentrated sources of energy can affect the geometry of the expansion frontier. For a typical target I'm not sure whether the optimal route would involve swerving to a star or galaxy, or whether the interstellar dust and dark matter in the direct path would be sufficient. For any particular route I expect a probe to compete with other probes taking a similar route, so there will still be competitive pressure to optimize speed over 0.99c if technologically feasible.
4. A lot of what I'm saying remains the same if the maximal technologically achievable speed is subrelativistic. In other ways such a picture would be different, and in particular th
Are we living at the most influential time in history?

I think this argument implicitly assumes a moral objectivist point of view.

I'd say that most people in history have been a lot closer to the hinge of history when you recognize that the HoH depends on someone's values.

If you were a hunter-gatherer living in 20,000 BC then you cared about raising your family and building your weir and you lived at the hinge of history for that.

If you were a philosopher living in 400 BC then you cared about the intellectual progress of the Western world and you lived at the hinge of history for that.

If you were a theologian ... (read more)

Big List of Cause Candidates

Thanks for the comments. Let me clarify about the terminology. What I mean is that there are two kinds of "pulling the rope harder". As I argue here:

The appropriate mindset for political engagement is described in the book Politics Is for Power, which is summarized in this podcast. We need to move past political hobbyism and make real change. Don’t spend so much time reading and sharing things online, following the news and fomenting outrage as a pastime. Prioritize the acquisition of power over clever dunking and purity politics. See yourself as an inside

... (read more)
Fair enough; I've changed this to "Ideological politics" pending further changes.
Big List of Cause Candidates

You could add this post of mine to space colonization: An Informal Review of Space Exploration - EA Forum

I think the 'existential risks' category is too broad and some of the things included are dubious. Recommender systems as existential risk? Autonomous weapons? Ideological engineering? 

Finally, I think the categorization of political issues should be heavily reworked, for various reasons. This kind of categorization is much more interpretable and sensible:

... (read more)
I agree that the categorization scheme for politics isn't that great. But I also think that there is an important difference between "pulling one side of the rope harder" (currently under "culture war"; say, putting more resources into the US Senate races in Georgia) and "pulling the rope sideways", say, getting money out of politics and into charity[^1]. Note that a categorization scheme which distinguishes between the two doesn't have to take a position on their value. But I do want the categorization scheme to distinguish between the two clusters because I later want to be able to argue that one of them is ~worthless, or at least very unpromising. Simultaneously, I think that other political endeavors have been tainted by association with more "pulling the rope harder" kinds of political proposals, and making the distinction explicit makes it more apparent that other kinds of political interventions might be very promising.

Your proposed categorization seems to me to have the potential to obfuscate the difference between topics which are heavily politicized along US partisan lines and those which are not. For example, I don't like putting electoral reform (i.e., using more approval voting, which would benefit candidates near the center with broad appeal) and statehood for Puerto Rico (which would favor Democrats) in the same category.

I'll think a little bit about how and whether to distinguish between raw categorization schemes (which should presumably be "neutral") and judgment values or discussions (which should presumably be separate). One option would be to have a neutral third party (e.g. Aaron Gertler) choose the categorization scheme. Lastly, I wanted to say that although it seems we have strong differences of opinion on this
1. Added the Space Exploration Review. Great post, btw, of the kind I'd like to see more of for other speculative or early-stage cause candidates.
2. I agree that the existential risks category is too broad, and that I was probably conflating it with dangers from technological development. Will disambiguate.
The case for delaying solar geoengineering research

I don't think the pernicious mitigation obstruction argument is sound. It would be equally plausible for just about any other method of addressing air pollution. For instance, if we develop better solar power, that will reduce the incentive for countries and other actors to work harder at implementing wind power, carbon capture, carbon taxes, tree planting, and geoengineering. All climate solutions substitute for each other to the extent that they are perceived as effective. But we can't reject all climate solutions for fear that they will discourage other... (read more)

American policy platform for total welfare

My main point: By not putting "EA" into the name of your project, you get free option value: If you do great, you can still always associate with EA more strongly at a later stage; if you do poorly, you have avoided causing any problems for EA. 

I've already done this. I have shared much of this content for over a year without having this name and website. My impression was that it didn't do great nor did it do poorly (except among EAs, who have been mostly positive). One of the problems was that some people seemed confused and suspicious because they ... (read more)

American policy platform for total welfare

I think there are countervailing reasons in favor of doing so publicly, described here

Additionally, prominent EA organizations and individuals have already displayed enough politically contentious behavior that a lot of people already perceive EA in certain political ways. Restricting politically contentious public EA behavior to those few orgs and individuals maximizes the problems of 1) and 2), whereas having a wider variety of public EA points of view mitigates them. I'd use a different branding if I were less convinced that politically engaged audiences already perceive EA as having political aspects.

(As always, personal opinion, not my employer's.)

While I agree that it could be good for EAs to become more politically active, I don't think there are good arguments for an EA branding.

My main point: By not putting "EA" into the name of your project, you get free option value: If you do great, you can still always associate with EA more strongly at a later stage; if you do poorly, you have avoided causing any problems for EA. By choosing an EA branding for your project, you selectively increase the downside risk, but not the upside/benefits.

Quoting from t... (read more)

EA politics mini-survey results

The Civic Handbook presents a more simplified view on the issue that sticks to making the least controversial claims that nearly all EAs should be able to get on board with. My full justification for why I believe we should maintain the defense budget, written earlier this year, is here: 

Taking Self-Determination Seriously

I will think more about Brexit (noting that the EU is a supranational organization not a nation-state) but keep in mind that under the principle of self-determination, Scotland, which now would likely prefer to leave the UK and stay in the EU, should be allowed to do so.

Why those who care about catastrophic and existential risk should care about autonomous weapons

I welcome any evidence you have on these points, but your scenario seems to a) assume limited offensive capability development, b) willingness and ability to implement layers of defensive measures at all "soft" targets, c) focus only on drones, not many other possible lethal AWSs, and d) still produce a considerable amount of cost--both in countermeasures and in psychological costs--that would seem to suggest a steep price to be paid to have lethal AWSs even in a rosy scenario.

I'm saying there are substantial constraints on using cheap drones to attack civili... (read more)

The Case for Space: A Longtermist Alternative to Existential Threat Reduction

You may like to see this post; I agree in theory but don't think that space programs currently are very good at accelerating long-run colonization.

Why those who care about catastrophic and existential risk should care about autonomous weapons

Lethal autonomous weapons systems are an early test for AGI safety, arms race avoidance, value alignment, and governance

OK, so this makes sense and in my writeup I argued a similar thing from the point of view of software development. But it means that banning AWSs altogether would be harmful, as it would involve sacrificing this opportunity. We don't want to lay the groundwork for a ban on AGI, we want to lay the groundwork for safe, responsible development. What you actually suggest, contra some other advocates, is to prohibit certain classes but not oth... (read more)

Thanks for your replies here, and for your earlier longer posts that were helpful in understanding the skeptical side of the argument, even if I only saw them after writing my piece. As replies to some of your points above:

It is unclear to me what you suggest we would be "sacrificing" if militaries did not have the legal opportunity to use lethal AWS. The opportunity I see is to make decisions, in a globally coordinated way and amongst potentially adversarial powers, about acceptable and unacceptable delegations of human decisions to machines, and to enforce those decisions. I can't see how success in doing so would sacrifice the opportunity. Moreover, a ban on all autonomous weapons (including purely defensive nonlethal ones) is very unlikely and not really what anyone is calling for, so there will be plenty of opportunity to "practice" on non-lethal AWSs, defenses against AWSs, etc., on the technical front; there will also be other opportunities to "practice" on what life-and-death decisions should and should not be delegated, for example in judicial review.

Though I understand why you have drawn a connection to the Ottawa Treaty because of its treatment of landmines, I believe this is the wrong analogy for AWSs. I believe the Biological Weapons Convention is more apt, and I think the answer would be "yes," we have learned something about international governance and coordination for dangerous technology from the BWC. I also believe that the agreement not to use landmines is a global good.

I am not sure why you are confident it would be easier to reach binding agreements on these suggested matters. To the extent that it is possible, it may suggest that there is little value to be gained. What is generally missing from these is that there is little popular or political will to create an international agreement on e.g. internet connectivity. It's not as high stakes or consequential as lethal AWSs, and to first approximation, nobody cares. The point is to show agre
Avoiding Munich's Mistakes: Advice for CEA and Local Groups

I don't have any arguments over cancel culture or anything general like that, but I am a bit bothered by a view that you and others seem to have. I don't consider Robin Hanson an "intellectual ally" of the EA movement; I've never seen him publicly praise it or make public donation decisions, but he has claimed that do-gooding is controlling and dangerous, that altruism is all signaling with selfish motivations, that we should just save our money and wait for some unspecified future date to give it away, and that poor faraway people are less likely to... (read more)

EA's abstract moral epistemology
Answer by kbog · Oct 22, 2020

The idea that she and some other nonconsequentialist philosophers have is that if you care less about faraway people's preferences and welfare, and care more about stuff like moral intuitions, "critical race theory" and "Marxian social theory" (her words), then it's less abstract. But as you can see here, they're still doing complicated ivory tower philosophy that ordinary people do not pick up. So it's a rather particular definition of the term 'abstract'. 

Let's be clear: you do not have to have abstract moral epistemology to be an EA. You can ignore... (read more)

New and improved Candidate Scoring System

Thank you for your interest. So, I'm moving everything to my website now. Previously I had taken a stab at a few House and Senate races, but now that the primaries are over, there's really no point in that - I'm instead working on a general comparison of Republicans vs Democrats, and the conclusion will almost certainly be a straightforward recommendation to vote D for all or nearly all congressional races.

If people are curious about which races they should help with donations, I think it's generally fine to focus on key pro-Dem opportunities like this an... (read more)

Tax Havens and the case for Tax Justice

There's a problem with your importance metric: the importance of malaria funding should be measured not by how much it costs but by how much good it does. $1B of malaria funding is much more important than $1B of , right? If we imagine that all the raised revenue gets used for fighting malaria, then it makes sense, but of course that is not a realistic assumption.

I think that raising tax revenue for the US (and maybe some other countries) is not as important as it seems at first glance due to our flexibility with the Federal Reserve and record low int... (read more)

Super-exponential growth implies that accelerating growth is unimportant in the long run

I'm pretty confident that accelerating exponential and never-ending growth would be competitive with reducing x-risk. That was IMO the big flaw with Bostrom's argument (until now). If that's not intuitive, let me know and I'll formalize a bit.
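Since I offered to formalize: here is one minimal toy sketch (my own illustration; the growth rate `g`, the hazard rate `h`, and all the numbers are hypothetical, chosen only to make the point visible). If welfare grows exponentially at rate g while civilization faces a constant per-period extinction hazard h, expected long-run welfare depends only on the gap g − h, so a small bump to growth is worth exactly as much as an equal small cut to the hazard:

```python
import math

def expected_welfare(g: float, h: float, periods: int = 10_000) -> float:
    """Sum over periods t of welfare e^{g t} weighted by survival
    probability e^{-h t}. Converges when h > g."""
    return sum(math.exp(g * t) * math.exp(-h * t) for t in range(periods))

base = expected_welfare(g=0.010, h=0.020)
faster_growth = expected_welfare(g=0.011, h=0.020)  # growth up by 0.1pp
lower_risk = expected_welfare(g=0.010, h=0.019)     # hazard down by 0.1pp

# Each term depends only on (g - h), so the two marginal interventions
# raise expected welfare by the same amount in this toy setting.
print(base, faster_growth, lower_risk)
```

In this sketch a 0.1-percentage-point increase in g and a 0.1-point reduction in h yield identical gains, which is the sense in which never-ending growth acceleration is "competitive" with x-risk reduction, under the strong assumptions that growth never stops and the hazard rate is constant.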
