All of Gentzel's Comments + Replies

This is a really good summary. The main area where I have remaining uncertainty about these types of arguments is whether technology decisively favors finders or not. There has been a lot of analysis implying that modern nuclear submarines are essentially infeasible to track and destroy, and others argue that AI-enabled modeling and simulation could be applied as a "fog of war" machine to simulate an opponent's sensor system and optimize the concealment of nuclear forces.

Nevertheless, without more detail these sorts of counter... (read more)

On flash-war risks, I think the key variable is what the actual forcing function on decision speed is, and the key outcome you care about is decision quality.

Fights where escalation is more constrained by decision-making speed than by weapon speed are where we should expect flash-war dynamics. These could include short-range conflicts, cyber wars, the use of directed energy weapons, influence operations/propaganda battles, etc.

For nuclear conflict, unless some country gets extremely good at stealth, strategic deception, and synchronized mass incapacitation/... (read more)

6 · christian.r · 2y
Thanks for engaging so closely with the report! I really appreciate this comment. Agreed on the weapon speed vs. decision speed distinction: the physical limits to the speed of war are real. I do think, however, that flash wars can make non-flash wars more likely (e.g., a cyber flash war unintentionally intrudes on NC3 system components, which gets misinterpreted as preparation for a first strike). I should probably have spelled that out more clearly in the report. I think we actually agree on the broader point: it is possible to leverage autonomous systems and AI to make the world safer, to lengthen decision-making windows, and to make early warning and decision-support systems more reliable. But I don't think that's a given; it depends on good choices. The key questions for us are therefore: How do we shape the future adoption of these systems to make sure that's the world we're in? How can we trust that our adversaries are doing the same thing? How can we make sure that our confidence in some of these systems is well-calibrated to their capabilities? That's partly why a ban probably isn't the right framing. I also think this exchange illustrates why we need more research on the strategic stability questions. Thanks again for the comment!

The cluster munitions divestment example seems plausibly somewhat more successful in the West, but not elsewhere (e.g. the companies that remain on the "Hall of Shame" list). I'd expect something similar here if the pressure against LAWs were narrow (e.g. against particular types with low strategic value). Decreased demand does seem more relevant than decreased investment though.

If LAWs are stigmatized entirely, and countries like the U.S. don't see a technological path to sustaining their advantage, then you might not get the same degree of influence in the... (read more)

Given the time it takes to form relationships with nodes in decision making networks, and the difficulty of reducing uncertainty from the outside, at some point it makes sense to either aim people at such jobs or to make friends with people in them. That lobbying and working in government aren't unique tactics or roles in society doesn't matter if they are neglected by those who are capable of pursuing similar goals: different organizations compete for influence in different directions. Early investment to enable direct interaction with decision ... (read more)

I am not sure we should focus more on this area. I just want to make sure that, in general, people who go into policy or advocacy don't propagate bad ideas or discredit EA with important people who would otherwise be aligned with our goals in the future.

I do think that knowing the history of transformative technologies (and the policies that affected how they were deployed) will have a lot of instrumental value for EAs trying to make good decisions about things like gene editing and AI.

You seem to be missing the part where most people are disagreeing with the post in significant ways.

F-15s and MRAPs still have to be operated by multiple people, which requires incentive alignment among many parties. Some autonomous weapons in the future may be able to self-sustain and self-repair (or be part of a self-sustaining autonomous ecosystem), which would mean they can be used while being aligned with fewer people's interests.

A man-at-arms wouldn't be able to take out a whole town by himself if more than a few peasants coordinate with pitchforks, but depending on how LAWs are developed, a very small group of people could dominate the world. ... (read more)

That is the point.

The reason it is appropriate to call this ethical reaction time, rather than just reaction time, is that the focus of planning and optimization is on ethics and future goals. To react quickly to an opportunity that is hard to notice, you have to be looking for it.

Technical reaction time is a better name in some ways, but it implies too narrow a focus, while just reaction time implies too wide a focus. There is probably a better name, though.

I just added some examples to make it a bit more concrete.

I think you may be misunderstanding what I mean by ethical reaction time, but I may change the post to reduce the confusion. I think adding examples would make the post a lot more valuable.

Basically, what I mean by ethical reaction time is just being able to make ethical choices as circumstances unfold and not be caught unable to act, or acting in a bad way.

Here are a few examples, some hypothetical, some actual:

Policy:

  • One can imagine the Reach Every Mother and Child Act might have passed last year if a few congressmen were more responsive in adjusting i

... (read more)
2 · MichaelDello · 7y
For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?
0 · Gentzel · 8y
This is the referenced program: http://www.bia.gov/WhoWeAre/BIA/OIS/HumanServices/HousingImprovementProgram/

Great summary of why I hate when people walk across the road instead of running, or when people space themselves out instead of clustering so that no cars can get by.

This is my current heuristic, though if we learn unexpected things from feedback I could imagine updating in a different direction:

If positive feedback (successful comment) --> Try to restart project

If really good negative feedback --> Make a better lessons learned post and propose a different type of project

If ambiguous negative feedback --> Recommend people avoid experimenting with this type of policy action and focus on other policy interventions.
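The branching heuristic above can be sketched as a simple lookup. This is purely illustrative: the function name, feedback labels, and response strings are all hypothetical paraphrases of the three cases, not a committed process.

```python
def next_step(feedback: str) -> str:
    """Map project feedback to a planned response (labels are illustrative)."""
    plan = {
        "positive": "Try to restart the project.",
        "useful_negative": "Write a better lessons-learned post and propose a different type of project.",
        "ambiguous_negative": "Recommend avoiding this type of policy action; focus on other interventions.",
    }
    return plan[feedback]


print(next_step("positive"))  # Try to restart the project.
```

As the surrounding comment notes, this mapping could itself be updated if the feedback teaches us something unexpected.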

In an early version of the sheet we had multiple columns subjectively assessing things like the replaceability of comments, how high impact an influential comment could be, and our sense of how probable influence was. Each person on the team had their own column for ranking importance.

In the current sheet, these were merged together to make a rough prioritization and remove clutter from the sheet for those who help us. That being said, this prioritization did not take into account our current team ability to produce comments, or the fact that easier comments may be good for feedback. This is why we submitted a low importance comment as a feedback test.

For those who are interested, this is our current blog: https://eapolicy.wordpress.com/

We will try to keep it fairly updated.

I think it is most likely we will be backing up good policies that some regulators want. New policies are hard, and a lot of requests for comments come in a sort of binary way: "should we implement policy x.1 or x.2?"

I currently have a google doc that I have been using to record hours, mistakes, lessons learned, and observations. I do think I should write it up as we make a blog.

Writing down problems has seemed to function similarly to rubber ducking, though: trying to get certain problems into words can sometimes highlight a solution, and that has been useful.

I think we will start blogging in a limited capacity about regulations we are seriously considering working on, and some that we considered and then dismissed. We probably aren't going to blog about every regulation we look at, since there are so many. Some comments are likely to be far more impactful than others; however, the comments likely to have the most impact also tend to have slower feedback and no certain nearby deadlines for implementation.

Our current priority list seems to be:

- Network early to get expert feedback and assistance
- P... (read more)

As for spreadsheets, we could go through a cluster-thinking way of producing estimates, but I am under the impression this would take a lot more time per person, and then, when comparing spreadsheets at the end, we'd be finding errors that would have been easier to handle if we had worked together and gotten faster feedback earlier.

There certainly is value to avoiding groupthink though. Overall I do think using multiple sequential techniques could be a rather rigorous way to evaluate something, and make a very good comment, but we are also trying to get us... (read more)

Some of these questions require semi-detailed responses, so I will respond with a few different comments. Richard had some examples/anecdotes about the level of impact policy comments could have:

"A good recent example of FDA making major changes as a result of public comment is the Food Safety Modernization Act (FSMA) Produce Safety rule. This rule was extensively changed as a result of feedback from the public, mostly the affected farmers. Some of the comments were merely self-interested lobbying, but some pointed out where FDA's lack of understandin... (read more)

A friend made the argument to me yesterday that large organizations face high costs in finding such investments, since people may try to scam them or just compete for their funding by exaggerating. As an individual who already has knowledge of these small-scale circumstances, you can spend time and money on such small projects without facing similar risks. This might be a comparative advantage for small donors who are good at evaluating the people working on such projects.

I agree that quality matters, but it does help accountability for progress to be measur... (read more)

1 · tomstocker · 9y
Yes. But isn't it also about how easy something is to treat? Small minnows definitely have some advantages (Small Is Beautiful has a lot of arguments in this direction). The issue is that we haven't found little charities that are making a difference as well or as cheaply as SCI/AMF. I think that, actually, scale is very important unless you're doing knowledge-based stuff. Economies of scale exist, and are often more important than the kinds of advantages small players have?

If that scale were achieved, I think we would be able to make a political party. When you have a large number of people trying to be effective, the actions that we consider effective now may be the type that are replaceable in such an environment.

I suspect that such a criticism does apply. I remember a friend criticizing the way the Bill and Melinda Gates Foundation funded charter schools and scholarships as ineffective. You can see some of the grants they have awarded here.

Does anything have impersonal and objective force? I am rather confused as to what you are comparing to that is better. If you are just talking about forcing people to believe things, that doesn't necessarily have anything to do with what is true. If you were just comparing to Rawls, why should I accept Rawls' formulation of the right as being prior to or independent from the good? You can use Rawls' veil of ignorance thought experiment to support utilitarianism (1), so I don't see how Rawls can really be a counter-objection, or specifically how Rawls' argume... (read more)

These are heuristics for specialized cases. In most cases you can do far more good elsewhere than you can do for your family. The case with Mill is one where you are developing a child to help many more people than you could; the case with parents is likewise one where you are helping them to help many others via donating more than you could on your own. If we are being Kantian about this, the parents still aren't being used merely as a means, because their own happiness matters and is part of the consideration.

In cases where helping your parents helps onl... (read more)

0 · [anonymous] · 9y
Most think that one's reason for action should be one's actual reason for action, rather than a sophistic rationalisation of a pre-given reason. There's no reason to adopt those 'axioms' independent of adopting those axioms; they certainly, as stated, have no impersonal and objective force. Insofar as that reason is mere intuition, which I see no reason for respecting, then clearly your axioms are insufficient with regard to any normal person. Indeed, the entire post-Rawlsian establishment of Anglophone political theory is based exactly on the comparatively moving intuition of placing the right prior to the good. "In cases where helping your parents helps only your parents, why not help someone else who you could help more effectively?" That rhetorically begs the question of the evaluative content of help, or that helping persons is of especial value.

Earlier from Peter Hurford:

“I think I recall GiveWell agreeing that some of the Gates Foundation work is higher impact than GiveWell top charities, but it's already exceeded room for more funding (because of the Gates Foundation). Some of the vitamin fortification stuff seems like good examples, though GiveWell has recently recognized some vitamin fortification charities as standout charities.”

1 · Giles · 9y
Hasn't GiveWell also said that large orgs tend to do so many different things that some end up being effective and others not? Does this criticism apply to the Gates Foundation?

Here are some examples of interesting things the Bill and Melinda Gates Foundation has looked into:

(1) Inexpensive lasers which only target the mosquitoes that can carry malaria

(2) Producing inexpensive meat substitutes that actually taste like meat

(3) Malaria vaccines and education about children’s health

In some cases "special duties" to family can be derived as a heuristic for utilitarianism. As a family member, you probably aren't replaceable, families tend to expect help from their members, and families are predisposed to reciprocate altruism: for many people there is a large chance of high negative utility both to yourself and family if you ignore your family. The consequences to you could be negative enough to make you less effective as an altruist in general.

For example, if you are a college student interested in EA and your parents stop ... (read more)

0 · [anonymous] · 9y
This strikes me as a highly wishful and ad hoc adaptation of utilitarianism to pre-given moral dispositions, and personally, as something of a reductio. Are you honestly suggesting the following as an inter-personal or intra-personal justification?: "Taking care of parents when they get older might also seem fairly non-consequentialist, but if there is a large inheritance at stake it could be the case that taking good care of your family is the highest utility thing for you to do." It follows, I suppose, if there is no inheritance at stake, that you should let them rot. How do you justify utilitarianism? I can only hope not via intuitionism.
1 · Tom_Ash · 9y
True, but I'd assume you'd agree that non-consequentialists who allow for special duties have different, and potentially stronger, more overriding reasons. Indeed, he had a breakdown which he put down to his upbringing, though I don't know if it was primarily due to the utilitarian aspects of this. If I recall correctly, the (deeply uncharitable) parody of such an upbringing in Dickens' Hard Times was based on Mill.

I have seen such literature, but you can get around some of the looking-back bias problems by recording how you feel in the moment (provided you aren't pressured to answer dishonestly). I am sure a lot of people have miserable lives, but I do think that when I believe I have been fairly happy for the past 4 years, it is very unlikely the belief is false (because other people also thought I was happy too).

I do think the concern about the accuracy of beliefs about experience warrants finding a better way to evaluate people's happiness in general, though. I think... (read more)

0 · tylermjohn · 9y
Yeah, I think you're all-around right. I'm less sure that my life over the past two years has been very good (my memory doesn't go back much farther than that), and I'm very privileged and have a career that I enjoy. But that gives me little if any reason to doubt your own testimony.

Due to my parents' schedules, I once got stuck with my younger brother (11 years old at the time) at a Less Wrong meetup and the party of an EA friend in DC. I felt very awkward trying to keep my brother tame and entertained without detracting from the surrounding conversations, but he did extraordinarily well compared to other social environments. In my experience, people involved in effective altruism seem to be fairly good at handling children, even ones with special needs. That being said, the difference between an infant and an 11-year-old is considerable.

I'm very glad you made the bullet-point list at the end; it is very useful. It applies fairly well to kids of most age groups.

I like this idea, and have done it before, but it is good if the process can be sped up. Being more responsive increases the likelihood that the useful things you post will get read by those you are responding to. Some forums boot people for not explaining their arguments fast enough.

http://lifehacker.com/utilize-the-steel-man-tactic-to-argue-more-effectivel-1632402742

An advantage to steel-manning an opponent (arguing against the best version of their argument) is that you get to see if they agree with your steel-man. This leads to many possible outcomes, and almost all are good for information within the debate. If the person disagrees with your steel man, they may rephrase their argument in a stronger way than your steel man, which may convince you of their position and cause you to change your mind. If they agree, you know exactly w... (read more)

https://www.facebook.com/groups/296376750542661/

The group above is looking to make a house or two in the DC metro area. The first house will likely be organized this year in College Park, Maryland.

If people select efficient enough charities, the benefits might outweigh the damage of deadweight loss and value destruction via higher taxes. The thing is, this charity tax doesn't seem to guarantee donation matching; it just increases the likelihood that people will donate to something.

Maybe I am being confused by ambiguity, but the situation I imagined was that the government increases income tax between 1% and 10% and that the pool of money generated by this is given back to people who donate as a tax credit. If I donate $1,000 to AMF, I get $1,000 back from the government: but no guarantee that others will donate to AMF.
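As a sanity check on the mechanism as I understand it, here is a minimal sketch. All of the numbers except the $1,000 AMF donation are hypothetical, and the function and its assumptions (a flat surcharge, dollar-for-dollar reimbursement from the pool) are my reading of the proposal, not a definitive model of it.

```python
def charity_tax_credit(incomes, tax_rate, donations):
    """Sketch: an income-tax surcharge builds a pool, and donors are
    reimbursed dollar-for-dollar from it. Illustrative only."""
    pool = sum(incomes) * tax_rate          # surcharge revenue
    credits = dict(donations)               # each donor gets their donation back
    remaining = pool - sum(credits.values())  # unclaimed pool
    return credits, remaining


credits, remaining = charity_tax_credit(
    incomes=[50_000, 60_000, 90_000],  # three hypothetical taxpayers
    tax_rate=0.01,                     # a 1% surcharge
    donations={"me": 1_000},           # my $1,000 AMF donation
)
# My credit equals my donation, but nothing in the mechanism directs
# where anyone else's money goes, or whether they donate at all.
```

The point the paragraph makes falls out directly: the reimbursement is unconditional on the recipient charity, so the surcharge guarantees a pool but not matching toward any particular effective charity.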

During the policy comment project by the UMD effective altruism group, we found that in some government agencies there actually is a degree of cost effectiveness analysis and meritocracy. This leads me to expect that the government will do slightly better than the population at deciding where to give in a more direct manner. The government is less likely to actually go through with something like the ALS ice bucket challenge, but when you have this sort of tax system it seems to me that such things might get economically damaging levels of funding, and that this will discourage future donations and charity in general.

Effective altruism needs to be much more popular for this tax idea to have a chance of being a good thing.