Habryka's Comments

Thoughts on The Weapon of Openness
though it's important to note that that technique was developed at IBM, and then given to the NSA, and not developed internally at the NSA.

So I think this is actually a really important point. I think by default the NSA can contract out various tasks to industry professionals and academics and on average get back results that are better than what they could have done internally. The differential cryptanalysis situation is a key example of that. IBM could instead have been contracted by some other group and developed the technique for them, which means that the NSA had basically no lead in cryptography over IBM.

Thoughts on The Weapon of Openness

Even if all of these turn out to be quite significant, that would at most imply a lead of something like 5 years.

The elliptic curve one doesn't strike me at all as a case where the NSA had a big lead. You are probably referring to this backdoor:

https://en.wikipedia.org/wiki/Dual_EC_DRBG

This backdoor was identified by security researchers basically immediately, in the same year it was embedded in the standard. As you can read in the Wikipedia article:

Bruce Schneier concluded shortly after standardization that the "rather obvious" backdoor (along with other deficiencies) would mean that nobody would use Dual_EC_DRBG.

I can't really figure out what you mean by the DES recommended magic numbers. There were some magic numbers in DES (the S-box constants) that were chosen to defend against the differential cryptanalysis technique, which I do agree is probably the single strongest example we have of an NSA lead, though it's important to note that that technique was developed at IBM, and then given to the NSA, not developed internally at the NSA.

To be clear, a 30 (!) year lead seems absolutely impossible to me. A 3 year broad lead seems maybe plausible to me, with a 10 year lead in some very narrow specific subset of the field that gets relatively little attention (in the same way research groups can sometimes pull ahead in a specific subset of the field that they are investing heavily in).

I have never talked to a security researcher who would consider 30 years remotely plausible. The usual impression that I've gotten from talking to security researchers is that the NSA has some interesting techniques and probably a variety of backdoors, which they primarily installed not by technological advantage but by political maneuvering, but that in overall competence they are probably behind the academic field, and almost certainly not very far ahead.

Thoughts on The Weapon of Openness
past leaks and cases of "catching up" by public researchers that they are roughly 30 years ahead of publicly disclosed cryptography research

I have never heard this and would be extremely surprised by it. Like, I'm willing to take a 15:1 bet on this, at least. Probably more.

Do you have a source for this?

How do you feel about the main EA facebook group?

Do you have the same feeling about comments on the EA Forum?

Request for Feedback: Draft of a COI policy for the Long Term Future Fund
Separately, you mentioned OpenPhil's policy of (non-) disclosure as an example to emulate. I strongly disagree with this, for two reasons.

This sounds a bit weird to me, given that the above is erring quite far in the direction of disclosure.

The specific dimension of the Open Phil policy that I think has strong arguments going for it is being hesitant with recusals. I really want to continue being very open about our conflicts of interest, and wouldn't currently advocate for emulating Open Phil's policy on the disclosure dimension.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund
I didn't see any discussion of recusal because the fund member is employed or receives funds from the potential grantee?

Yes, that should be covered by the CEA fund policy we are extending. Here are the relevant sections:

Own organization: any organization that a team member
is currently employed by
volunteers for
was employed by at any time in the last 12 months
reasonably expects to become employed by in the foreseeable future
does not work for, but that employs a close relative or intimate partner
is on the board of, or otherwise plays a substantially similar advisory role for
has a substantial financial interest in

And:

A team member may not propose a grant to their own organization
A team member must recuse themselves from making decisions on grants to their own organizations (except where they advocate against granting to their own organization)
A team member must recuse themselves from advocating for their own organization if another team member has proposed such a grant
A team member may provide relevant information about their own organization in a neutral way (typically in response to questions from the team’s other members).

Which covers basically that whole space.

Note that that policy is still in draft form and not yet fully approved (and there are still some incomplete sentences in it), so we might want to adjust our policy above depending on changes in the general CEA fund policy.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Responding on a more object-level:

As an obviously extreme analogy, suppose that someone applying for a job decides to include information about their sexual history on their CV.

I think this depends a lot on the exact job, and the nature of the sexual history. If you are a registered sex-offender, and are open about this on your CV, then that will overall make a much better impression than if I find that out from doing independent research later on, since that is information that (depending on the role and the exact context) might be really highly relevant for the job.

Obviously, including potentially embarrassing information in a CV without it having much purpose is a bad idea; it mostly signals various forms of social obliviousness, and distracts from the actually important parts of your CV, which pertain to your professional experience and the factors that will likely determine how well you will do at the job.

But I'm inclined to agree with Howie that the extra clarity you get from moving beyond 'high-level' categories probably isn't all that decision-relevant.

So, I do think this is probably where our actual disagreement lies. Of the most concrete conflicts of interest that have given rise to abuses of power that I have observed, both within the EA community and in other communities, more than 50% were the result of romantic relationships, and were basically completely unaddressed by the high-level COI policies that the relevant institutions had in place. Most of these are in weird grey areas of confidentiality, but I would be happy to talk to you about the details if you send me a private message.

I think being concrete here is actually highly action-relevant, and I've seen the lack of concreteness in company policies have very large and concrete negative consequences for those organizations.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund
less concrete terms is mostly about demonstrating an expected form of professionalism.

Hmm, I think we likely have a disagreement here: I think at least a significant chunk of professionalism norms are the result of individuals trying to limit accountability for themselves and the people around them. I am generally not a huge fan of large fractions of professionalism norms (which is not by any means a rejection of all professionalism norms, just of specific subsets of them).

I think newspeak is a pretty real thing, and the adoption of language that is broadly designed to obfuscate and limit accountability is a real phenomenon. I think that phenomenon is pretty entangled with professionalism. I agree that there is often an expectation of professionalism, but I would argue that exactly that expectation is what often causes obfuscating language to be adopted. And I think this issue is important enough that just blindly adopting professional norms is quite dangerous and can have very large negative consequences.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund
You could do early screening by unanimous vote against funding specific potential grantees, and, in these cases, no COI statement would have to be written at all.

Since we don't publicize rejections, or even who applied to the fund, I wasn't planning to write any COI statements for rejected applicants. That's a bit sad, since it kind of leaves a significant number of decisions without accountability, but I don't know what else to do.

The natural time for grantees to object to certain information being included would be when we run our final writeup past them. They could then request that we change the writeup, or ask us to rerun the vote with certain members excluded, which would make the COI statements unnecessary.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

This is a more general point that shapes my thinking here a bit, not directly responding to your comment.

If somebody clicks on a conflict of interest policy wanting to figure out if they generally trust the LTF and they see a bunch of stuff about metamours and psychedelics, that's going to end up incredibly salient to them, and that's not necessarily making them more informed about what they actually cared about. It can actually just be a distraction.

I feel like the thing that is happening here makes me pretty uncomfortable, and I really don't want to further incentivize this kind of assessment of stuff.

A related concept in this space seems to me to be the Copenhagen Interpretation of Ethics:

The Copenhagen Interpretation of quantum mechanics says that you can have a particle spinning clockwise and counterclockwise at the same time – until you look at it, at which point it definitely becomes one or the other. The theory claims that observing reality fundamentally changes it.
The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster. I don’t subscribe to this school of thought, but it seems pretty popular.

I feel like there is a similar thing going on with being concrete about stuff like sexual and romantic relationships (which obviously have massive consequences in large parts of the world). And maybe more broadly with having this COI policy in the first place. My sense is that we could successfully avoid a lot of criticism by just not having any COI policy, or having a really high-level and vague one, because any policy we do have clearly signals that we have looked at the problem, and are now to blame for any consequences related to it.

More broadly, I just feel really uncomfortable with having to write all of our documents to make sense on a purely associative level. As a donor, I would be really excited to see a COI policy as concrete as the one above, similarly to how all the concrete mistakes pages on EA org websites make me really excited. I feel like making the policy less concrete trades off getting something right (and as such being quite exciting to people like me) in favor of being more broadly palatable to some large group of people, and maybe making a bit fewer enemies. But that feels like it's usually going to be the wrong strategy for a fund like ours, where I am most excited about having a small group of really dedicated donors who are really excited about what we are doing, much more than about being broadly palatable to a large audience without anyone being particularly excited about it.
