Habryka's Comments

Examples for impact of Working at EAorg instead of ETG

This also seems right to me. We roughly try to distribute all the money we have in a given year (with some flexibility between rounds), and aren't planning to hold large reserves. So based on our decisions alone, we couldn't ramp up our grantmaking just because better opportunities arise.

However, I can imagine donations to us increasing if better opportunities arise, so I do expect there to be at least some effect.

Linch's Shortform
11. I gave significant amounts of money to the Long-Term Future Fund (which funded Catalyst), so I'm glad Catalyst turned out well. It's really hard to forecast the counterfactual success of long-reach plans like this one, but naively this looks like the right approach to help build out the pipeline for biosecurity.

I am glad to hear that! I sadly didn't end up having the time to go, but I've been excited about the project for a while.

Thoughts on The Weapon of Openness
though it's important to note that that technique was developed at IBM, and then given to the NSA, and not developed internally at the NSA.

So I think this is actually a really important point. I think by default the NSA can contract out various tasks to industry professionals and academics, and on average get back results that are better than what they could have produced internally. The differential cryptanalysis situation is a key example of that. IBM could instead have been contracted by some other group and developed the technology for them, which means that the NSA had basically no lead in cryptography over IBM.

Thoughts on The Weapon of Openness

Even if all of these turn out to be quite significant, that would at most imply a lead of something like 5 years.

The elliptic curve one doesn't strike me at all as a case where the NSA had a big lead. You are probably referring to this backdoor:

https://en.wikipedia.org/wiki/Dual_EC_DRBG

This backdoor was basically immediately identified by security researchers the year it was embedded in the standard. As you can read in the Wikipedia article:

Bruce Schneier concluded shortly after standardization that the "rather obvious" backdoor (along with other deficiencies) would mean that nobody would use Dual_EC_DRBG.
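For context on why the backdoor was considered "rather obvious", here is a rough sketch of the construction (from memory, so treat the details as approximate): the generator keeps a state s_i and uses two fixed curve points P and Q, with

s_{i+1} = x(s_i P)
r_i = truncate(x(s_i Q))

where r_i is the published output. If whoever chose Q knows a relation P = d Q, then lifting an observed output r_i back to a candidate point R = s_i Q gives

x(d R) = x(d s_i Q) = x(s_i P) = s_{i+1}

i.e. the full internal state, and with it all future outputs. The suspicious part was precisely that the standard never explained where Q came from.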

I can't really figure out what you mean by the DES recommended magic numbers. There were some magic numbers in DES that were used as a defense against the differential cryptanalysis technique, which I do agree is probably the single strongest example we have of an NSA lead, though it's important to note that that technique was developed at IBM, and then given to the NSA, not developed internally at the NSA.
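Since differential cryptanalysis keeps coming up here, a minimal sketch of the core idea, using a made-up 4-bit S-box rather than the real DES tables (so everything below is purely illustrative): the attack looks for input XOR-differences that produce particular output differences with unusually high probability, and the DES S-box constants were reportedly chosen to keep those distributions flat.

# Toy difference-distribution computation (Python; illustrative only --
# the S-box values below are made up, not the real DES S-boxes).
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def difference_distribution_table(sbox):
    """ddt[dx][dy] = number of inputs x with sbox[x] ^ sbox[x ^ dx] == dy."""
    n = len(sbox)
    ddt = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            ddt[dx][dy] += 1
    return ddt

ddt = difference_distribution_table(SBOX)
# Each row sums to 16, so a perfectly flat row would have every entry
# equal to 1. Entries of 4 or more mark differentials holding with
# probability >= 1/4 -- the footholds a differential attack chains
# together across the cipher's rounds.
for dx in range(1, 16):
    strong = [(dy, count) for dy, count in enumerate(ddt[dx]) if count >= 4]
    if strong:
        print(f"input diff {dx:#x}: output diffs with count >= 4: {strong}")

The flatter the table, the more chosen-plaintext pairs a differential attack needs, which is why tuned S-box constants were an effective (and quietly deployable) defense.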

To be clear, a 30-year (!) lead seems absolutely impossible to me. A 3-year broad lead seems maybe plausible, along with a 10-year lead in some very narrow subset of the field that gets relatively little attention (in the same way research groups can sometimes pull ahead in a specific subfield they are investing heavily in).

I have never talked to a security researcher who would consider 30 years remotely plausible. The usual impression I've gotten from talking to security researchers is that the NSA has some interesting techniques and probably a variety of backdoors, which they installed primarily through political maneuvering rather than technological advantage, but that in overall competence they are probably behind the academic field, and almost certainly not very far ahead of it.

Thoughts on The Weapon of Openness
past leaks and cases of "catching up" by public researchers that they are roughly 30 years ahead of publicly disclosed cryptography research

I have never heard this and would be extremely surprised by it. I'd be willing to take at least a 15:1 bet against it, probably more.
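(For concreteness, under the usual reading where "15:1" means staking 15 units against the other side's 1: taking that bet is only positive expected value if

EV = (1 - p) * 1 - p * 15 > 0, i.e. p < 1/16 = 6.25%,

where p is my probability that the 30-year claim is true.)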

Do you have a source for this?

How do you feel about the main EA facebook group?

Do you have the same feeling about comments on the EA Forum?

Request for Feedback: Draft of a COI policy for the Long Term Future Fund
Separately, you mentioned OpenPhil's policy of (non-) disclosure as an example to emulate. I strongly disagree with this, for two reasons.

This sounds a bit weird to me, given that the above is erring quite far in the direction of disclosure.

The specific dimension of the Open Phil policy that I think has strong arguments going for it is being hesitant with recusals. I really want us to continue being very open about our conflicts of interest, and wouldn't currently advocate for emulating Open Phil's policy on the disclosure dimension.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund
I didn't see any discussion of recusal because the fund member is employed or receives funds from the potential grantee?

Yes, that should be covered by the CEA fund policy we are extending. Here are the relevant sections:

Own organization: any organization that a team member
is currently employed by
volunteers for
was employed by at any time in the last 12 months
reasonably expects to become employed by in the foreseeable future
does not work for, but that employs a close relative or intimate partner
is on the board of, or otherwise plays a substantially similar advisory role for
has a substantial financial interest in

And:

A team member may not propose a grant to their own organization
A team member must recuse themselves from making decisions on grants to their own organizations (except where they advocate against granting to their own organization)
A team member must recuse themselves from advocating for their own organization if another team member has proposed such a grant
A team member may provide relevant information about their own organization in a neutral way (typically in response to questions from the team’s other members).

That covers basically the whole space.

Note that that policy is still in draft form and not yet fully approved (and there are still some incomplete sentences in it), so we might want to adjust our policy above depending on changes in the CEA fund general policy.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Responding on a more object-level:

As an obviously extreme analogy, suppose that someone applying for a job decides to include information about their sexual history on their CV.

I think this depends a lot on the exact job and the nature of the sexual history. If you are a registered sex offender and are open about this on your CV, that will overall make a much better impression than if I find it out from doing independent research later on, since that is information that (depending on the role and the exact context) might be highly relevant to the job.

Obviously, including potentially embarrassing information in a CV without much purpose is a bad idea: it mostly signals various forms of social obliviousness, and distracts from the actually important parts of your CV, which pertain to your professional experience and the factors that will likely determine how well you will do at the job.

But I'm inclined to agree with Howie that the extra clarity you get from moving beyond 'high-level' categories probably isn't all that decision-relevant.

So, I do think this is probably where our actual disagreement lies. Of the concrete conflicts of interest that I have observed giving rise to abuses of power, both within the EA community and in other communities, more than 50% were the result of romantic relationships, and were basically completely unaddressed by the high-level COI policies the relevant institutions had in place. Most of these are in weird grey areas of confidentiality, but I would be happy to talk about the details if you send me a private message.

I think being concrete here is actually highly action-relevant, and I've seen the lack of concreteness in company policies have large, concrete negative consequences for those organizations.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund
less concrete terms is mostly about demonstrating an expected form of professionalism.

Hmm, I think we likely have disagreements about the degree to which a significant chunk of professionalism norms are the result of individuals trying to limit accountability for themselves and the people around them. I am generally not a huge fan of a large fraction of professionalism norms (which is not by any means a rejection of all professionalism norms, just of specific subsets of them).

I think newspeak is a real phenomenon: language that is broadly designed to obfuscate and to limit accountability does get adopted, and I think that phenomenon is pretty entangled with professionalism. I agree that there is often an expectation of professionalism, but I would argue that exactly this expectation is what often causes obfuscating language to be adopted. And I think this issue is important enough that blindly adopting professionalism norms is quite dangerous and can have very large negative consequences.
