Rubi J. Hudson

466 karma · Joined Dec 2021

Comments (22)

Seems worth asking in interviews, "I'm concerned about advancing capabilities and shortening timelines; what actions is your organization taking to prevent that?", with the caveat that you will be BSed.

Bonus: you can explicitly turn down roles because they're doing capabilities work, which, if it becomes a pattern, may incentivize them to change their plans.

This comment is object-level, perhaps nitpicky, and I quite like your post at a high level.

Saving a life via, say, malaria nets gets you two benefits:

1. The person saved doesn't die, meeting their preference for continuing to exist.

2. The externalities of that person continuing to live, such as the grief their family and community are spared.

I don't think it's too controversial to say that the majority of the benefit from saving a life goes to the person whose life is saved, rather than the people who would be sad that they died. But the IDinsight survey only provides information about the latter.

Consider what would happen if beneficiary surveys in other communities found the opposite conclusion: that beneficiaries did not care at all about the deaths of children under the age of 9. It would be ridiculous and immoral to defer to that preference and withhold life-saving aid from those children. The reason is that the community being surveyed is not the primary beneficiary of aid to its children; the children themselves are, so the community's preferences make up only a small fraction of the aid's value. But this cuts the other way as well: if the surveyed community overweights the lives of its children, that isn't a reason for major deferral either, especially if stated preferences contradict revealed preferences, as they often do.

There are lots of advantages to being based in the Bay Area. It seems both easier and higher upside to solve the Berkeley real estate issue than to coordinate a move away from the Bay Area.

I love the idea of a Library of EA! It would be helpful to eventually augment it with auxiliary and meta-information, probably through crowdsourcing among EAs. Each book could be associated with short and medium-length summaries of the key arguments and takeaways, along with warnings about which sections were later disproven or are controversial (or a warning that the whole thing is a partial story or misleading). There's also a lot of overlap among the books, with some superseding others (especially within the rationality and epistemology section), so it would be good to say "If you've read X, you don't need to read Y". It would also be great to have a "Summary of Y for people who have already read X" that covers just the key information.

I do strongly feel that a smaller library would be better. While there are advantages to being comprehensive, a smaller library is better at directing people to the most important books. It is really valuable to be able to say that someone should start with a particular book on a subject, rather than leaving them to make an uninformed choice from a list. Parsimony in recommendations, at least on a personal level, is also important for conveying the importance of the recommendations you do make. It somewhat feels like you weren't confident enough to cut a book that was recommended by some subgroup, even when better options were available.

There's a Pareto principle at play here, where reading 20% of the books will provide 80% of the value, and a repeated Pareto principle where 4% provide 64% of the value (applying the 80/20 rule to itself). I think you could genuinely recommend four or five books from this list that between them provide two-thirds of the EA value of the entire list. My picks would be The Most Good You Can Do, The Precipice, Reasons and Persons, and The Scout Mindset. Curious what others would pick.

> In addition to EAG SF, there are some other major events and a general concentration of EAs happening in this 2-week time span in the Bay Area, so it might be generally good to come to the Bay around this time.

Which other events are happening around that time? 

> their approaches are correlated with each other. They all relate to things like corrigibility, the current ML paradigm, IDA, and other approaches that e.g. Paul Christiano would be interested in.

You need to explain better how these approaches are correlated, and what an uncorrelated approach might look like. It seems to me that, for example, MIRI's agent foundations and Anthropic's prosaic interpretability approaches are wildly different!

> By the time you get good enough to get a grant, you have to have spent a lot of time studying this stuff. Unpaid, mind you, and likely with another job/school/whatever taking up your brain cycles.

I think you are wildly underestimating how easy it is for broadly competent people with an interest in AI alignment but no experience to get funding to skill up. I'd go so far as to say it's a strength of the field.

I think your "digital people lead to AI" argument is spot on, and basically invalidates the entire approach. I think getting whole brain emulation working before AGI is such a longshot that the main effect of investing in it is advancing AI capabilities faster.

Hopefully one day they grow big enough to hire an executive assistant.

While I'm familiar with the literature on hiring, particularly on unstructured interviews, I think EA organizations should give serious consideration to the possibility that they can do better than average. In particular, the literature is correlational, not causal, suffers from major selection biases, and is certainly not as broadly applicable as its authors claim.

From Cowen and Gross's book *Talent*, which I think captures well the point I'm trying to make:
> Most importantly, many of the research studies pessimistic about interviewing focus on unstructured interviews performed by relatively unskilled interviewers for relatively uninteresting, entry-level jobs. You can do better. Even if it were true that interviews do not on average improve candidate selection, that is a statement about averages, not about what is possible. You still would have the power, if properly talented and intellectually equipped, to beat the market averages. In fact, the worse a job the world as a whole is at doing interviews, the more reason to believe there are highly talented candidates just waiting to be found by you.

The fact that EA organizations are looking for specific, unusual qualities, and the fact that EAs are generally smarter and more perceptive than the average hiring committee, are both strong reasons to think that EA can beat the average results from research that tells only a partial story.

One of the key things you hit on is "Treating expenditure with the moral seriousness it deserves. Even offhand or joking comments that take a flippant attitude to spending will often be seen as in bad taste, and apt to turn people off."

However, I wouldn't characterize this as an easy win, even if it would be an unqualified positive. Calling out such comments when they appear is straightforward enough, but that's a slow process that could result in only minor reductions. I'd be interested in hearing ideas for how to change attitudes more thoroughly and quickly, because I'm drawing a blank.
