Rubi

Comments

Hiring: How to do it better

While I'm familiar with the literature on hiring, particularly on unstructured interviews, I think EA organizations should give serious consideration to the possibility that they can do better than average. In particular, that literature is correlational rather than causal, suffers from major selection biases, and is certainly not as broadly applicable as its authors claim.

From Cowen and Gross's book *Talent*, which I think captures well the point I'm trying to make:
> Most importantly, many of the research studies pessimistic about interviewing focus on unstructured interviews performed by relatively unskilled interviewers for relatively uninteresting, entry-level jobs. You can do better. Even if it were true that interviews do not on average improve candidate selection, that is a statement about averages, not about what is possible. You still would have the power, if properly talented and intellectually equipped, to beat the market averages. In fact, the worse a job the world as a whole is at doing interviews, the more reason to believe there are highly talented candidates just waiting to be found by you.

The fact that EA organizations are looking for specific, unusual qualities, and the fact that EAs are generally smarter and more perceptive than the average hiring committee, are both strong reasons to think that EA can beat the average results reported in research that tells only a partial story.

EA and the current funding situation

One of the key things you hit on is "Treating expenditure with the moral seriousness it deserves. Even offhand or joking comments that take a flippant attitude to spending will often be seen as in bad taste, and apt to turn people off."

However, I wouldn't characterize this as an easy win, even if it would be an unqualified positive. Calling out such comments when they appear is straightforward enough, but that's a slow process that might only modestly reduce them. I'd be interested in hearing ideas for how to change attitudes more thoroughly and quickly, because I'm drawing a blank.

Aaron Gertler's Shortform

I like many of the books on the list, but I think you're doing readers a disservice by trying to recommend too many books at once. Cutting it down to 2-3 in each category would give people a better starting point.

Big EA and Its Infinite Money Printer

It's a bit surprising, but not THAT surprising. 50 more technical AI safety researchers would represent somewhere between a 50% and 100% increase in the total number, which could be a justifiable use of 10% of OpenPhil's budget.
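
For the rough arithmetic behind that claim (using the +$20M-per-career-switch valuation I ask about below, and assuming OpenPhil's deployable capital is on the order of $10B, which is my guess rather than a sourced figure):

50 researchers × $20M ≈ $1B ≈ 10% × $10B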

Big EA and Its Infinite Money Printer

Great writeup! 

Is there an OpenPhil source for "OpenPhil values a switch to an AI safety research career as +$20M in expected value"? It would help me a lot in addressing some concerns that have been brought up in local group discussions.

Free-spending EA might be a big problem for optics and epistemics

Even before a cost-benefit analysis, I'd like to see an ordinal ranking of priorities. For organizations like CEA, what would they do with a 20% budget increase? What would they cut if they had to reduce their budget by 20%? The same goes for specific events, like EAGs. For a student campus club, what would they do with $500 in funding? $2,000? $10,000? I think this type of analysis would help determine whether some of the spending that appears most frivolous is actually the least important.

Democratising Risk - or how EA deals with critics

To clear up my identity: I am not Seán and do not know him. I go by Rubi in real life, although it is a nickname rather than my given name. I did not mean for my account to be an anonymous throwaway, and I intend to keep using it on the EA Forum. I can understand how that would not be obvious, as this was my first post, but that is coincidental. The original post generated a lot of controversy, which is why I saw it and decided to comment.

> You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technology?

I would have genuinely liked an answer to this. If none of the reviewers made the case, that is useful information about how the reviewers were selected. If some reviewers did make it but were ignored, then it reflects negatively on the authors to claim the case for differential technology is unclear without addressing it.

Democratising Risk - or how EA deals with critics

Hi Carla,

Thanks for taking the time to engage with my reply. I'd like to respond to a few of the points you made.

First of all, my point prefaced with 'speaking abstractly' was genuinely that. I thought your paper was poorly argued, but certainly not so poorly argued that it should result in withdrawn funding. Over a long enough timeframe, everybody will put out some duds, and your organizations certainly have a track record of producing excellent work. My point was about avoiding an overcorrection, where consistently low-quality work is guaranteed some share of scarce funding merely out of fear that withdrawing it would be seen as censorship. It's a sign of healthy epistemics (in a dimension orthogonal to the criticisms of your post) for a community to be able to jump from a specific discussion to the general case, but I'm sorry you saw my abstraction as a personal attack.

You say "we do not argue against the TUA, but point out the unanswered questions we observed ... but highlight assumptions that may be incorrect or smuggle in values". Pointing out unanswered questions and incorrect assumptions is how you argue against something! What makes your paper polemical is that you do not sufficiently check whether the questions really are unanswered, or whether the assumptions really are incorrect. There is no tension between calling your paper polemical and saying you do not sufficiently critique the TUA. A more thorough critique that took counterarguments seriously and tried to address them would not be a polemic, as it would more clearly be driven by truth-seeking than by hostility.

I was not "asking that we [you] articulate and address every hypothetical counterargument"; I was asking that you address any at all, especially the most obvious ones. Don't just state that "it is unclear why" an assumption is believed as a way of skipping over a counterargument.

I am disappointed that you used my original post to further attack the epistemics of this community, and doubly so for claiming it failed to articulate clear, specific criticisms. The post was clear that the main failing I saw in your paper was a lack of engagement with counterarguments, specifically the case for differential technology and the case for avoiding the disenfranchisement of future generations through a limited democracy. Nor do I believe my criticism that the paper jumps around too much, rather than engaging deeply on fewer issues, was ambiguous. Ignoring these clear, specific criticisms and using the post as evidence of poor epistemics in the EA community makes me think you may be interpreting any disagreement as evidence for your point.
