All of dan.pandori's Comments + Replies

There are a lot of 'lurkers', but fewer than 30 folks would be involved in the yearly holiday matching thread and sheet. Every self-professed EA I talked to at Google was involved in those campaigns, so I think that covers the most involved US Googlers.

Most people donated closer to 5-10% than Jeff's or Oliver's much higher amounts; that much is certainly true.

So I think both your explanations are true. There are not that many EAs at Google (although I don't think that's surprising), and most donate much less than they likely could. I put myself in that bucket, as ... (read more)

RE: why aren't there as many EAs giving this much money: I'm (obviously) not Jeff, but I was at Alphabet for many of the years Jeff was. Relevantly, I was also involved in the yearly donation matching campaigns. There were around 2-3 other folks who donated amounts similar to Jeff's. Those four-ish people accounted for the majority of EA matching funds at Alphabet.

It's hard to be sure how many people actually donated outside of giving campaigns, so this might undercount things. But to get to 1k EAs donating this much money, you'd need like 300 companies with similarl... (read more)
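A quick back-of-the-envelope sketch of that scaling claim, with the per-company donor count as a rough assumption rather than data from the thread:

```python
# Rough illustration of the scaling argument above; all numbers are assumptions.
donors_per_company = 3.5   # Jeff plus ~2-3 others, so call it 3-4 per company
target_donors = 1000       # hypothetical goal: 1k EAs donating at this level

companies_needed = target_donors / donors_per_company
print(f"Companies with similar EA contingents needed: ~{companies_needed:.0f}")
# Prints roughly 286, i.e. "like 300 companies".
```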

4
Brendan Long
5mo
I'm curious, since EAs are concentrated in the same places that big tech companies are: Is it that surprisingly few EAs work at Google, or are there a lot and they just mostly donate like 10% of their salaries instead of 50%?
4
AnonymousTurtle
5mo
I don't understand this reply. It seems to say that few people are donating as much as Jeff because Jeff is a strong outlier, which seems to be a tautology; what am I missing? Or you'd need a 30 times larger EA contingent at Alphabet and at 10 other high-paying companies. Why aren't more people donating 50%?

Legal or constitutional infeasibility does not always prevent executive orders from being applied (or followed). I feel like the US president declaring a state of emergency related to AI catastrophic risk (and then forcing large AI companies to stop training large models) sounds at least as constitutionally viable as the attempted executive order for student loan forgiveness.

I agree that this seems fairly unlikely to happen in practice though.

3
dEAsign
7mo
Thanks, the replicated posts were a mistake and I don't know why that happened. I think they're deleted now.

I deeply appreciate the degree to which this comment acknowledges issues and provides alternative organizations that may be better in specific respects. It has given me substantial respect for LTFF.

This feels like a "be the change you want to see in the world" moment. If you want such an event, it seems like you could basically just make a forum post (or quick take) offering 1:1s?

1
nananana.nananana.heyhey.anon
7mo
That seems much less good than appearing in the SwapCard list of attendees where everyone is scheduling 1:1s already, but I agree that a cheap version of the thing here is very doable even without SwapCard

I think that basically all of these are being pursued, and many are good ideas. I would be less put off if the post title were 'More people should work on aligning profit incentives with alignment research', but suggesting that no one is doing this seems off base.

This is what I got after a few minutes of Google searching (not endorsing any of the links beyond noting that they claim to do the thing described).

AI Auditing:
https://www.unite.ai/how-to-perform-an-ai-audit-in-2023/

Model interpretability:
https://learn.microsoft.com/en-us/azure/machine-learning/how-to-... (read more)

I agree that 'utilitarianism' often gets elided into meaning a variation of hedonic utilitarianism. I would like to hold philosophical discourse to a higher bar. In particular, once someone mentions hedonic utilitarianism, I'm going to hold them to the standard of separating out hedonic utilitarianism and preference utilitarianism, for example.

I agree hedonic utilitarians exist. I'm just saying the utilitarians I've talked to always add more terms than pleasure and suffering to their utility function. Most are preference utilitarians.

3
spencerg
9mo
Preference utilitarianism and valuism don't have much in common. Preference utilitarianism: maximize the interests/preferences of all beings impartially. First, preferences and intrinsic values are not the same thing. For instance, you may have a preference to eat Cheetos over eating nachos, but that doesn't mean you intrinsically value eating Cheetos or that eating Cheetos necessarily gets you more of what you intrinsically value than eating nachos will. Human choice is driven by a lot of factors other than just intrinsic values (though intrinsic values play a role). Second, preference utilitarianism is not about your own preferences, it's about the preferences of all beings impartially.

I feel like 'valuism' is redefining utilitarianism, and the contrasts to utilitarianism don't seem very convincing. For instance, you define valuism as noticing what you intrinsically value and trying to take effective action to increase that. This seems identical to a utilitarian whose utility function is composed of what they intrinsically value.

I think you might be defining utilitarianism such that they are only allowed to care about one thing? Which is sort of true, in that utilitarianism generally advocates converting everything into a common scale, b... (read more)

6
Brad West
9mo
You're the one who's redefining utilitarianism, which is commonly defined as maximization of happiness and well-being of conscious beings. You can consider integrating other terminal values into what you'd like to do, but you're not really discussing utilitarianism at that point, as it's commonly used. For instance, Greenberg points to truth as a potential terminal value, which would be at odds with utilitarianism as it's typically used. I think Singer is a hedonic utilitarian for what it's worth, and I think I subscribe to it while acknowledging that weighing the degrees of positive and negative subjective experiences of many kinds is daunting. As for having other instrumental values (which is why I don't really think the "burnout" argument is very good as against utilitarianism), I agree with you on that one.

This comment came across as unnecessarily aggressive to me.

The original post is a newsletter that seems to be trying to paint everyone in their best light. That's a nice thing to do! The epistemic status of the post (hype) also feels pretty clear already.

Yeah, I hear you. [Edit: well, I think it was the least aggressive way of saying what I wanted to say.]

(I note that in addition to hyping the post is kinda making an ask for funding for the three projects it mentions--"Some of our favorite proposals which could use more funding"--and I'm pretty uncomfortable with one-sided-ness in funding-asks.)

Thank them for the comment, and then link to this thread?

2
Yonatan Cale
9mo
You, sir, get meta points

As someone who went through the CEA application process, I wholeheartedly endorse this. I was also really impressed with CEA's approach to the process, and their surprising willingness to give feedback & advice through it.

[It ended up being a mutually bad fit. I've spent my whole career as a C++ backend engineer at a FAANG and I like working in person, and that doesn't align super well with a small remote-first org that has a lot of frontend needs.]

It feels weird to me to hear that something is terrible to think. It might be terrible that we're only alive because everyone doesn't have the option to kill everyone else instantly, but it's also true. Thinking true thoughts isn't terrible.

If everyone has a button that could destroy all life on the planet, I feel like it's unrealistic to expect that button to remain unpressed for more than a few hours. The most misanthropic person on Earth is very, very misanthropic. I'm not confident that many people would press the button, but the whole thing is that it... (read more)

If AI + a nontechnical person familiar with business needs can replace me in coding, I expect something resembling a singularity within 5 years.

I think that software engineering is a great career if you have an aptitude for it. It's also way easier to tell if you are good at it relative to most other careers (ie, Leetcode, Hackerrank, and other question repositories can help you understand your relative performance).

So my answer is that either AI can't automate software engineers for a while, or they'll automate every career quite soon after software engin... (read more)

1
justaperson
1y
Thanks for that perspective. Given that I don't have experience in the programming space, I couldn't project a timeline between fully automated software production and AGI -- but your estimate puts something on the map for me. It is disconcerting though, as there are many different assumptions and perspectives about AGI, and a lot of uncertainty. But I also understand that certainty isn't something I should expect on any topic -- let alone this one. Moreover, career inaction isn't an option I can afford, so I'll likely be barreling down the software dev path very soon.

IANAL: I view 'effective altruism' to not be owned, and if any organization claims to own the term I'm going to ignore them. I expect most folks to share my opinion here.

Agreed with the specific reforms. Blind hiring and advertising broadly seem wise.

It's hard to be above baseline for multiple dimensions, and eventually gets impossible.

Oh for sure, I wasn't thinking you were implying making it a requirement. I was trying to say that even a nudge towards explaining downvotes is a nudge towards evil (for me).

Maybe the net advantage of explaining downvotes would be good, but I personally should probably be discouraged from explaining my downvotes.

I disagree and I downvoted this because explaining why you downvoted something is disproportionately likely to end up with me arguing with someone on the internet. I find this really unpleasant.

I'm happy to have a rule for giving an explanation to you if I downvote your posts. I've talked with you as a person outside of internet arguments, so I'm not as worried about getting into a protracted argument.

But as a general rule, I think I should be discouraged from explaining my downvotes so that I keep up my mental health.

Separately, if this was a thread that ... (read more)

3
Matt Goodman
1y
Props for taking the time to explain, even though you don't like it!
3
Yonatan Cale
1y
Upvoted since you explained why you don't like my idea, and I like that! :)
6
Yonatan Cale
1y
Hey (: To be clear, my feature suggestion is something like a popup reading "you downvoted this, consider explaining why", as opposed to "in order to downvote this, you MUST explain why". The pain point I'm trying to solve is "I don't know why people downvote my comments sometimes and it makes me sad and confused". Maybe my specific proposed solution isn't good; my pain point remains, though. I also acknowledge that "explaining why I downvoted" can lead into arguing-on-the-internet, which could be negative in a way that I want to avoid (and I don't want to drag people into).

Fair point! I was assuming that by collective decision making you meant something much closer to 1 person 1 vote, but if it's a well-defined term, I'm not sure of the definition.

I haven't heard much discussion on a market-based feedback system, and I'd be very interested in seeing it tried. Perhaps for legal or technical reasons it wouldn't work out super well (similar to current prediction markets), but it seems well worth the experiment.

I think that this incorrectly conflates prediction markets and collective decision making.

(Prediction) markets are (theoretically) effective because folks that are able to reliably predict correctly will end up getting more money, and there are incentives in place for correct predictions to be made. It seems that the incentives for correct decision making are far weaker in collective decision making, and I don't see any positive feedback loop where folks that are better at noticing impactful projects will get their opinions weighted more highly.

I think tha... (read more)

4
samuel
1y
Thanks for the feedback Dan. Maybe I'm using the vocabulary incorrectly - does collective specifically mean 1 person 1 vote?  I do specifically avoid saying democratic and mention market-based decision making in the first sentence.  It's not at all obvious to me that putting market-based feedback systems in place would look like the funding situation today. I think it's worth pushing back on the assumption that EA's current funding structure rewards the best performers in terms of asset allocation.

While I agree with this question in the particular, there's a real difficulty because absence of evidence is only weak evidence of absence with this kind of thing.

Can you elaborate? I don't understand what problem this solves.

This post makes it harder than usual for me to tell if I'm supposed to upvote something because it is well-written, kind, and thoughtful vs whether I agree with it.

I'm going to continue to use up/downvote for good comment/bad comment and disagree/agree for my opinion on the goodness of the idea.

[EDIT: addressed in the comments. Nathan at least seems to endorse my interpretation]

Thanks, I think this is an excellent response and I agree both are important goals.

I'm curious to learn more about why you think that steelmanning is good for improving one's beliefs/impact. It seems to me that that would be true if you believe yourself to be much more likely to be correct than the author of a post. Otherwise, it seems that trying to understand their original argument is better than trying to steelman it.

I could see that perhaps you should try to do both (ie, both the author's literal intent and whether they are directionally correct)?

[EDI... (read more)

I'd be interested to see some of those tried for sure!

I imagine you'd also likely agree that these proposals trade off against everything else that the EA orgs could be doing, and it's not super clear any are the best option to pursue relative to other goals right now.

3
Ozzie Gooen
1y
Of course. Very few proposals I come up with are a good idea for myself, let alone others, to really pursue. 

I agree. I think that it's incredibly difficult to have civil conversations on the internet, especially about emotionally laden issues like morality/charity.

I feel bad when I write a snotty comment and that gets downvoted, and that has a real impact on me being more likely to write a kind argument in one direction rather than a quick zinger. I am honestly thankful for this feedback on not being a jerk.

Do you think that group bargaining/voting in EA would be a good thing for funding/prioritization?

I personally like the current approach that has individual EAs and orgs make their own decisions on what is the best thing to do in the world.

For example, I would be unlikely to fund an organization that the majority of EAs in a vote believed should be funded, but I personally believed to be net harmful. Although if this situation were to occur, I would try to have some conversations about where the wild disagreement was stemming from.

I think there's probably a bunch of different ways to incorporate voting. Many would be bad, some good. 

Some types of things I could see being interesting:

  • Many EAs vote on "Community delegates" that have certain privileges around EA community decisions.
  • There could be certain funding groups that incorporate voting, roughly in proportion to the amounts donated. This would probably need some inside group to clear funding targets (making sure they don't have any confidential baggage/risks) before getting proposed.
  • EAs vote directly on new potential EA Foru
... (read more)

In the interests of taking your words to heart, I agree that EAs (and literally everyone) are bad at steelmanning criticisms.

 

However, I think that saying the 'and literally everyone' part out loud is important.  Usually when people say 'X is bad at Y' they mean that X is worse than typical at Y. If I said, 'Detroit-style pizza is unhealthy,' then there is a Gricean implicature that Detroit-style pizza is less healthy than other pizzas. Otherwise, I should just say 'pizza is unhealthy'.

Likewise, when you say 'EAs seem particularly bad at steelman... (read more)

3
freedomandutility
1y
Apologies, I don’t mean to imply that EA is unique in getting things wrong / being bad at steelmanning. Agree that the “and everyone else” part is important for clarity. I think whether steelmanning makes sense depends on your immediate goal when reading things. If the immediate goal is to improve the accuracy of your beliefs and work out how you can have more impact, then I think steelmanning makes sense. If the immediate goal is to offer useful feedback to the author and better understand the author’s view, steelmanning isn’t a good idea. There is a place for both of these goals, and importantly the second goal can be a means to achieving the first goal, but generally I think it makes sense for EAs to prioritise the first goal over the second.

Possibly high effort, but what do you see as the best 10% (and worst 10%)?

[aside, made me chuckle]

This is an inevitable issue with the post being 70 pages long.

I think online discussions are more productive when it's clear exactly what is being proposed as good/bad, so I appreciate you separately commenting on small segments (which can be addressed individually) rather than the post as a whole.

Thanks for including this! I really liked the shrimp sticker, and partly I liked it because it simply came across as friendly. I honestly didn't know that live shrimp have different ordinary posture and color compared to cooked shrimp, and that makes the sticker feel a lot less friendly to me!

I'd ideally like a sticker with what looks like a happy shrimp.  A live shrimp in a circle with something like 'expanding the moral circle' feels like almost exactly the vibe I'd love to send out, for what it's worth.

Separately, I get that making merch/art/anything like this is difficult, so I appreciate the work that has already gone into putting the store together.

I wanted to mention that I went through the first week's lectures and exercises and I was really impressed at the quality!

Also a software engineer, and this also is a pretty spot on description for me. 25 hours of productive work is about my limit before I start burning out and making dumb mistakes.

I instead read this point as saying "assume that if we persuaded 100 folks to give up carp, then 1 of those would replace their carp consumption with salmon." So it's talking about the replacement effect, rather than the number persuaded (the latter gives magnitude, as you say).
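A tiny numerical sketch of that reading, using the hypothetical rates from the example rather than any real estimates:

```python
# Hypothetical numbers: replacement rate vs. number persuaded.
persuaded = 100            # people persuaded to give up carp
replacement_rate = 0.01    # 1 in 100 replaces carp consumption with salmon

expected_salmon_switchers = persuaded * replacement_rate
print(f"Expected salmon switchers out of {persuaded} persuaded: {expected_salmon_switchers:.0f}")

# Scaling up persuasion changes the magnitude, not the replacement rate itself.
print(f"At 1000 persuaded: {1000 * replacement_rate:.0f} expected salmon switchers")
```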

3
NunoSempere
1y
I see, that makes some sense

You can see how the lack of details is basically asking me to... trust you without evidence?

Edit: to use less 'gotcha' phrasing: anonymously claiming that another organization is doing better on feedback, but not telling me how, is asking me to blindly trust you for very little reason.

I don't think feedback practices are widely considered secrets that have to be protected, and if your familiarity is with the UK Civil Service, that's a massive organization where you can easily give a description without unduly narrowing yourself down.

This thread is the kind of tiring back and forth I'm talking about. Please, try organizing feedback for 5k+ rejected applicants for something every year and then come back to tell me why I'm wrong and it really is easy. I promise to humbly eat crow at that time.

3
[anonymous]
1y
For what it's worth, I'm also feeling quite frustrated. I've been repeatedly giving you details of how an organization I'm very familiar with (can't say more without compromising anonymity) did exactly what you claim is so difficult, and nothing seems to get through. I won't trouble you with further replies in this thread :-)

Phone calls for me are socially awkward and I generally want some time to privately process rejection rather than immediately need to have a conversation about it. Also I generally keep my phone at home during business hours so it's quite likely I'd need to spend half an hour playing phone tag.

1
Richard Möhn
1y
Good to know, thanks! For completeness, my idea of a rejection phone call (derived from https://www.manager-tools.com/2014/11/how-turn-down-job-candidate-part-1) is:

  • You call, greet the person, say in the first sentence that you won't be making an offer, say a few more short sentences, react to any responses, then hang up. You don't make it a conversation. The important thing is that they hear your voice.
  • It's fine to speak on voicemail and for the other person not to call back. This avoids phone tag.

Note that Manager Tools doesn't always have the most airtight arguments, but they tend to have tested their core guidance (which includes hiring) empirically.

Was going to say the same. I've only ever been rejected over email (or ghosted entirely). I would also find it off-putting to get a phone call rejection. I guess organizations can choose to call if they wanted, but I wouldn't personally encourage it.

5
Linch
1y
What we did at RP Longtermism's most recent hiring rounds (not sure if it's applicable to other departments/teams) is send rejections via email and offer rejected final round candidates a chance to call with someone on the team if they wanted to. This lets candidates opt in to talk more with team members if and only if they wanted to, and also do so at their own pace so they're emotionally ready to call when ready.
1
Richard Möhn
1y
What would you find off-putting about it?

A hypothetical example that I would view as asking for trust would be someone telling me not to join an organization, but not telling me why. Or claiming that another person shouldn't be trusted, without giving details. I personally very rarely see folks do this. An organization doing something different and explaining their reasoning (e.g. giving feedback was viewed as not a good ROI) is not asking for trust.

Regarding why giving feedback at scale is hard, most of these positions have at best vague evaluation metrics which usually bottom out in "help t... (read more)

1
[anonymous]
1y
I don't think CEA should share specific criteria. I think they should give rejects brief, tentative suggestions of how to develop as an EA in ways that will strengthen their application next time. Growth mindset over fixed mindset. Even a completely generic "maybe you should get 80K advising" message for every reject would go a long way. Earlier in this thread, I claimed that senior EAs put very little trust in junior EAs. The Goodharting discussion illustrates that well. The assumption is that if feedback is given, junior EAs will cynically game the system instead of using the feedback to grow in good faith. I'm sure a few junior EAs will cynically game the system, but if the "cynical system-gaming" people outweigh the "good faith career growth" people, we have much bigger problems than feedback. (And such an imbalance seems implausible in a movement focused on altruism.) I'd argue that lack of feedback actually invites cynical system-gaming, because you're not giving people anywhere productive to direct their energies. And operating in a low-trust regime invites cynicism in general. Make it clear you won't go back and forth this way. This post explains why giving feedback is so important. If 5 minutes of feedback makes the difference for a reject getting bummed out and leaving the EA movement, it could be well worthwhile. My intuition is that this happens quite a bit, and CEA just isn't tracking it. Re: making a stink -- the person who's made the biggest stink in EA history is probably Émile P. Torres. If you read the linked post, he seems to be in a cycle of: getting rejected, developing mental health issues from that, misbehaving due to mental health issues, then experiencing further rejections. (Again I refer you to the "Cost of Rejection" post -- mental health issues from rejection seem common, and lack of feedback is a big factor. As you might've guessed by this point, I was rejected for some EA stuff, and the mental health impact was much larger and lon

I guess I read that as a description of what they're doing rather asking me to trust them. CEA can choose the admission criteria they want, and after attending my first EAG earlier this year I felt like whatever criteria they were using seemed to broadly make for a valuable event for me as an attendee.

I think you're really underestimating how hard giving useful feedback at scale is and how fraught it is. I would be more sympathetic if you were running or deeply involved with an organization that was doing better on this front. If you are, congrats and I am appreciative!

1
[anonymous]
1y
It's a description of how they're going to be less transparent. I think that's about as good as we can get, because if they hadn't described how they were going to be less transparent, there would be no post to share! All I'd be able to say is "they have a secretive vibe" or something like that, which seems unsatisfactory. (I do think the "secretive vibe" thing is true though -- if we stop looking at posts that are published, and start looking at posts that aren't published, I'd say the ratio of forum posts written by leadership in the past year to the number of EAs in leadership roles is quite low. Holden Karnofsky would be a notable exception here.) So, I'm not sure what would qualify as an answer to your "examples where you feel senior EAs are asking folks to trust them without evidence" query at this point? You don't seem to think either the "Keep EA high-trust" post or the "How CEA approaches applications to our programs" post qualifies. Sounds like you have private info that they're trustworthy in this case. That's great, but the post still represents senior EAs asking people to trust them without evidence. It's not necessarily bad for senior EAs to be trusted, but I do think there's a severe trust imbalance and it's causing significant problems. Can you explain why you think it's hard? I am very familiar with a particular organization that does feedback at scale, like the UK Civil Service -- that's the basis for my claims in this thread. I think maybe American organizations just aren't in the habit of giving feedback, and assume it's much more difficult/fraught than it actually is. I think CEA's mistake was "getting into the weeds". Simply copy/pasting boilerplate relevant to a particular application is a massive improvement compared to the baseline of no feedback. Categorize a random sample of rejected applications and for each category, identify an "EA virtue" those rejects failed to demonstrate. Compose polite lawyer-approved boilerplate for each vir

Separately regarding trust, I don't feel obligated to trust senior EAs. I sometimes read the analyses of senior EAs and like them, so I start to trust them more. Trust based on seniority alone seems bad, could you give some examples where you feel senior EAs are asking folks to trust them without evidence?

1
[anonymous]
1y
How about a post like this one? It's not an analysis. It's an announcement from CEA that says they're reducing transparency around event admissions. There may be evidence that CEA is generally trustworthy, but the post doesn't give any evidence that they're doing a good job with event admissions in particular. [In fact, it updates me in the opposite direction. My understanding of psychology research (e.g. as summarized by Daniel Kahneman in Thinking Fast and Slow) is that a decision like EAG admission will be made most effectively using some sort of statistical prediction rule. CEA doesn't give any indication that it is even collecting a dataset with which to find such a rule. The author of the post essentially states that CEA is still in the dark ages of making a purely subjective judgement.]
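For illustration only, a minimal sketch of the kind of "statistical prediction rule" mentioned above, fit on entirely made-up data with hypothetical feature names; nothing here reflects CEA's actual process or data:

```python
# Purely illustrative sketch of a statistical prediction rule for admissions.
# All features, data, and thresholds are invented; this is not CEA's process.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented features per past applicant: e.g. years involved, relevant projects, referral flag.
X = rng.normal(size=(200, 3))
# Invented outcome: whether the admitted person was later judged a good fit.
y = (X @ np.array([0.8, 0.5, 0.3]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

rule = LogisticRegression().fit(X, y)

# Score a new (also invented) applicant; admit above a threshold chosen for event capacity.
new_applicant = np.array([[0.2, 1.1, 1.0]])
admit_probability = rule.predict_proba(new_applicant)[0, 1]
print(f"Predicted probability of being a good fit: {admit_probability:.2f}")
```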

I am telling you what Google told me (and continues to tell new interviewers) as part of its interview training. You may believe that you know the law better than Google, but I am too risk averse to believe that I know the law better than them.

3
mikko
1y
The legal risks consist almost entirely of situations where there is reasonable cause to suspect that the applicant has been discriminated due to some protected characteristic. In these situations the hiring party is incentivized to maximally control information in order to minimize potential evidence. Feedback could act as legal ammunition for the benefit of the discriminated candidate. Because hiring organisations gain very little from giving feedback and instead lose time, effort, and assume more risk when doing it; it's very common to forbid recruiters and interviewers from giving feedback entirely. Exaggerating the legal risks provides an effective explanation for doing this. The rule is typically absolute because otherwise recruiters may be tempted to give feedback out of niceness or a desire to help rejected candidates. Also, Google's interpretation of the law is almost certainly made from Google's perspective and for Google's benefit — not from the perspective of what is the desired outcome of the law; or even more importantly, what is the underlying issue and how should we be trying to solve it to make the world better.
2
[anonymous]
1y
Google is generally quite risk-averse. My guess is that they don't give feedback because that is the norm for American companies, and because there is no upside for them. I'd be surprised if their lawyers put more than 10 hours of legal research into this.

I'm sorry to hear about your negative experience with GiveWell's hiring cycle.

I think that it's easy to underestimate how hard it is to hire well, though. For comparison, you could honestly make all of the same complaints about the hiring practices of my parent company (Google).

It is slow, with many friends of mine experiencing a gap of up to a year between application and eventual decision.

Later interviewers have no context on your performance on earlier parts of the application. This is actually deliberate though, since we want to get independent signal at each ... (read more)

6
Linch
1y
I can confirm that my experience at Google was similar, as someone who both went through the application process as an applicant and was an interviewer (however, I was never on a hiring committee or explicitly responsible for hiring decisions). Including the parts about slowness, about intentionally not knowing how the candidate did in earlier stages (I believe we're technically barred from reading earlier evaluations before submitting our own), and the part about being trained to be very strongly forbidden from giving candidate feedback. Another thing I'll add: I wonder if this is just a workplace cultural difference. In almost every job I've had, being able to independently come up with an adequate solution for tightly scoped problems given minimal or no additional task-specific instructions is sort of the baseline expectation of junior workers in the "core roles" of the organizations (e.g. software engineers at a tech company, or researchers in an EA research org). Now it's often better if there's more communication and people know or are instructed to seek help when they're confused, but neither software engineering nor research is an inherently minute-to-minute collaborative activity. I personally agree with the assessment of the OP that EA orgs should give feedback for final-round applicants, and have pushed for it before. However, I don't think Google's hiring process is particularly dysfunctional, other than maybe the slowness (I do think Google is dysfunctional in a number of other ways, just not this one).
5
Sabs
1y
It makes no sense to compare GiveWell and Google. Alphabet has around 130,000 employees. GiveWell has what, a few dozen? Obviously organizational dysfunction grows with size. GiveWell should be compared to small firms and other small non-profits. Now everyone recognizes that even big companies could probably be much more efficient, and Google in particular has got extremely fat and lazy off the back of the massive profitability of search - hence why it keeps buying interesting things only to fail to do anything with them and then kill them, so that should also be factored in. Google, quite frankly, are also in a position to set terms in the hiring marketplace. They can fuck people around and frankly will always have an endless stream of quality talent wanting to go work there anyway. GiveWell, by contrast, is going to be reliant on its own reputation and that of the wider EA movement to attract the people it wants. It is not in the same position, and reputation matters.
34
[anonymous]
1y

An explicit part of interviewer training is noting that we shouldn't say anything about a candidate's performance (good or bad) to the candidate, for fear of legal repercussions.

Legal repercussions for interview feedback have been discussed on the EA Forum in the past, e.g. in the comments of this post. The consensus seems to be that it's not an issue either in theory or in practice. Certainly if your feedback is composed of lawyer-approved boilerplate, and only offered to candidates who ask for it, I think your legal risk is essentially nil. [Edit: ... (read more)

1
Richard Möhn
1y
People and organizations figure out things much harder than hiring well. Compare running one of the most used search engines on the planet with laying out a couple of assessments, a couple of people looking at them, a bit of communication by email and phone, and all run with reasonable promptness, much less than what one would expect from everyday postal services.

I worry that this post is claiming that EAs are uncommonly likely to recommend rules violations in order to achieve their goals (ie, ends justify the means). I don't think that's true, and I generally see EAs as trying very hard to be scrupulous and do right by all involved parties.

Concretely, I believe that if you went to an EA conference or a similar gathering and presented people with prisoners dilemma issues, or just lost wallets, they would behave more pro-socially than average for the country.

I think that the FTX collapse is a very salient example of EA folks committing crime (perhaps in the belief that the ends justified it?), but that doesn't mean that EA increases the probability of crime.

As someone who has recently been in the AI Safety org interview circuit, about 50% of interviews were traditional Leetcode style algorithmic/coding puzzle and 50% were more practical. This seems pretty typical compared to industry.

The EA orgs I interviewed with were very candid about their approach, and I was much less surprised by the style of interview I got than I was surprised when interviewing in industry. Anthropic, Ought, and CEA all very explicitly lay out what their interviews look like publicly. My experience was that the interviews matched the public description very well.

Thanks! Current volume is reasonable but I will totally forward some your way if I get overwhelmed.

I forgot to mention in the body, but I should thank Yonatan for putting a draft of this together and encouraging me to post it. Thanks! I've been meaning to do this for a while.

This is a touchingly earnest comment. Also is your ldap qiurui? If those words mean nothing to you, I've got the wrong guy :)

0
Charles He
2y
(I cancelled the vote on my comment so it doesn't appear in the "newsfeed"; this is because it's sort of like a PM and of low interest to others.) No, I don't know what that means. But, yes, I'm earnest about my comment and thanks for the appreciation.