do you have a rough guess at what % this is a deal breaker for?
It's less a question of "%" and more of "who will this intimidate".
Many of your top candidates will (1) currently be working somewhere, and (2) be looking at many EA-aligned jobs, and if many of those require a work trial then that could be a problem.
(I just hired someone who was working full time, and I assume that if we'd required a work trial, he just wouldn't have been able to do it without quitting)
Easy ways to make this better:
I recommend adding "Sam Altman" to the title; it can act as a TL;DR. The current phrasing has a bit of a "click here to know more" vibe for me (like an ad) (probably unintentional)
1.a and b.
I usually ask for feedback, and often it's something like “Idk, the vibe seemed off somehow. I can't really explain it.” Do you know what that could be?
This sounds like someone who doesn't actually want to give you feedback; my guess is they're scared of insulting you, or of some kind of legal liability, or something like that.
My focus wouldn't be on trying to interpret the literal words (like "what vibe") but rather on making them comfortable enough to give you actual, real feedback. This is a skill in itself, which you can practice. Here's a draft to maybe...
I have thoughts on how to deal with this. My prior is that this won't work if I communicate it through text (but I have no idea why). Still, it seems like the friendly thing would be to write it down
My recommendation on how to read this:
Seems to me from your questions that your bottleneck is specifically that you find the interview process stressful.
I think there's stuff to do about that, and it would potentially help with lots of other tradeoffs (for example, you'd happily interview in more places, get more offers, know what your alternatives are, ...)
wdyt?
TL;DR: The orgs know best whether they'd rather hire you or receive the amount you'd donate. You can ask them.
I'd sometimes apply, and ask whether they'd prefer me, or the next-best candidate plus however much I'd donate. They have skin in the game and an incentive to answer honestly. I don't think it's a good idea to try guessing this alone
I wrote more about this here, some orgs also replied (but note this was some time ago)
(If you're asking for yourself and not theoretically - then I'd ask whether you applied to all (or some?) of the positions that you think a...
The main reason for this decision is that I failed to have (enough) direct impact.
Also, I was working on vague projects (like attempting AI Safety research), almost alone (and I'm very social), with unclear progress, during covid; this was bad for my mental health.
Also, a friend invited me to come work with him. I asked if I could do a two-week trial period first, everyone said yes, it was really great, and the rest is (last month's) history
Yeah, now that you say so, I think seeing a post like this might have helped me transition earlier too
I might disagree with this. I know, this is controversial, but hear me out (and only then disagree-vote :P)
So,
I quit trying to have direct impact and took a zero-impact tech job instead.
I expected to have a hard time with this transition, but I found a really good fit position and I'm having a lot of fun.
I'm not sure yet where to donate extra money. Probably MIRI/LTFF/OpenPhil/RethinkPriorities.
I also find myself considering using money to try fixing things in Israel. Or maybe to run away first and take care of the things and people that are close to me. I admit, focusing on taking care of myself for a month was (is) nice, and I do feel like I can make a difference with E2G.
(AMA)
Thank you very much for splitting this up into sections in addition to posting the linkpost itself
Hey, is it a reasonable interpretation that EAIF is much, much more interested in growing EA than in supporting existing EAs?
(I'm not saying this is a mistake)
P.S.
Here are the "support existing EAs" examples I saw:
Hey, just saying explicitly that I linked to opinions of other people, not my own.
(and I'm suggesting that you reply there if you have questions for them)
AMA about Israel here:
https://www.lesswrong.com/posts/zJCKn4TSXcCXzc6fi/i-m-a-former-israeli-officer-ama
Instead, I recommend: "My prior is [something], here's why".
I'm even more against "the burden of proof for [some policy] is on X" - I mean, what does "burden of proof" even mean in the context of policy? But hold that thought.
An example that I'm against:
"The burden of proof for vaccines helping should be on people who want to vaccinate, because it's unusual to put something in your body"
I'm against it because
I agree that the question of "what priors to use here" is super important.
For example, if someone were to choose priors like "we usually don't bring new, more intelligent life forms to live with us, so the burden of proof is on doing so" - would that be valid?
Or if someone were to say "we usually don't enforce pauses on writing new computer programs" - would THAT be valid?
imo: the question of "what priors to use" is important and not trivial. I agree with @Holly_Elmore that just assuming the priors here skips over some important stuff. But I disagree that "...
Hey Alex :)
1.
I don't think it's possible to write a single page that gives the right message to every user
My own attempt to solve this is to have the article MAINLY split up into sections that address different readers, which you can skip to.
2.
the second paragraph visible on that page is entirely caveat.
2.2. [edit: seems like you agree with this. TL;DR: too many caveats already] My own experience from reading EA material in general, and 80k material specifically, is that there are going to be lots of caveats, which I didn't (and maybe still don't) know h...
I love that you wrote such a readable summary!
More thoughts:
Hey! This sounds super fun. I'd be happy to talk about maybe joining - or maybe you have recommendations for similar orgs that I might want to look at
Specifically
Tiny suggestion:
In the "Career development: Technical tag"
add alt text that appears when my mouse is over it, similarly to what you did here:
(which looks really good and clear to me, I love it)
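To illustrate what I mean - a minimal sketch, assuming the tag is rendered as a plain link (the selector and the hover text below are hypothetical, not the forum's actual code):

```typescript
// Hypothetical sketch: give a tag link a native browser tooltip via the
// `title` attribute - browsers show it when the mouse rests on the element.
// The selector and the description text are made up for illustration.
const tagLink = document.querySelector<HTMLAnchorElement>(
  'a[href*="career-development-technical"]',
);
if (tagLink !== null) {
  tagLink.title = "Roles that help you build technical skills";
}
```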
Update:
I love that the 80k job board team added this filter:
as well as in the "area":
And the tags (and even title!) in some of the postings:
(and maybe more things I didn't notice yet?)
This seems both well communicated (I won't take a job that I mistakenly think 80k rates as high impact) and easy to configure based on what I'm actually looking for.
I really like it, and I'll edit the post to indicate that the original criticism I had is mostly resolved.
Kudos from me to @kush_kan and the rest of the team
(I mostly agree)
When I wrote about deontology, I didn't mean "we must help all people who are stuck in their jobs". I meant "we must not hire people who will be stuck in their job while arguing that it's ok to do so for the greater good"
1)
In links to tags, like this:
https://forum.effectivealtruism.org/s/HqxvGsczdf4yLB9FG
Also add a human-readable (slug) part to the url, similarly to what you do with posts:
https://forum.effectivealtruism.org/posts/NhSBgYq55BFs7t2cA/ea-forum-feature-suggestion-thread
2)
If someone enters a link that doesn't have the human-readable part, like
https://forum.effectivealtruism.org/posts/NhSBgYq55BFs7t2cA
then redirect to a url that does have the human-readable part
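To illustrate (2) - a minimal sketch, assuming an Express-style server; lookupSlugById, the routes, and the in-memory lookup are hypothetical stand-ins, not the forum's actual code:

```typescript
import express from "express";

const app = express();

// Stand-in for a real database lookup of a post's slug by its id.
async function lookupSlugById(postId: string): Promise<string | null> {
  const fakeDb: Record<string, string> = {
    NhSBgYq55BFs7t2cA: "ea-forum-feature-suggestion-thread",
  };
  return fakeDb[postId] ?? null;
}

// Suggestion (2): a post URL missing its slug segment redirects to the
// canonical URL that includes the human-readable part.
app.get("/posts/:id", async (req, res) => {
  const slug = await lookupSlugById(req.params.id);
  if (slug) {
    // 301 so browsers and crawlers remember the readable, canonical URL.
    res.redirect(301, `/posts/${req.params.id}/${slug}`);
  } else {
    res.status(404).send("Post not found");
  }
});

// The canonical route, with the slug included, serves the post itself.
app.get("/posts/:id/:slug", (req, res) => {
  res.send(`Post ${req.params.id} ("${req.params.slug}")`);
});

app.listen(3000);
```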
P.S.
I really can't think of anything lower priority than this :P but thought I'd write...
I agree that work trials are a different category - and seem ok to me.
It's not an abuse of power dynamics or anything like that.
If you demand work trials (or various other things) - you will get fewer candidates, but that's ok; it's a tradeoff you as an employer can choose to make when nobody is dependent on you, since people can just choose not to apply.
No?
P.S.
I sometimes try helping orgs with hiring, so I'm very interested in noticing if I'm wrong here
I consider power-dynamics safeguards that make sure, for example, that anyone can quit their job and still have a place to stay - to be deontological. You won't change my mind easily using a cost-benefit analysis if the argument is something like "for the greater good, it's ok to make it very hard for some people to quit, because it will save EA money that can be used to save more lives".
This is similar to how it would be hard to convince me that stealing is a good idea - even if we can use the money to buy bed nets.
I can elaborate if you don't agree...
Also, delivery is expensive and slow from the U.S. Would you react this way if I asked an employee from the U.S. to bring melatonin?
(I never had employees, this is hypothetical)
Everyone does that here.
I'm guessing you have some other standard.
Like maybe something about abusing power dynamics, or maybe something else
what do you think?
I don't think employers should tell employees to do illegal things; it's about both power dynamics and legality.
I would very strongly recommend that employers do not ask employees to illegally move melatonin across borders.
Obviously jaywalking is much less bad, and asking your employees to jaywalk is much less bad - but I would still recommend that employers do not ask employees to jaywalk. Generally I'd say that it's much less bad to ask your employees to do an illegal thing that lots of people do anyway, but I would recommend that employees still do not a...
Hey,
It sounds to me like you're mainly focusing on
This seems to me (not that I'm an expert, at all) like it's still missing something: having the representative be actually trustworthy. I have no idea how training could accomplish that.
I know you personally and my sense is that you deeply care about this, your heart is in it, you deeply care about listening and understanding people's needs, and even if you won't know how to do something - I could communicate my needs to you and nothi...
Naive idea (not trying to resolve anything that already happened):
Have people declare publicly if they want, for themselves, a norm where you don't say bad things about them and they don't say bad things about you.
If they say yes then you could take it into account with how you filter evidence about them.
I really liked this post, and specifically the framing of "what will a marginal donation be" (as opposed to "what's the best thing we ever did" or so).
[ramblings from my subjective viewpoint of EA-software]
My long thoughts:
They also advertise jobs that help build career impact, and they're not against posting jobs that cause harm (and it's often/always not clear which is which). See more in this post.
They sometimes add features like marking "recommended orgs" (which I endorse!), and sometimes remove those features (😿).
See here. Relevant text:
...Recommended organisations
We’re really not sure. It seems like OpenAI, Google DeepMind, and
Nor can I speak to any of my friends or family about it, because they think the whole thing is ridiculous, and I’ve put myself in something of a boy who cried wolf situation by getting myself worked up over a whole host of worst-case scenarios over the years.
This seems important to me, having people to talk to.
How about sharing that you have uncertainty and aren't sure how to think about it, or something like that? Seems different from "hey everyone, we're definitely going to die this time" and also seems true to your current state (as I understand it from this post)
Do you [or anyone else] have an opinion about my project for free career coaching for EA developers?
I have mixed feelings about it myself
Whoever downvoted this, I'd really prefer if you tell me why
You can do it anonymously:
https://docs.google.com/forms/d/e/1FAIpQLSca6NOTbFMU9BBQBYHecUfjPsxhGbzzlFO5BNNR1AIXZjpvcw/viewform
What about social norms, like "EA should encourage people to take care of their mental health even if it means they have less short-term impact"?
Hey Rakefet :)
My short thoughts on this:
TL;DR: I don't like talking about "burden of proof"
I prefer talking about "priors".
Seems like you (@Greg_Colbourn) have priors that AI labs will cause damage, and I'd assume @Benjamin Hilton would agree with that?
I also guess you both have priors that ~random (average) capabilities research will be net negative?
If so, I suggest we should ask if the AI lab (or the specific capabilities research) has overcome that prior somehow.
wdyt?
I have a crazy opinion that everyone's invited to disagree with: often, long comments on the EA Forum would be better split up into a few smaller comments, so that others could reply separately, agree/disagree separately, or (as you point out) emoji-react separately.
This is a forum-culture thing; right now it would be weird to respond with many small comments, but it would be better to make it not-weird
What do you think?
For transparency: I'd personally encourage 80k to be more opinionated here; I think you're well positioned and have the relevant abilities and respect and critical-mass-of-engineers-and-orgs. Or at least as a fallback (if you're not confident in being opinionated) - I think you're well positioned to host a high-quality discussion about it, but that's a long story and maybe off topic.
TL;DR: "which lab" seems important, no?
You wrote:
Don’t work in certain positions unless you feel awesome about the lab being a force for good.
First of all I agree, thumbs up from me! 🙌
But you also wrote:
Recommended organisations
We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs.
I assume you don't recommend people go work for whatever lab "currently [seems like they're] taking existential risk more seriously than other labs"?
Do you have further ...
I'd expect clicking on my profile picture to take me to my profile (currently the click doesn't do anything, though it does have a pretty animation)
Linking to Zvi's review of the podcast:
https://thezvi.wordpress.com/2024/04/15/monthly-roundup-17-april-2024/
Search for:
It's a negative review, but the opinions are Zvi's; I didn't listen to the podcast myself.