All of Yonatan Cale's Comments + Replies

Linking to Zvi's review of the podcast:

https://thezvi.wordpress.com/2024/04/15/monthly-roundup-17-april-2024/

Search for:

Will MacAskill went on the Sam Harris podcast

 

It's a negative review, but the opinions are Zvi's; I haven't heard the podcast myself.

do you have a rough guess at what % this is a deal breaker for?

It's less a question of "%" and more of "who will this intimidate".

Many of your top candidates will (1) currently be working somewhere, and (2) be looking at many EA-aligned jobs; if many of those jobs require a work trial, that could be a problem.

(I just hired someone who was working full time, and I assume that if we had required a work trial, he simply wouldn't have been able to do it without quitting.)

 

Easy ways to make this better:

  1. If you have flexibility (for example, whether the work trial is local or remote
... (read more)

I recommend adding "Sam Altman" to the title; it can act as a TL;DR. The current phrasing has a bit of a "click here to know more" vibe for me (like an ad) (probably unintentionally).

3
Will Howard
1mo
Personally I think the other members are actually the bigger news here, seeing as Sam being added back seemed like a foregone conclusion (or at least, the default outcome, and him not being added back would have been news). But anyway, my goal was just to link to the post without editorialising too much so that people can discuss it on the forum. For this I think a policy of copying the exact title from the article is good in general.

1.a and b.

I usually ask for feedback, and often it's something like “Idk, the vibe seemed off somehow. I can't really explain it.” Do you know what that could be?

This sounds like someone who doesn't actually want to give you feedback; my guess is they're scared of insulting you, or of some kind of legal liability, or something like that.

My focus wouldn't be on trying to interpret the literal words (like "what vibe") but rather making them comfortable to give you actual real feedback. This is a skill in itself which you can practice. Here's a draft to maybe... (read more)

5
Dawn Drescher
2mo
Oh, interesting… I'm autistic, and I've heard that autistic people give off subtly weird “uncanny valley”–type vibes even if they mask well. So I mostly just assume it's that. Close friends of mine who surely felt perfectly free to tell me anything were also at a loss to describe it. They said the vibes were less when I made a ponytail rather than had open hair, but they couldn't describe it. (Once I transition more, I hope people will just attribute the vibes to my probably-unfortunately-slightly-imperfect femininity and not worry about it. ^.^ I just need to plant enough weirdness lightning rods. xD) But he was US-based at the time, and I've heard employers in the US are much more careful with giving feedback than around here, so maybe it was just guardedness in that case.

I like your template! I remember another series of interviews where I easily figured out what the problems were (unless they were pretenses). I think I'm quite attuned (by dint of social anxiety) to subtle indications of disappointment and such. When I first mentioned earning to give in an interview, I noticed a certain hesitancy and found out that it was because the person was looking for someone with an intrinsic motivation for building hardware for supply-chain optimization, rather than someone who does it for the money. But in other cases I'm clueless, so the template can come into action!

Oh yes, I love this! I think I've done this in virtually every interview, simply because I actually didn't know something. One interviewer even asked me whether I knew the so-and-so design pattern. I asked what it was, and then concluded that I had never heard of it. Good call too, because that thing turned out to be ungoogleable. Idk whether he made it up or whether it was an invention of his CS professor, but being transparent about such things has served me well. :-D

I think for me it's mostly about what the other people in the room will think about me, not about consequences for me. I'm also afraid

I would be interested in something like this existing for Israel

I have thoughts on how to deal with this. My prior is that this won't work if I communicate it through text (but I have no idea why). Still, it seems like the friendly thing to do is to write it down

 

My recommendation on how to read this:

  1. If this advice fits you, it should read as "ah obviously, how didn't I think of that?". If it reads as "this is annoying, I guess I'll do it, okay…" - then something doesn't fit you well; I missed some preference of yours. Please don't make me a source of annoying social pressure
  2. Again, for some reason this works better w
... (read more)
3
Richard_Leyba_Tejada
2mo
"The goal of interviews is not to pass them (that's the wrong goal, I claim). The goals I recommend are: 1. Reducing uncertainty regarding what places will accept you. (so you should get many rejections, it's by-design, otherwise you're not searching well)" I get very anxious the closer I am to interview day. I am researching how to get really good and started doing mock interviews to practice.    Shifting to reducing uncertainty/research vs passing seems helpful.
4
Dawn Drescher
5mo
1.a. and b.: Reframing it like that sounds nice! :-D Seems like you solved your problem by getting shoes that are so cool, you never want to take them off! (I so wouldn't have expected someone to have a problem with that though…) I usually ask for feedback, and often it's something like “Idk, the vibe seemed off somehow. I can't really explain it.” Do you know what that could be?

2. I'm super noncompetitive… When it comes to EA jobs, I find it reassuring that I'm probably not good at making a good first impression because it reduces the risk that I replace someone better than me. But in non-EA jobs I'm also afraid that I might not live up to some expectations in the first several weeks when I'm still new to everything.

3. Haha! Excellent! I should do that more. ^.^

4. You mean as positive reinforcement? I could meet with a friend or go climbing. :-3

5. Aw, yes, spot on. I spent a significant fraction of my time over the course of 3–4 months practicing for Google interviews, and then never dared to apply anyway (well, one recruiter stood me up and I didn't try again with another). Some of the riddles in Cracking the Coding Interview were so hard for me that I could never solve them in 30 minutes, and that scared me even more. Maybe I should practice minimally next time to avoid that.

Thank you so much for all the tips! I think written communication works perfectly for me. I don't actually remember your voice well enough to imagine you speaking the text, but I think you've gotten everything across perfectly? :-D I'll only pounce on amazing opportunities for now and continue GoodX full-time, but in the median future I'll double down on the interviewing later in 2024 when our funds run out fully. Then I'll let you know how it went! (Or I hope I'll remember to!) For now I have a bunch more entrepreneurial ideas that I want to have at least tried. :-3

Seems to me, from your questions, that your bottleneck is specifically that you find the interview process stressful.

I think there's stuff to do about that, and it would potentially help with lots of other tradeoffs (for example, you'd happily interview at more places, get more offers, know what your alternatives are, …)

wdyt?

2
Dawn Drescher
5mo
That makes a lot of sense! I've been working on that, and maybe my therapist can help me too. It's gotten better over the years, but I used to feel intense shame over mistakes I made or might've made, for years after such situations, so I'm still afraid of my inner critic. Plus I feel rather sick on interview days, which is probably the stress.

TL;DR: The orgs know best whether they'd rather hire you or get the amount you'd donate. You can ask them.

I'd apply sometimes, and ask whether they prefer me, or the next-best candidate plus however much I'd donate. They have skin in the game and an incentive to answer honestly. I don't think it's a good idea to try to guess this alone.

 

I wrote more about this here; some orgs also replied (but note this was some time ago).

 

(If you're asking for yourself and not theoretically - then I'd ask you if you applied to all (or some?) of the positions that you think a... (read more)

4
Dawn Drescher
5mo
Thanks! Yeah, I've included that in the application form in one or two cases in the hope it'll save time (well, not only time – I find interview processes super stressful, so if I'm going to get rejected or decline, I'd like (emotionally) for that to happen as early as possible) but I suppose that's too early. I'll ask about it later like you do. I haven't gotten so far yet with any impact-focused org.

The main reason for this decision is that I failed to have (enough) direct impact.

 

Also, I was working on vague projects (like attempting AI Safety research), almost alone (I'm very social), with unclear progress, during covid; this was bad for my mental health.

 

Also, a friend invited me to come work with him. I asked if I could do a two-week trial period first, everyone said yes, it was really great, and the rest is (last month's) history

Yeah, I think maybe seeing a post like this would have helped me transition earlier too, now that you say so

I might disagree with this. I know, this is controversial, but hear me out (and only then disagree-vote :P )

 

So,

  1. Some jobs are 1000x+ more effective than the "typical" job. Like charities
  2. So picking one of the super-impactful ones matters, compared to the rest. Like charities
  3. But picking something that is 1x or 3x or 9x doesn't really matter, compared to the 1000x option (like charities; see the arithmetic note after this list)
  4. Sometimes people go for a 9x job, and they sacrifice things like "having fun" or "making money" or "learning" (or something else that is very important to them). This is
... (read more)
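(To make the scale in points 2-3 concrete - the numbers below are just the made-up multipliers from the list, nothing more. Moving from a 1x job to a 9x job captures

$$\frac{9 - 1}{1000 - 1} \approx 0.8\%$$

of the gain of moving to the 1000x option. That's the sense in which the 1x-vs-9x choice "doesn't really matter, compared to" the 1000x one.)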
2
Dawn Drescher
5mo
Haha! Where exactly do you disagree with me? My mind autocompleted that you'd proffer this objection: if you work for a 9x job, chances are that you're in an environment where most employees are there for altruistic reasons but prioritize differently, so that they believe the job is one of the best things you can do. Then you'll be constantly exposed to social pressure to accept a lower salary, less time off, more overtime, etc., which will cut into the donations, risk burnout, and reduce opportunities to learn new skills. What do you think? I'm a bit worried about this too and would avoid 9x jobs where I suspect this could happen. But having a bunch of altruistic colleagues sounds great otherwise. :-D I think I will need to aim for something a bit above background-economic-growth levels of good to pacify my S1 in the long run. ^.^

I quit trying to have direct impact and took a zero-impact tech job instead.

I expected to have a hard time with this transition, but I found a really good fit position and I'm having a lot of fun.

I'm not sure yet where to donate extra money. Probably MIRI/LTFF/OpenPhil/RethinkPriorities.

I also find myself considering using money to try fixing things in Israel. Or maybe to run away first and take care of the things and people that are close to me. I admit, focusing on taking care of myself for a month was (is) nice, and I do feel like I can make a difference with E2G.

(AMA)

2
jknowak
5mo
What were the main reasons for this decision? Was this motivated by how much you could earn in a typical zero-impact tech job? I mean - would you still "quit trying to have direct impact" if your zero-impact tech job didn't leave you with much extra money to donate?
2
Linch
5mo
Congrats Yonatan! Good luck with your work and I hope you stay safe out there!
2
ChrisSmith
5mo
Thanks for sharing! I occasionally worry that I'd struggle emotionally to go back to E2G/most of my impact being via donations, so this is a helpful anecdatum.
5
Dawn Drescher
5mo
Yeah, ETG seems really strong to me at the moment! What do you think is a good threshold, in terms of the annual USD donations the average EA can make, at which they should seriously consider ETG?
7
Ozzie Gooen
5mo
Congrats Yonatan! Good luck deciding where to donate! Seems like there are a lot of good options now. 

Thank you very much for splitting this up into sections in addition to posting the linkpost itself

3
Tristan Williams
6mo
Anytime :) I didn't do much, but glad to know it was helpful because I was debating whether to continue trying to organize for future stuff

Hey, is it a reasonable interpretation that EAIF is much much more interested in growing EA than in supporting existing EAs?

(I'm not saying this is a mistake)

 

P.S

Here are the "support existing EAs" examples I saw:

  • "[funding a] PhD student to attend a one-month program" [$100k tier] - this seems like a very different grant than the other examples, I'm even surprised to see this under EAIF rather than LTFF
  • "A shared workspace for the EA community" [$5M tier] - totally supports existing EAs
  • "an open-source Swapcard alternative" [$10M tier] - I'm surprised this isn't under CEA

Hey, just saying explicitly that I linked to opinions of other people, not my own.

(and I'm suggesting that you reply there if you have questions for them)

2
NickLaing
6mo
Thanks, I'm interested that you think occupation is a real possibility, with another leadership taking over control - that would mean a battle and complete takeover, I suppose; it's hard to imagine Hamas surrendering easily...

AMA about Israel here:
https://www.lesswrong.com/posts/zJCKn4TSXcCXzc6fi/i-m-a-former-israeli-officer-ama

Against "the burden of proof is on X"

Instead, I recommend: "My prior is [something], here's why".

I'm even more against "the burden of proof for [some policy] is on X" - I mean, what does "burden of proof" even mean in the context of policy? But hold that thought.

 

An example that I'm against:

"The burden of proof for vaccines helping should be on people who want to vaccinate, because it's unusual to put something in your body"

I'm against it because 

  1. It implicitly assumes that vaccines should be judged as part of the group "putting something in your
... (read more)
1
titotal
7mo
So, I'll give two more examples of how burden of proof gets used typically:

1. You claim that you just saw a unicorn ride past. I say that the burden of proof is on you to prove it, as unicorns do not exist (as far as we know).
2. As prime minister, you try to combat obesity by taxing people in proportion to their weight. I say that the burden of proof is on you to prove that such a policy would do more good than harm.

I think in both these cases, the statements made are quite reasonable. Let me try to translate the objections into your language:

1. My prior of you seeing a unicorn is extremely low, because unicorns do not exist (as far as we know).
2. My prior of this policy being a good idea is low, because most potential interventions are not helpful.

These are fine, but I'm not sure I prefer either of these. It seems like the other party can just say "well my priors are high, so I guess both our beliefs are equally valid". I think "burden of proof" translates to "you should provide a lot of proof for your position in order for me or anyone else to believe you". It's a statement of what people's priors should be.
1
Azad Ellafi
7mo
I've always viewed burden of proof as a dialectical tool. To say one has the burden of proof is to say that, if they meet the following set of necessary and jointly sufficient conditions, they have the obligation in the discussion to provide justification for the claim:

1. You've made a claim.
2. You're attempting to convince another of the claim.

If (1) isn't the case, then of course you don't have any burden to provide justification. If (2) isn't the case (say, everyone already agrees with the claim or someone just wants your opinion on something), it's not clear to me you have some obligation to provide justification either. On this account, it's not like burden-of-proof talk favors a side. And I'm not sure it implicitly assumes anything or is a conversation stopper. So maybe we can "keep burden of proof talk" by using this construal while also focusing more on explicit discussion of priors. Idk, just a thought I had while reading this.

I agree that the question of "what priors to use here" is super important.

For example, if someone chose the prior "we usually don't bring new, more intelligent life forms to live with us, so the burden of proof is on doing so" - would that be valid?

Or if someone said "we usually don't enforce pauses on writing new computer programs" - would THAT be valid?

imo: the question of "what priors to use" is important and not trivial. I agree with @Holly_Elmore that just assuming the priors here is skipping over some important stuff. But I disagree that "... (read more)

6
Holly_Elmore
7mo
*As far as my essay (not posted yet) was concerned, she could have stopped there, because this is our crux.
5
Davidmanheim
7mo
In a debate, which is what was supposed to be happening, the point is to make claims that either support or refute the central claim. That's what Holly was pointing out - this is a fundamental requirement for accepting Nora's position. (I don't think that this is the only crux - "AI Safety is gonna be easy" and "AI is fully understandable" are two far larger cruxes, but they largely depend on this first one.)

Hey Alex :)

1.

I don't think it's possible to write a single page that gives the right message to every user

My own attempt to solve this is to have the article MAINLY split up into sections that address different readers, which you can skip to.

 

2.

the second paragraph visible on that page is entirely caveat.

2.2. [edit: seems like you agree with this. TL;DR: too many caveats already] My own experience from reading EA material in general, and 80k material specifically, is that there are going to be lots of caveats, which I didn't (and maybe still don't) know h... (read more)

2
alex lawsen (previously alexrjl)
7mo
I don't think it's worth me going back and forth on specific details, especially as I'm not on the web team (or even still at 80k), but these proposals are different to the first thing you suggested. Without taking a position on whether this structure would overall be an improvement, it's obviously not the case that just having different sections for different possible users ensures that everyone gets the advice they need. For what it's worth, one of the main motivations for this being an after-hours episode, which was promoted on the EA forum and my twitter, is that I think the mistakes are much more common among people who read a lot of EA content and interact with a lot of EAs (which is a small fraction of the 80k website readership). The hope is that people who're more likely than a typical reader to need the advice are the people most likely to come across it, so we don't have to rely purely on self-selection.

voted for calendar sync, may the world be sane again!!

I love that you wrote such a readable summary!

 

More thoughts:

  1. Regarding "Taking 80,000 Hours’ rankings too seriously" (and specifically thinking that you MUST work on AI Safety) - maybe it's worth writing something about that on the website, in the section about AI Safety?
    1. (I think I share 80k's views both on the importance of AI Safety and also that not everyone should go do that)
  2. I love that you talk so much about personal fit. 
    1. I think this is something that historically 80k says a lot but readers don't internalize. Not sure how to fix it but I'm h
... (read more)
4
alex lawsen (previously alexrjl)
7mo
Responding here to parts of the third point not covered by "yep, not everyone needs identical advice, writing for a big audience is hard" (same caveats as the other reply):

No, I don't think it's always bad to switch a lot. The scenario you described, where the person in question gets a 1 OOM impact bump per job switch and then also happens to end up in a role with excellent personal fit, is obviously good, though I'm not sure there's any scenario discussed in the podcast that wouldn't look good if you made assumptions that generous about it.

The thing I describe as being my policy in the episode isn't a hypothetical example, it's an actual policy (including the fact that the bounds are soft in my case, i.e. I don't actively look before the time commitment is through, and have a strong default but not an unbreakable rule to turn down other opportunities in the meantime). I think that taking a 20% time hit to look for other things would have been a huge mistake in my case. The OP job had nothing to do with proactive exploration, as I wasn't looking at the time (though having got through part of the process, I brought the period of exploration I'd planned for winter 2023 forward by a few months, so by the time I got the OP offer I'd already done some assessment of whether other things might be competitive).

Not 100% sure I followed this, but if what you're saying is "don't just sit and think on your own when you decide to do the career exploration thing, get advice from others (including 80k)", then yes, I think that's excellent advice. In making my own decision I, among other things:

* Spoke to my partner, some close friends, my manager at 80k (Michelle), and my (potential) new manager at Open Phil (Luke)
* Wrote and shared a decision doc
* Had 'advising call' style conversations with three people (to whom I'm extremely grateful), who I asked because I thought they'd make good advisors, and I didn't want to speak to one of 80k's actual advisors because that's a r
7
alex lawsen (previously alexrjl)
7mo
Thanks for asking these! Quick reaction to the first couple of questions; I'll get to the rest later if I can (personal opinions, I haven't worked on the web team, no longer at 80k, etc.):

"I don't think it's possible to write a single page that gives the right message to every user" - having looked at the pressing problems page, the second paragraph visible on that page is entirely caveat. It also links to an FAQ, where multiple parts of the FAQ directly talk about whether people should just take the rankings as given. When you then click through to the AI problem profile, the part of the summary box that talks about whether people should work on AI reads as follows:

Frankly, for my taste, several parts of the website already contain more caveats than I would use about ways the advice is uncertain and/or could be wrong, and I think moves in this direction could just as easily be patronising as helpful.

Hey! This sounds super fun, I'd be happy to talk about maybe joining or maybe you have recommendations for similar orgs that I might want to look at

 

Specifically

  • marketplaces are close to my heart. I live in Israel, and when we got reasonable Amazon deliveries (within "only" 1-2 weeks), it improved things here so much, imo. Also, we have a lame taxi app which is still so-much-better than nothing (it does have ratings!). Each such thing helps so much (it's a topic I could talk a lot about) - and we're a 1st-world country! I can't even imagine what's goi
... (read more)
1
Luke Eure
7mo
Hi Yonatan, thank you so much for the super kind words! I had not thought at all about posting on the 80k hours board - I will talk with our HR person about whether she thinks that makes sense. I'd love to talk with you - I'll DM you

Tiny suggestion: 

 

In the "Career development: Technical tag"

add alt text that appears when my mouse is over it, similarly to what you did here:

(which looks really good and clear to me, I love it)
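(To illustrate what I mean - hover text on the web is usually the `title` attribute. A minimal sketch, assuming the tag is rendered as a React component; the component, prop names, and description text here are hypothetical, not the forum's actual code:)

```typescript
import React from "react";

// Hypothetical tag badge: `title` renders as the browser's native hover
// tooltip, so the explanation appears when the mouse is over the tag.
function TagBadge({ name, description }: { name: string; description: string }) {
  return <span title={description}>{name}</span>;
}

// Example usage (the description wording is made up):
const example = (
  <TagBadge
    name="Career development: Technical"
    description="Roles aimed mainly at building technical skills"
  />
);
```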

Update:

I love that the 80k job board team added this filter:

as well as in the "area":

And the tags (and even title!) in some of the postings:

(and maybe more things I didn't notice yet?)

 

This seems both well communicated (I won't take a job that I mistakenly think 80k rates as high impact) and easy to configure based on what I'm actually looking for.

I really like it, and I'll edit the post to indicate that the original criticism I had is mostly resolved.

Kudos from me to @kush_kan and the rest of the team


(I mostly agree)

When I wrote about deontology, I didn't mean "we must help all people who are stuck in their jobs". I meant "we must not hire people who will be stuck in their job while arguing that it's ok to do so for the greater good"

1)

In links to tags, like this:

https://forum.effectivealtruism.org/s/HqxvGsczdf4yLB9FG

Also add a human-readable (slug) part to the url, similarly to what you do with posts:

https://forum.effectivealtruism.org/posts/NhSBgYq55BFs7t2cA/ea-forum-feature-suggestion-thread

 

2)

If someone enters a link that doesn't have the human-readable part, like 

https://forum.effectivealtruism.org/posts/NhSBgYq55BFs7t2cA

then redirect to a url that does have the human-readable part

 

P.S

I really can't think of anything lower priority than this :P but thought I'd write... (read more)
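(A minimal sketch of what the redirect could look like server-side - Express-style TypeScript. The `lookupPostTitle` helper and its data are stand-ins of my own, not the forum's actual code; only the `/posts/<id>/<slug>` URL shape comes from the examples above:)

```typescript
import express from "express";

const app = express();

// Stand-in for a real database lookup (hypothetical).
async function lookupPostTitle(postId: string): Promise<string | null> {
  const titles: Record<string, string> = {
    NhSBgYq55BFs7t2cA: "EA Forum feature suggestion thread",
  };
  return titles[postId] ?? null;
}

// Turn a title into the human-readable slug part of the URL, e.g.
// "EA Forum feature suggestion thread" -> "ea-forum-feature-suggestion-thread".
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// A post URL that arrives without the slug gets redirected to the
// canonical URL that includes it.
app.get("/posts/:postId", async (req, res) => {
  const title = await lookupPostTitle(req.params.postId);
  if (title === null) return res.status(404).send("Not found");
  res.redirect(301, `/posts/${req.params.postId}/${slugify(title)}`);
});

app.listen(3000);
```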

4
JP Addison
7mo
💙

(vs how often do you direct someone away from something else to work on AI Safety)

I agree that work trials are a different category - and seem ok to me.

It's not an abuse of power dynamics or anything like that.

If you demand work trials (or various other things) - you will get fewer candidates, but that's ok; it's a tradeoff you as an employer can choose to make when nobody is dependent on you - people can just choose not to apply.

No?

@Rockwell 

 

P.S

I sometimes try to help orgs with hiring, so I'm very interested in noticing if I'm wrong here

4
Joseph Lemien
7mo
TL;DR: I think you are right; it is generally fine.

I do a lot of thinking about hiring, so I'll chime in here. I think that work trials (work sample tests that are used to evaluate a candidate's skills during a hiring process) have plenty of potential for abuse, but generally work fine the way that EA orgs tend to do them. Off the top of my head, the main aspects that I would look at to make a judgement if it is fine or not (setting aside fairness/justice/accessibility aspects, and just focusing on power/exploitation dynamics):

* Time required. A lot of useful skills don't need 4 or 5 hours to be evaluated. My hypothesis (currently untested) is that most work trials could be 45 minutes or less.
* Payment given. This is pretty clear: giving someone money in exchange for work seems more reasonable and less exploitative than asking someone to do some work for free.
* Whether it is a piece of real work. The worst version would just be to find a discrete chunk of work and have a job applicant do that. When combined with not being paid, this is the most obviously exploitative thing, because a company can literally just use candidates as free labor for any discrete tasks.
* Respect/communication. This is a bit more fuzzy, but the mental model I have of the bad version of this is a candidate submitting a piece of work into the void and never hearing anything back. The best version of this involves feedback on what the candidate did well and what went poorly.

I consider power-dynamics safeguards - ones that make sure, for example, that anyone can quit their job and still have a place to stay - to be deontological. You won't change my mind easily using a cost-benefit analysis, if the argument is something like "for the greater good, it's ok to make it very hard for some people to quit, because it will save EA money that can be used to save more lives".

This is similar to how it would be hard to convince me that stealing is a good idea - even if we can use the money to buy bed nets.

I can elaborate if you don't agree... (read more)

1
Ebenezer Dukakis
7mo
There are millions of people around the world who live paycheck to paycheck, and run the risk of becoming homeless if they quit their jobs. We don't have the resources to help all of those people, and I'm not immediately seeing how deontology helps us figure out how to allocate our limited resources between this and various other obligations we may have. [Edit: maybe this section was obtuse on my part -- see Yonatan's reply below.]

I think it is really valuable for people in EA to feel comfortable pushing back against their boss. (I see strong consequentialist arguments for this. Those arguments are why I will focus on people in EA, rather than non-EAs living paycheck to paycheck, for the rest of this comment.) I think there are ways to achieve this cost-effectively. For example:

* When possible, have employee housing arrangements made directly with a landlord or similar person, rather than routing through someone they have a working relationship with.
* Agree in advance that any employee who lives for free in employer-provided housing gets to continue living there for, say, 3 months if they quit/get fired.
* Build things like Basefund to the point where no EA thinks it is very hard to quit their job. (For example, a hypothetical Basefund+ could guarantee that EA employees who quit/get fired always receive a generous severance package. This idea might seem costly at first, but because the money is going to an EA instead of a landlord, it is much more likely to e.g. be donated to an effective charity.)
* Encourage EAs to live with non-EAs when all else is equal.

Hey! Unrelated to the post, but if this is still an open problem and you're a software developer, consider messaging me (here's my CV for my experience). I don't promise magic pills, but who knows.

[This comment is no longer endorsed by its author]

1. In Israel, I'm not allowed to buy melatonin without a prescription. 

Also, delivery from the U.S. is expensive and slow. Would you react this way if I asked an employee from the U.S. to bring melatonin?

(I never had employees; this is hypothetical)

2. How about crossing the road when there is a red light and no cars around, when going to eat?

Everyone does that here.

My point is: I'm guessing you don't care about what is strictly legal or not.

I'm guessing you have some other standard. 

Like maybe something about abusing power dynamics, or maybe something else

what do you think?

I don't think employers should tell employees to do illegal things; it's about both power dynamics and legality.

I would very strongly recommend that employers do not ask employees to illegally move melatonin across borders.

Obviously jaywalking is much less bad and asking your employees to jaywalk is much less bad - but I would still recommend that employers do not ask employees to jaywalk. Generally I'd say that it's much less bad to ask your employees to do an illegal thing that lots of people do anyway, but I would recommend that employees still do not a... (read more)

I expect the tradeoff here to work better the easier it is to apply

Hey,

It sounds to me like you're mainly focusing on

  1. Nominating a representative (who gets training)
  2. Filing a complaint

 

This seems to me (not that I'm an expert, at all) like there's still something missing: having the representative be actually trustworthy. I have no idea how training could accomplish that.

I know you personally and my sense is that you deeply care about this, your heart is in it, you deeply care about listening and understanding people's needs, and even if you won't know how to do something - I could communicate my needs to you and nothi... (read more)

Naive idea (not trying to resolve anything that already happened):

Have people declare publicly if they want, for themselves, a norm where you don't say bad things about them and they don't say bad things about you.

If they say yes, then you could take it into account in how you filter evidence about them.

Thanks for making me (us?) a bit less confused about legal things!

Writing such a post for EAIF (even a 5x shorter version) would help me get an idea of what the bar is for a community project to be ~worthwhile, and especially to easily say "no, this isn't worthwhile".

I'm saying this because even this LTFF post updated my opinion about that.

I really liked this post, and specifically the framing of "what will a marginal donation be" (as opposed to "what's the best thing we ever did" or so). 

 

[ramblings from my subjective viewpoint of EA-software]

  1. It reminds me of how developers consider joining an EA org and think "well, seems like all your stuff is already built, no?". I think writing about the marginal things the org wants to build and needs help with would go a long way for many job posts
  2. This somewhat updated me towards "it's a bad idea to fund me, my work isn't as important as all this" and also towards "maybe I better do some E2G so you can fund more things like this"

My long thoughts:

1. 80k don't claim to only advertise impactful jobs

They also advertise jobs that help build career capital, and they're not against posting jobs that cause harm (and it's often/always not clear which is which). See more in this post.

They sometimes add features like marking "recommended orgs" (which I endorse!), and sometimes remove those features ( 😿 ).

2. 80k's career guide about working at AI labs doesn't dive into "which lab"

See here. Relevant text:

Recommended organisations

We’re really not sure. It seems like OpenAI, Google DeepMind, and

... (read more)
3
Guy Raveh
8mo
And there's always the other option that I (unpopularly) believe in - that better publicly available AI capabilities are necessary for meaningful safety research, thus AI labs have contributed positively to the field.

Nor can I speak to any of my friends or family about it, because they think the whole thing is ridiculous, and I’ve put myself in something of a boy who cried wolf situation by getting myself worked up over a whole host of worst-case scenarios over the years.

This seems important to me, having people to talk to.

How about sharing that you have uncertainty and aren't sure how to think about it, or something like that? Seems different from "hey everyone, we're definitely going to die this time" and also seems true to your current state (as I understand it from this post)

Do you [or anyone else] have an opinion about my project for free career coaching for EA developers?

 

I have mixed feelings about it myself

1
Devon Fritz
8mo
To me, based on what you said, you have provided a lot of value to many people at relatively low cost to yourself. I have the impression that the time was quite counterfactual given you didn't seem to have many burning projects at the time. So, seems pretty good to me on the face of it although for every given detail you know way more than I do!

What about social norms, like "EA should encourage people to take care of their mental health even if it means they have less short-term impact"?

2
Ozzie Gooen
9mo
Good question. First, I have a different issue with that phrase, as it's not clear what "EA" is. To me, EA doesn't seem like an agent. You could say "...CEA should" or "...OP should". Normally, I prefer that one say "I think X should". There are some contexts, specifically small ones (talking to a few people, where it's clearly conversational), in which saying "X should do Y" clearly means "I feel like X should do Y, but I'm not sure". And there are some contexts where it means "I'm extremely confident X should do Y". For example, there's a big difference between saying "X should do Y" to a small group of friends when discussing uncertain claims, and writing a mass-market book titled "X should do Y".

Hey Rakefet :)

 

My short thoughts on this:

  • Makes sense, I think I agree
  • I don't think my opinion is affected much by the study
    • Specifically, my opinions on how to be successful
    • And also my opinions on whether to give "to everyone, all the time, in any way"
    • I don't think I'd change my mind about this even if you presented me with a study showing the opposite result (for example: I wouldn't start giving to everyone all the time in any way, and I wouldn't stop giving completely)
    • Also, I'm not strongly optimizing for "being successful" anyway (and I don't think almo
... (read more)

TL;DR: I don't like talking about "burden of proof"

 

I prefer talking about "priors".

Seems like you (@Greg_Colbourn) have priors that AI labs will cause damage, and I'd assume @Benjamin Hilton would agree with that?

I also guess you both have priors that ~random (average) capabilities research will be net negative?

If so, I suggest we should ask if the AI lab (or the specific capabilities research) has overcome that prior somehow.

wdyt?
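(A way to make "overcoming a prior" precise, in case it helps - this formalisation is my addition, not something either of them wrote. In odds form, Bayes' rule says the evidence has to carry a large enough likelihood ratio to flip low prior odds:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)}$$

For example, under a hypothetical 1:9 prior that a given lab is net positive, the lab would need to point to evidence more than 9 times likelier under "we're net positive" than under "we're not" before the posterior goes above even odds.)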

2
Yonatan Cale
9mo
Whoever downvoted this, I'd really prefer if you tell me why. You can do it anonymously: https://docs.google.com/forms/d/e/1FAIpQLSca6NOTbFMU9BBQBYHecUfjPsxhGbzzlFO5BNNR1AIXZjpvcw/viewform
4
Greg_Colbourn
9mo
I don't think any of the big AI labs have overcome that prior, but I also have the prior that their safety plans don't even make sense theoretically - hence the "burden of proof" is on them to show that it is possible to align the kind of AI they are building. Another thing pointing in the opposite direction.

I have a crazy opinion that everyone's invited to disagree with: often, long comments on the EA Forum would be better split up into a few smaller comments, so that others could reply, agree/disagree, or (as you point out) emoji-react to each one separately.

This is a forum-culture thing; right now it would be weird to respond with many small comments, but it would be better to make it not-weird

What do you think?

2
Nathan Young
9mo
I strongly agree, but people don't really like this. There are, I sense, two modes of thinking about comments:

* There to start discussion
* There to find consensus

Long/interesting comments are good for the former. Short, clear comments are good for the latter.

For transparency: I'd personally encourage 80k to be more opinionated here; I think you're well positioned and have relevant abilities and respect and critical-mass-of-engineers-and-orgs. Or at least as a fallback (if you're not confident in being opinionated) - I think you're well positioned to make a high-quality discussion about it, but that's a long story and maybe off topic.

3
Benjamin Hilton
9mo
I don't currently have a confident view on this beyond "We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs." But I agree that if we could reach a confident position here (or even just a confident list of considerations), that would be useful for people — so thanks, this is a helpful suggestion!

TL;DR: "which lab" seems important, no?


You wrote:

Don’t work in certain positions unless you feel awesome about the lab being a force for good.

First of all I agree, thumbs up from me! 🙌

 

But you also wrote:

Recommended organisations

We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs.

 

I assume you don't recommend people go work for whatever lab "currently [seems like they're] taking existential risk more seriously than other labs"?

 

Do you have further ... (read more)


I'd expect clicking on my profile picture to take me to my profile (currently the click doesn't do anything) (but it does have a pretty animation)

4
JP Addison
9mo
I just added this to a recent related improvement. Should be fixed when that Pull Request gets merged.