All of Yonatan Cale's Comments + Replies

Yes,

I could use help understanding the demand for

  1. Similar features but fewer bugs
  2. Focusing on CEA's use case (making more high quality connections, right?)

Can you help me with this @Eli Rose🔸 ?

2
Eli Rose🔸
Cool! Re how to build it, I'd just talk to CEA here or maybe EAG goers, don't think I have any insight to add.

Hey Ben :)

  1. This analysis updated+surprised me, thx
  2. I don't currently want to E2G, but if you (or others reading this comment) want to hear about cto/founder positions that I'm saying "no" to (but would probably want to hear about if I was going to do E2G as a cto/founder), lmk [[ + disclaimers about "won't always be relevant" and so on ]]
6
Jamie_Harris
On 2., this is a kind offer! Is there some way you'd be able and comfortable sharing some of these with participants in CEA's ongoing career bootcamps? We have a fair few software engineers, current or former C-suite types, etc.  (Or is there some way we could connect individuals with you?) Thanks!

If you know anyone who drops out of a PhD, consider suggesting they apply[1] for Effective Dropouts! You're invited too!

  1. ^

    As an organization that strives to have "the values of Effective Altruism plus the inverse of the values of universities", we don't actually have an application process

Hey, the post of mine that you linked is an attempt to simplify this question a lot, here's the TL;DR in my own words:

Instead of considering whether more EAs should E2G or not: apply to a few EA orgs you like, and if any of them accept you, ask them if they'd rather hire you or get some $ amount that is vaguely what you'd donate if you'd E2G. 

 

I think, for various reasons, this would be a better decision-making process than considering what more EAs should do.

3
Vasco Grilo🔸
Hi Yonatan, Would it be better to assume organisations are indifferent between having a person work for them, and receiving what they would pay the person? I think so. It corresponds to the organisations' revealed preferences, and I believe these are more reliable than their stated preferences. Organisations wanting to maximise their own impact (at the expense of global impact) have an incentive to overestimate the money they would have to receive to be happy to let the person go because they know the person could then donate to many other organisations.

On getting a software job in the age of AI tools, I tried to collect some thoughts here.

Similar'ish discussions about Anthropic keep coming up, see a recent one here. (I think it would be better to discuss these things in a central place since parts of the conversation repeat themselves. I don't think a central place currently exists, but maybe you'd prefer to merge over opening a new one)

Perfect Draft Amnesty post!

 

Here are my Draft Amnesty thoughts:

  1. I model many game developers as wanting their users to have fun; probably many of them are gamers themselves. In other words, I don't think it's enough to TRY to make a fun game, we need something more
  2. Naively, if you don't optimize for "people get addicted to your game" then probably people will get addicted to another game? Unless maybe you have a clever idea for some tradeoff EA could do that EA couldn't do?
  3. I don't think you can assume "we can spend X on making a game and make more than X, which we could spend on something else" (I'm not sure you did that, excuse my draft comment if not)

Oh, looking now - my calendar sync is on but none of the Swapcard events appear in my Google Calendar (not meetups, not 1-on-1s) (I synced to Google Calendar before scheduling anything)

Do you have a way to debug it? Otherwise I'll disconnect and re-connect

1
Ivan Burduk
Mine weirdly only shows on my mobile and not on my PC. Something to do with it being on a different calendar. Maybe that's what's happening for you?

Updates [Feb 2025] :

 

Two big things seem to have changed:

  1. Things in the economy make it harder to find jobs (different things in different countries. Perhaps this isn't true for where you live)
  2. AI coding tools are getting pretty good (just wait for March 2025)

 

Also, I've hardly had conversations about this for a few years (maybe 1 per 1-2 months instead of about 5 per week), so I'm less up to date.

 

Also, it seems like nobody really knows what will happen with jobs (and specifically coding jobs) in the age of AI.

If someone can suggest an exam... (read more)

My new default backend recommendation, assuming you are mainly excited about building a website/webapp that does interesting things (as most people I speak to are), and assuming you're happy to put in somewhat more effort but learn things that are (imo) closer to best practices, is Supabase.

 

Supabase mostly handles the backend for you (similarly to Firebase). 

It mostly asks you to configure the DB, e.g "there is a table of students where each row is a student", "there is a table of schools where each row is a school", "the student table has a col... (read more)

Updates from Berkeley 2025:

Google Calendar sync

I can't believe they finally added this feature!

See here: https://app.swapcard.com/settings

Don't forget you can manage your availability

Every time Zvi posts something, it covers everything (or almost everything) important I've seen until then

https://thezvi.substack.com/

Also in audio:

https://open.spotify.com/show/4lG9lA11ycJqMWCD6QrRO9?si=a2a321e254b64ee9

I don't know your own bar for how much time/focus you want to spend on this, but Zvi covers some bar

 

The main thing I'm missing is a way to learn what the good AI coding tools are. For example, I enjoyed this post:

https://www.lesswrong.com/posts/CYYBW8QCMK722GDpz/how-much-i-m-paying-for-ai-productivity-software-and-the

Backend recommendations:

I'm much less confident about this.

  1. If you want a working backend with minimal effort, because actually the React part was the fun thing
    1. Firebase (Firestore) :
      1. This gives you, sort of, an autogenerated backend, if only you describe the structure of your database. I'd mainly recommend this if you're not interested in writing a backend but you still want things to work as if you built an amazing backend.
      2. The main disadvantage is it will be different from what many databases look like.
        1. You could skip the "subscribe for changes" feature and
... (read more)
2
Yonatan Cale
My new default backend recommendation, assuming you are mainly excited about building a website/webapp that does interesting things (as most people I speak to are), and assuming you're happy to put in somewhat more effort but learn things that are (imo) closer to best practices, is Supabase.

Supabase mostly handles the backend for you (similarly to Firebase).

It mostly asks you to configure the DB, e.g "there is a table of students where each row is a student", "there is a table of schools where each row is a school", "the student table has a column called school_id". It can also handle login and permissions just like Firebase.

I think learning to use an SQL database (like Supabase, which uses Postgres behind the scenes) is somewhat harder than a no-SQL database (like Firebase), but SQL databases teach more relevant skills. (FYI this is an opinionated take and some might disagree)
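To make the students/schools example concrete: Supabase itself needs a hosted project, but the same relational structure can be sketched locally with Python's built-in sqlite3 (the table names come from the comment above; the row contents are invented for illustration, and Supabase would give you the equivalent in Postgres):

```python
import sqlite3

# In-memory database standing in for the Postgres DB Supabase hosts for you
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE schools (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE students (
        id INTEGER PRIMARY KEY,
        name TEXT,
        school_id INTEGER REFERENCES schools(id)
    );
    INSERT INTO schools VALUES (1, 'Hogwarts');
    INSERT INTO students VALUES (1, 'Ada', 1);
""")

# The "relational" part: join students to their school via school_id
row = conn.execute("""
    SELECT students.name, schools.name
    FROM students JOIN schools ON students.school_id = schools.id
""").fetchone()
print(row)  # ('Ada', 'Hogwarts')
```

This join-via-foreign-key pattern is the main skill an SQL database teaches that a no-SQL one like Firestore doesn't.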

Tech stack recommendations:

Many people who want to build a side project want to build a website that does something, or an "app" that does something (and could just be a website that can be opened on a smartphone).

So I want to add recommendations to save some searching:

  1. React. You can learn React from their official tutorial, or from an online course. Don't forget to also learn any prerequisites they mention (but you don't need to invent your own prerequisites)
    1. If you want a React competitor, like Vue, that's also ok (assuming you're more excited about it).
      1. C
... (read more)
2
Yonatan Cale
Backend recommendations:

I'm much less confident about this.

  1. If you want a working backend with minimal effort, because actually the React part was the fun thing
    1. Firebase (Firestore) :
      1. This gives you, sort of, an autogenerated backend, if only you describe the structure of your database. I'd mainly recommend this if you're not interested in writing a backend but you still want things to work as if you built an amazing backend.
      2. The main disadvantage is it will be different from what many databases look like.
        1. You could skip the "subscribe for changes" feature and only use the "read" feature and it will be a bit more realistic, but I don't actually recommend that.
  2. If you want to write a backend in a way that will push you towards best practices, using a technology that is very popular (and specifically documented well for beginners)
    1. Django
    2. A big disadvantage is you'd need to learn some Python, and you might be reading this because you already built a fun React project and you want to get it done without learning lots of new tech
  3. DB:
    1. Postgres is the default normal DB for most use cases. SQLite might be easier for local development (on your laptop), and is missing some features that you'll almost surely not notice, so please don't care about them until a concrete problem comes up.

Things that automatically override this advice:

  1. If you're picking technologies for your startup or something, then remember this wasn't written for you.
  2. If you have a senior developer who will mentor you if you work on some different technology that they know well, then probably go with them.
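For a sense of what "writing a backend" means at the lowest level, and what a framework like Django then handles for you, here's a minimal hand-rolled JSON endpoint using only Python's standard library (the `/students` route and the data are made up for illustration):

```python
import http.server
import json
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # A real backend would route by self.path and read from a DB;
        # here every GET returns the same hardcoded JSON
        body = json.dumps({"students": ["Ada", "Grace"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 asks the OS for any free port
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as the "frontend": fetch and parse the JSON
with urllib.request.urlopen(f"http://127.0.0.1:{port}/students") as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)  # prints {'students': ['Ada', 'Grace']}
```

Routing, database access, auth, and permissions are exactly the parts that Django (or Firebase/Supabase) layers on top of this, which is what the learning curve buys you.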

My frank opinion is that the solution to not advancing capabilities is keeping the results private, and especially not sharing them with frontier labs.

 

((

making sure I'm not missing our crux completely: Do you agree:

  1. AI has a non-negligible chance of being an existential problem
  2. Labs advancing capabilities are the main thing causing that

))

6
Neel Nanda
1 is very true, 2 I agree with apart from the word main, it seems hard to label any factor as "the main" thing, and there's a bunch of complex reasoning about counterfactuals - eg if GDM stopped work that wouldn't stop Meta, so is GDM working on capabilities actually the main thing?

I'm pretty unconvinced that not sharing results with frontier labs is tenable - leaving aside that these labs are often the best places to do certain kinds of safety work, if our work is to matter, we need the labs to use it! And you often get valuable feedback on the work by seeing it actually used in production. Having a bunch of safety people who work in secret and then unveil their safety plan at the last minute seems very unlikely to work to me

I also think that a lot of work that is branded as safety (for example, that is developed in a team called the safety-team or alignment-team) could reasonably be considered to be advancing "capabilities" (as the topic is often divided).

My main point is that I recommend checking the specific project you'd work on, and not only what it's branded as, if you think advancing AI capabilities could be dangerous (which I do think).

9
Neel Nanda
I personally think that "does this advance capabilities" is the wrong question to ask, and instead you should ask "how much does this advance capabilities relative to safety". Safer models are just more useful, and more profitable a lot of the time!

Eg I care a lot about avoiding deception. But honest models are just generally more useful to users (beyond white lies I guess). And I think it would be silly for no one to work on detecting or reducing deception.

I think most good safety work will inherently advance capabilities in some sense, and this is a sign that it's actually doing anything real. I struggle to think of any work I think is both useful and doesn't advance capabilities at all

Zvi on the 80k podcast:

Zvi Mowshowitz: This is a place I feel very, very strongly that the 80,000 Hours guidelines are very wrong. So my advice, if you want to improve the situation on the chance that we all die for existential risk concerns, is that you absolutely can go to a lab that you have evaluated as doing legitimate safety work, that will not effectively end up as capabilities work, in a role of doing that work. That is a very reasonable thing to be doing.

I think that “I am going to take a job at specifically OpenAI or DeepMind for the purposes of

... (read more)
2
Yonatan Cale
I also think that a lot of work that is branded as safety (for example, that is developed in a team called the safety-team or alignment-team) could reasonably be considered to be advancing "capabilities" (as the topic is often divided). My main point is that I recommend checking the specific project you'd work on, and not only what it's branded as, if you think advancing AI capabilities could be dangerous (which I do think).

Something like "noticing we are surprised". Also I think it would be nice to have prediction markets for studies in general, and EAs seem like early adopters (?)

I don't know why this was so downvoted/Xed 

:/

I'm very confused why you were downvoted, this seems like an obviously good idea with no downsides I can see - it makes it much clearer whether a result is surprising or not and prevents hindsight bias

0
Guy Raveh
I downvoted and disagreevoted, though I waited until you replied to reassess. I did so because I see absolutely no gain from doing this, I think the opportunity cost means it's net negative, and I oppose the hype around prediction markets - it seems to me like the movement is obsessed with them but practically they haven't led to any good impact.

Edit: regarding 'noticing we are surprised' - one would think this result is surprising, otherwise there'd be voices against the high amount of funding for EA conferences?

Hey :)

 

Looking at some of the engineering projects (which is closest to my field) :

  • API Development: Create a RESTful API using Flask or FastAPI to serve the summarization models.
  • Caching: Implement a caching mechanism to store and quickly retrieve summaries for previously seen papers.
  • Asynchronous Processing: Use message queues (e.g., Celery) for handling long-running summarization tasks.
  • Containerization: Dockerize the application for easy deployment and scaling.
  • Monitoring and Logging: Implement proper logging and monitoring to track system performance
... (read more)
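As a toy illustration of the caching bullet above: Python's functools.lru_cache gives an in-memory version of "store and quickly retrieve summaries for previously seen papers" (the paper IDs and the summarize function are hypothetical stand-ins for a real model call; production caching would likely use something external like Redis):

```python
from functools import lru_cache

calls = []  # tracks how often the expensive step actually runs

@lru_cache(maxsize=1024)
def summarize(paper_id: str) -> str:
    # Stand-in for a slow summarization-model call
    calls.append(paper_id)
    return f"summary of {paper_id}"

summarize("arxiv:1234")
summarize("arxiv:1234")  # cache hit: the "model" is not called again
print(len(calls))  # 1
```

The same memoization idea, keyed on a stable paper ID, is what the caching layer in the project list would implement at scale.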
2
jacquesthibs
I just saw this; thanks for sharing! Yup, some of these should be able to be solved quickly with LLMs.

If you ever run another of these, I recommend opening a prediction market first for what your results are going to be :) 

cc @Nathan Young 

9
Guy Raveh
What do you think can be gained from that?

Any idea if these capabilities were made public or, for example, only used for private METR evals?

5
defun 🔸
In the case of OpenDevin it seems like the grant is directly funding an open-source project that advances capabilities. I'd like more transparency on this.

I'm not sure how to answer this so I'll give it a shot and tell me if I'm off:

 

Because usually they take more time, and are usually less effective at getting someone hired, than:

  1. Do an online course
  2. Write 2-3 good side projects

 

For example, in Israel pre-covid, having a CS degree (which wasn't outstanding) was mostly not enough to get interviews, but 2-3 good side projects were, and the standard advice for people who finished degrees was to go do 2-3 good side projects. (based on an org that did a lot of this and hopefully I'm representing correctl... (read more)

  1. If Conor thinks these roles are impactful then I'm happy we agree on listing impactful roles. (The discussion on whether alignment roles are impactful is separate from what I was trying to say in my comment)
  2. If the career development tag is used (and is clear to typical people using the job board) then - again - seems good to me.
3
Rebecca
I’m still confused about what the misunderstanding is

My own intuition on what to do with this situation - is to stop trying to change your reputation using disclaimers. 

There's a lot of value in having a job board with high impact job recommendations. One of the challenging parts is getting a critical mass of people looking at your job board, and you already have that.

Hey Conor!

Regarding

we don’t conceptualize the board as endorsing organisations.

And

 contribute to solving our top problems or build career capital to do so

It seems like EAs expect the 80k job board to suggest high impact roles, and this has been a misunderstanding for a long time (consider looking at that post if you haven't). The disclaimers were always there, but EAs (including myself) still regularly looked at the 80k job board as a concrete path to impact.

I don't have time for a long comment, just wanted to say I think this matters.

I don't read those two quotes as in tension? The job board isn't endorsing organizations, it's endorsing roles. An organization can be highly net harmful while the right person joining to work on the right thing can be highly positive.

I also think "endorsement" is a bit too strong: the bar for listing a job shouldn't be "anyone reading this who takes this job will have significant positive impact" but instead more like "under some combinations of values and world models that the job board runners think are plausible, this job is plausibly one of the highest impact opportunities for the right person".

4
Rebecca
What are the relevant disclaimers here? Conor is saying 80k does think that alignment roles at OpenAI are impactful. Your article mentions the career development tag, but the roles under discussion don’t have that tag right?

My own intuition on what to do with this situation - is to stop trying to change your reputation using disclaimers. 

There's a lot of value in having a job board with high impact job recommendations. One of the challenging parts is getting a critical mass of people looking at your job board, and you already have that.

So when a person gathers sticks from the forest for their own use — that counts as ‘consumption’

How could one measure consumption that includes things like this? And how would you pick a dollar value for how much the stick gathering was worth?

Paul Graham about getting good at technology (bold is mine):

How do you get good at technology? And how do you choose which technology to get good at? Both of those questions turn out to have the same answer: work on your own projects. Don't try to guess whether gene editing or LLMs or rockets will turn out to be the most valuable technology to know about. No one can predict that. Just work on whatever interests you the most. You'll work much harder on something you're interested in than something you're doing because you think you're supposed to.

If you're

... (read more)

Linking to Zvi's review of the podcast:

https://thezvi.wordpress.com/2024/04/15/monthly-roundup-17-april-2024/

Search for:

Will MaCaskill went on the Sam Harris podcast

 

It's a negative review, but opinions are Zvi's, I didn't hear the podcast myself.

do you have a rough guess at what % this is a deal breaker for?

It's less of "%" and more of "who will this intimidate".

Many of your top candidates will (1) currently be working somewhere, and (2) will look at many EA aligned jobs, and if many of them require a work trial then that could be a problem.

(I just hired someone who was working full time, and I assume if we required a work trial then he just wouldn't be able to do it without quitting)

 

Easy ways to make this better:

  1. If you have flexibility (for example, whether the work trial is local or remote
... (read more)

I recommend adding "Sam Altman" to the title, it can act as a TLDR. The current phrasing has a bit of a "click here to know more" vibe for me (like an ad) (probably unintentionally)

3
Will Howard🔹
Personally I think the other members are actually the bigger news here, seeing as Sam being added back seemed like a foregone conclusion (or at least, the default outcome, and him not being added back would have been news). But anyway, my goal was just to link to the post without editorialising too much so that people can discuss it on the forum. For this I think a policy of copying the exact title from the article is good in general.

1.a and b.

I usually ask for feedback, and often it's something like “Idk, the vibe seemed off somehow. I can't really explain it.” Do you know what that could be?

This sounds like someone who doesn't want to actually give you feedback, my guess is they're scared of insulting you, or being liable to something legal, or something like that.

My focus wouldn't be on trying to interpret the literal words (like "what vibe") but rather making them comfortable to give you actual real feedback. This is a skill in itself which you can practice. Here's a draft to maybe... (read more)

5
Dawn Drescher
Oh, interesting… I'm autistic and I've heard that autistic people give off subtly weird “uncanny valley”–type vibes even if they mask well. So I mostly just assume it's that. Close friends of mine who surely felt perfectly free to tell me anything were also at a loss to describe it. They said the vibes were less when I made a ponytail rather than had open hair, but they couldn't describe it. (Once I transition more, I hope people will just attribute the vibes to my probably-unfortunately-slightly-imperfect femininity and not worry about it. ^.^ I just need to plant enough weirdness lightning rods. xD)

But he was US-based at the time, and I've heard employers in the US are much more careful with giving feedback than around here, so maybe it was just guardedness in that case.

I like your template! I remember another series of interviews where I easily figured out what the problems were (unless they were pretenses). I think I'm quite attuned (by dint of social anxiety) to subtle indications of disappointment and such. When I first mentioned earning to give in an interview, I noticed a certain hesitancy and found out that it's because the person was looking for someone who has an intrinsic motivation for building hardware for supply chain optimization rather than someone who does it for the money. But in other cases I'm clueless, so the template can come into action!

Oh yes, I love this! I think I've done this in virtually every interview simply because I actually didn't know something. One interviewer even asked me whether I know the so-and-so design pattern. I asked what that is, and then concluded that I had never heard of it. Good call too, because that thing turned out to be ungoogleable. Idk whether he made it up or whether it was an invention of his CS professor, but being transparent about such things has served me well. :-D

I think for me it's mostly about what the other people in the room will think about me, not about consequences for me. I'm also afraid

I would be interested in something like this existing for Israel

I have thoughts on how to deal with this. My priors are this won't work if I communicate it through text (but I have no idea why). Still, seems like the friendly thing would be to write it down

 

My recommendation on how to read this:

  1. If this advice fits you, it should read as "ah obviously, how didn't I think of that?". If it reads as "this is annoying, I guess I'll do it, okay...." - then something doesn't fit you well, I missed some preference of yours. Please don't make me a source of annoying social pressure
  2. Again, for some reason this works better w
... (read more)
3
Richard_Leyba_Tejada
"The goal of interviews is not to pass them (that's the wrong goal, I claim). The goals I recommend are: 1. Reducing uncertainty regarding what places will accept you. (so you should get many rejections, it's by-design, otherwise you're not searching well)"

I get very anxious the closer I am to interview day. I started doing mock interviews to practice.

Shifting to reducing uncertainty/research vs passing seems helpful.
4
Dawn Drescher
1.a. and b.: Reframing it like that sounds nice! :-D Seems like you solved your problem by getting shoes that are so cool, you never want to take them off! (I so wouldn't have expected someone to have a problem with that though…) I usually ask for feedback, and often it's something like “Idk, the vibe seemed off somehow. I can't really explain it.” Do you know what that could be?

2. I'm super noncompetitive… When it comes to EA jobs, I find it reassuring that I'm probably not good at making a good first impression because it reduces the risk that I replace someone better than me. But in non-EA jobs I'm also afraid that I might not live up to some expectations in the first several weeks when I'm still new to everything.

3. Haha! Excellent! I should do that more. ^.^

4. You mean as positive reinforcement? I could meet with a friend or go climbing. :-3

5. Aw, yes, spot on. I spent a significant fraction of my time over the course of 3–4 months practicing for Google interviews, and then never dared to apply anyway (well, one recruiter stood me up and I didn't try again with another). Some of the riddles in Cracking the Coding Interview were so hard for me that I could never solve them in 30 minutes, and that scared me even more. Maybe I should practice minimally next time to avoid that.

Thank you so much for all the tips! I think written communication works perfectly for me. I don't actually remember your voice well enough to imagine you speaking the text, but I think you've gotten everything across perfectly? :-D I'll only pounce on amazing opportunities for now and continue GoodX fulltime, but in the median future I'll double down on the interviewing later in 2024 when our funds run out fully. Then I'll let you know how it went! (Or I hope I'll remember to!) For now I have a bunch more entrepreneurial ideas that I want to have at least tried. :-3

Seems to me from your questions that your bottleneck is specifically finding the interview process stressful.

I think there's stuff to do about that, and it would potentially help with lots of other tradeoffs (for example, you'd happily interview in more places, get more offers, know what your alternatives are, ..)

wdyt?

4
Dawn Drescher
That makes a lot of sense! I've been working on that, and maybe my therapist can help me too. It's gotten better over the years, but I used to feel intense shame over mistakes I made or might've made for years after such situations, so that I'm still afraid of my inner critic. Plus I feel rather sick on interview days, which is probably the stress.

TL;DR: The orgs know best if they'd rather hire you or get the amount you'd donate. You can ask them.

I'd apply sometimes, and ask if they prefer me or the next best candidate plus however much I'd donate. They have skin in the game and an incentive to answer honestly. I don't think it's a good idea to try guessing this alone

 

I wrote more about this here, some orgs also replied (but note this was some time ago)

 

(If you're asking for yourself and not theoretically - then I'd ask you if you applied to all (or some?) of the positions that you think a... (read more)

6
Dawn Drescher
Thanks! Yeah, I've included that in the application form in one or two cases in the hope it'll save time (well, not only time – I find interview processes super stressful, so if I'm going to get rejected or decline, I'd like (emotionally) for that to happen as early as possible) but I suppose that's too early. I'll ask about it later like you do. I haven't gotten so far yet with any impact-focused org.

The main reason for this decision is that I failed to have (enough) direct impact.

 

Also, I was working on vague projects (like attempting AI Safety research), almost alone (I'm very social), with unclear progress, during covid, this was bad for my mental health.

 

Also, a friend invited me to join working with him, I asked if I could do a two week trial period first, everyone said yes, it was really great, and the rest is (last month's) history

Yeah, I think maybe seeing a post like this would have helped me transition earlier too, now that you say so

I might disagree with this. I know, this is controversial, but hear me out (and only then disagree-vote :P )

 

So,

  1. Some jobs are 1000x+ more effective than the "typical" job. Like charities
  2. So picking one of the super-impactful ones matters, compared to the rest. Like charities
  3. But picking something that is 1x or 3x or 9x doesn't really matter, compared to the 1000x option. (like charities)
  4. Sometimes people go for a 9x job, and they sacrifice things like "having fun" or "making money" or "learning" (or something else that is very important to them). This is
... (read more)
2
Dawn Drescher
Haha! Where exactly do you disagree with me? My mind autocompleted that you'd proffer this objection:

If you work for a 9x job, chances are that you're in an environment where most employees are there for altruistic reasons but prioritize differently so that they believe that the job is one of the best things you can do. Then you'll be constantly exposed to social pressure to accept a lower salary, less time off, more overtime, etc., which will cut into the donations, risks burnout, and reduces opportunities to learn new skills.

What do you think? I'm a bit worried about this too and would avoid 9x jobs where I suspect this could happen. But having a bunch of altruistic colleagues sounds great otherwise. :-D I think I will need to aim for something a bit above background economic growth levels of good to pacify my S1 in the long run. ^.^

I quit trying to have direct impact and took a zero-impact tech job instead.

I expected to have a hard time with this transition, but I found a really good fit position and I'm having a lot of fun.

I'm not sure yet where to donate extra money. Probably MIRI/LTFF/OpenPhil/RethinkPriorities.

I also find myself considering using money to try fixing things in Israel. Or maybe to run away first and take care of things and people that are close to me. I admit, focusing on taking care of myself for a month was (is) nice, and I do feel like I can make a difference with E2G.

(AMA)

-6
Peter Wildeford
2
jknowak
What were the main reasons for this decision? Was this motivated by how much you could earn in a typical zero-impact tech job? I mean - would you still "quit trying to have direct impact" if your zero-impact tech job wouldn't leave you with much extra money to donate?
2
Linch
Congrats Yonatan! Good luck with your work and I hope you stay safe out there!
2
ChrisSmith
Thanks for sharing! I occasionally worry that I'd struggle emotionally to go back to E2G/most of my impact being via donations, so this is a helpful anecdatum.
5
Dawn Drescher
Yeah, ETG seems really strong to me at the moment! What do you think is a good threshold for the average EA in terms of annual USD donations that they can make at which they should seriously consider ETG? 
5
Ozzie Gooen
Congrats Yonatan! Good luck deciding where to donate! Seems like there are a lot of good options now. 

Thank you very much for splitting this up into sections in addition to posting the linkpost itself

3
T_W
Anytime :) I didn't do much, but glad to know it was helpful because I was debating whether to continue trying to organize for future stuff

Hey, is it a reasonable interpretation that EAIF is much much more interested in growing EA than in supporting existing EAs?

(I'm not saying this is a mistake)

 

P.S

Here are the "support existing EAs" examples I saw:

  • "[funding a] PhD student to attend a one-month program" [$100k tier] - this seems like a very different grant than the other examples, I'm even surprised to see this under EAIF rather than LTFF
  • "A shared workspace for the EA community" [$5M tier] - totally supports existing EAs
  • "an open-source Swapcard alternative" [$10M tier] - I'm surprised this isn't under CEA

Hey, just saying explicitly that I linked to opinions of other people, not my own.

(and I'm suggesting that you reply there if you have questions for them)

2
NickLaing
Thanks, I'm interested that you think occupation is a real possibility, with another leadership taking over control - that would mean a battle and complete takeover, I suppose; it's hard to imagine Hamas surrendering easily...

AMA about Israel here:
https://www.lesswrong.com/posts/zJCKn4TSXcCXzc6fi/i-m-a-former-israeli-officer-ama

Against "the burden of proof is on X"

Instead, I recommend: "My prior is [something], here's why".

I'm even more against "the burden of proof for [some policy] is on X" - I mean, what does "burden of proof" even mean in the context of policy? But hold that thought.

 

An example that I'm against:

"The burden of proof for vaccines helping should be on people who want to vaccinate, because it's unusual to put something in your body"

I'm against it because 

  1. It implicitly assumes that vaccines should be judged as part of the group "putting something in your
... (read more)
1
titotal
So, I'll give two more examples of how burden of proof gets used typically:

1. You claim that you just saw a unicorn ride past. I say that the burden of proof is on you to prove it, as unicorns do not exist (as far as we know).
2. As prime minister, you try and combat obesity by taxing people in proportion to their weight. I say that the burden of proof is on you to prove that such a policy would do more good than harm.

I think in both these cases, the statements made are quite reasonable. Let me try to translate the objections into your language:

1. My prior of you seeing a unicorn is extremely low, because unicorns do not exist (as far as we know).
2. My prior of this policy being a good idea is low, because most potential interventions are not helpful.

These are fine, but I'm not sure I prefer either of these. It seems like the other party can just say "well my priors are high, so I guess both our beliefs are equally valid".

I think "burden of proof" translates to "you should provide a lot of proof for your position in order for me or anyone else to believe you". It's a statement of what people's priors should be.
1
Azad Ellafi
I've always viewed burden of proof as a dialectical tool. To say one has the burden of proof is to say that they meet the following set of necessary and jointly sufficient conditions:

1. You've made a claim.
2. You're attempting to convince another of the claim.

If so, they have the obligation in the discussion to provide justification for the claim. If (1) isn't the case, then of course you don't have any burden to provide justification. If (2) isn't the case (say, everyone already agrees with the claim, or someone just wants your opinion on something), it's not clear to me you have some obligation to provide justification either. On this account, it's not like burden of proof talk favors a side. And I'm not sure it implicitly assumes anything or is a conversation stopper. So maybe we can "keep burden of proof talk" by using this construal while also focusing more on explicit discussion of priors. Idk, just a thought I had while reading this.

I agree that the question of "what priors to use here" is super important.

For example, if someone chose priors like "we usually don't bring new, more intelligent life forms to live with us, so the burden of proof is on doing so" - would that be valid?

Or if someone said "we usually don't enforce pauses on writing new computer programs" - would THAT be valid?

imo: the question of "what priors to use" is important and not trivial. I agree with @Holly_Elmore that just assuming the priors here is skipping over some important stuff. But I disagree that "... (read more)

6
Holly Elmore ⏸️ 🔸
*As far as my essay (not posted yet) was concerned, she could have stopped there, because this is our crux.
5
Davidmanheim
In a debate, which is what was supposed to be happening, the point is to make claims that either support or refute the central claim. That's what Holly was pointing out - this is a fundamental requirement for accepting Nora's position. (I don't think that this is the only crux - "AI Safety is gonna be easy" and "AI is fully understandable" are two far larger cruxes, but they largely depend on this first one.)

Hey Alex :)

1.

I don't think it's possible to write a single page that gives the right message to every user

My own attempt to solve this is to have the article MAINLY split up into sections that address different readers, which you can skip to.

 

2.

the second paragraph visible on that page is entirely caveat.

2.2. [edit: seems like you agree with this. TL;DR: too many caveats already] My own experience from reading EA material in general, and 80k material specifically, is that there are going to be lots of caveats which I didn't (and maybe still don't) know h... (read more)

2
alex lawsen
I don't think it's worth me going back and forth on specific details, especially as I'm not on the web team (or even still at 80k), but these proposals are different to the first thing you suggested. Without taking a position on whether this structure would overall be an improvement, it's obviously not the case that just having different sections for different possible users ensures that everyone gets the advice they need.

For what it's worth, one of the main motivations for this being an after-hours episode, which was promoted on the EA Forum and my Twitter, is that I think the mistakes are much more common among people who read a lot of EA content and interact with a lot of EAs (which is a small fraction of the 80k website readership). The hope is that people who're more likely than a typical reader to need the advice are the people most likely to come across it, so we don't have to rely purely on self-selection.

voted for calendar sync, may the world be sane again!!
