235 karma · Joined Dec 2020 · Budapest, Kelenföld, Hungary


Hey there! I'm Gergő, the founder of EA Hungary and Budapest AI Safety. 

If you would like to connect with the Hungarian EA/AIS communities please feel free to message me! We host lots of events! :)


Leave anonymous feedback on me here:


Anonymous feedback to EA Hungary here:



Experiments in Local Community Building


Hey Andreas! Thanks for writing this up, it was a really interesting read and I'm glad you shared it! 

Some quick rambling thoughts after reading:

I think some of the distinctions might be semantic: some of what you describe would fall under misuse risk/malicious use, which could indeed be a real problem. (If an AI causes harm because its values are aligned with a malicious human, is it aligned or misaligned overall? I'm not sure, but the human alignment problem seems to be the issue here.) I'm also not sure how to weigh that against the risk of unaligned AI. I think that, given we are nowhere close to solving the alignment problem, people tend to assume that if we have AGI, it will be misaligned "by definition". In terms of s-risks, I would really recommend checking out the work of CLR, as they seem to be the ones who have spent the most time thinking about s-risks. I think they also have a course on s-risks coming up sometime!

Sorry if I'm asking something obvious, but is it fair to assume that all of these internships require US citizenship? (According to GPT there could be exceptions so I thought I would ask you just to be sure)

Thanks for flagging these great opportunities!

Thank you for writing this post, I think I learnt a lot from it (including about things I didn't expect I would, such as waste sites and failure modes in cryonics advocacy - excellent stuff!).

Question for anyone to chip in on:

I'm wondering: if we were to make the "conditional pause system" the post is advocating for universal, would that imply that the alignment community needs to scale up drastically (in terms of number of researchers) to be able to do work similar to what ARC Evals is doing?

After all, someone would actually need to check if systems at a given capability are safe, and as the post argues, you would not want AGI labs to do it for themselves. However, if all current top labs were to start throwing their cutting-edge models at ARC Evals, I imagine they would be quite overwhelmed. (And the demand for evals would just increase over time)

I could see this being less of an issue if the evaluations only need to happen for the models that are the most capable at a given point in time, but my worry would be that as capabilities increase, even if we test the top models rigorously, the second-tier models could still end up doing harm.

(I guess it would also depend on whether you can "reuse" some of the insights you gain from evaluating the top models at a given time on the second-tier models at a given time, but I certainly don't know enough about this topic to know if that would be feasible)

Hey there! Sounds like a great opportunity! Is there an event calendar available? :)

You might be right! My impression was based on talking to a handful of people within community building, about fellowship programs specifically - that might be what explains our different impressions (although I'm sure there are plenty of people who are excited about paid ads within this niche too!)

I plan to write about my experience with buying social media ads in more detail, but I thought I would share some quick thoughts beforehand:

Addressing the elephant in the room

I want to address the general scepticism I sometimes encountered (and used to have) about using paid ads for outreach. I think we have some vague intuition that says, "the type of people who click on ads are not smart or cool”. I want to say that this has not been our experience. A lot of people who joined our programs this way are very talented, motivated and open-minded.

A preliminary look at cost-effectiveness:

I looked at 5 of our social media campaigns promoting our EA/AIS programs, with an overall spend of 1019 USD.

  • These campaigns got us 34 program applicants (~30 USD per applicant).
    • By applicant here I mean people who sign up for the course but don't necessarily start it (e.g. by showing up to the first session).
    • Roughly speaking, our experience has been that some people don't start the program (and we might never hear from them).
    • But those who do are quite motivated and talented, and are more likely to become engaged with our community than people we get from other sources.
  • ~15 of the applicants we got through paid ads became engaged with the group, and I find them quite promising (~68 USD per engaged applicant).
    • Of course, it is hard to tell whether someone is going to engage with us long-term, and I certainly wouldn't claim that all of our applicants will definitely end up pursuing a high-impact career (not to mention all the biases in assessing who we think is promising).
    • By promising I mean something like "new members I'm really excited to have in our group and am very happy to support in their exploration of EA and AIS".
    • Even if only 5% of these people in fact end up in a high-impact role, I still think paid ads would be worth it.
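The per-applicant figures above are simple divisions of total spend by headcount; a minimal sketch of the arithmetic (the spend and counts are from the post, the variable names are mine):

```python
# Back-of-the-envelope cost-per-applicant figures from the campaign data above.
total_spend_usd = 1019  # overall spend across the 5 campaigns
applicants = 34         # people who applied to a program
engaged = 15            # applicants who later became engaged with the group

cost_per_applicant = total_spend_usd / applicants  # ~30 USD
cost_per_engaged = total_spend_usd / engaged       # ~68 USD

print(f"Cost per applicant: {cost_per_applicant:.0f} USD")
print(f"Cost per engaged member: {cost_per_engaged:.0f} USD")
```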

Some caveats

  • There were big differences between campaigns in how cost-effectively they attracted highly motivated participants.
  • The cost per general applicant was between 12 and 47 USD.
  • The cost per highly motivated applicant was between 20 and 142 USD.

Next steps

  • I still need to figure out the reasons behind the big difference in cost-effectiveness and overall take a better look at all our data
  • I will eventually make a longer writeup, with guides on how to make social media ads (assuming I still think it is worth doing)

If you have data or anecdotes to share about your own experience of using ads or want to give feedback please feel free to comment or shoot me an email at gergo@eahungary.com

I don't know enough about your situation to give a confident suggestion, but it sounds like you could benefit a lot from talking to 80k, if you haven't already! (Although it might take some time, I'm not sure about their current capacity)

Hey there! Thanks for sharing this! I also just wanted to flag that the hyperlink of "We asked our WANBAM community what resources they would recommend. Here’s what they said" no longer seems to work (for me)!
