All of Esben Kran's Comments + Replies

Low-key Longtermism

Low-key longtermism seems like a superb framing to me. Existing framings seem to carry significant risks of radicalization, elitism, and estrangement, which you also touch upon.

Framing it for the grandkids is a great idea since it both avoids the term "longtermism" and appeals to basically everyone. There might be risks of non-specificity, so we'll probably need to experiment with different wordings, though this seems like an appealing starting point.

Especially when explaining longtermism to the parents et al.

[disclaimer: I work with Jonathan]

Jonathan Rystrom (16d)
Thanks for your kind words, Esben! If anything comes out of this post, I agree that it should be a renewed focus on better framings - though James does raise some excellent points about the cost-effectiveness of this approach :))
Announcing giveffektivt.dk

Thank you very much for organizing this; I have of course donated in all the ways possible! Good luck and great work, Jonas and co. I'm excited to see the results.

Jonas Lindeløv (2mo)
Thank you, Esben! We have a great team, so it looks like we can put a lot of continued effort into maximizing the effect of charitable giving in Denmark.
Everyone - show us your numbers

Based on our previous conversations, I'm curious whether you mean the /open page or aisafetyideas.com. For the /open page: no, I have not talked with any major orgs about this, and it is not a tool we will be offering. For aisafetyideas.com, the focus there is the result of a lot of interviews and discussions with people in the field[1] :)

  1. ^

    Though it is also not the only thing we are focusing on at the moment!

Yonatan Cale (2mo)
Ah, I was talking about the current post, "/open"
Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

I wonder if you have any addenda on the point of secrecy and the AI safety and EA community's thinking about info hazards. Are we building a community that automatically believes both the risks and the competition to be higher because organizations (e.g. MIRI) cry wolf while keeping the reasons they cry wolf relatively secret (i.e. their experiments in making aligned AI)? I don't know what my own opinion on this is, but would you argue for a more open policy, given these insights?

HaydnBelfield (2mo)
Thanks for this. I'm more counselling "be careful about secrecy" rather than "don't be secret". Especially be careful about secret sprints, being told you're in a race but can't see the secret information why, and careful about "you have to take part in this secret project". On the capability side, the shift in AI/ML publication and release norms towards staged release (not releasing full model immediately but carefully checking for misuse potential first), structured access (through APIs) and so on has been positive, I think. On the risks/analysis side, MIRI have their own “nondisclosed-by-default” policy [https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section3] on publication. CSER and other academic research groups tend towards more of a "disclosed-by-default” policy.
Some unfun lessons I learned as a junior grantmaker

A wonderful post and thank you for sharing this inside view, Linch! 

"I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make.  And I am able to spend more time on making those best grants go better, by working on active grantmaking or by advising grantees in various ways." - Buck

It's very interesting to see the parallels between VC funding and grants from EA organizations (and both can probably learn from each other). As Reid Hoffman mentions in Blitzscaling (I cannot find the quote a.... (read more)

Everyone - show us your numbers

Glad you like it! I can imagine a few, but from the platforms I've heard from, it usually gets stuck on setting up the underlying data infrastructure and on the data/software capabilities needed to build it. I hope that emphasizing the value of an impact writeup mitigates this, however.

Help us find pain points in AI safety

We welcome anyone to answer the survey, including people who would describe themselves as "associated with AI safety research" in any capacity.

EA on r/Place: An Art Project Post-mortem

This is absurdly awesome, and it's wonderful to read about this camaraderie! As Ben Brown alludes to, I believe it has a disproportionately large positive effect compared to what many might expect from its direct output. So really a good job to you all!

New: use The Nonlinear Library to listen to the top EA Forum posts of all time

I appreciate this service a lot and use it basically every day! So thank you for making it happen 💪🏽 It's awesome to see some alternative options to consume EA content than surfing through the forums and it's even more awesome when that option just requires listening!

I’m Offering Free Coaching for Software Developers in the EA community

I had an absolutely wonderful conversation with Yonatan, and one in which I was quite surprised at how effectively we could debug what I actually wanted and find interesting. As an entrepreneur, it was especially refreshing to get a very no-BS, "consultancy"-like therapy session, and I can see this approach helping many people immensely. And for that matter, I'd like to learn it myself! 😊

Why fun writing can save lives: the case for it being high impact to make EA writing entertaining

I couldn't agree more. It reminds me of Orwell's "Politics and the English Language" and his guidelines for good writing:

(i) Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
(ii) Never use a long word where a short one will do.
(iii) If it is possible to cut a word out, always cut it out.
(iv) Never use the passive where you can use the active.
(v) Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
(vi) Break any of these rules sooner than say anything outright barbarous.

[emphasis mine]

There should be an AI safety project board

Great proposal! I was only sent this post today, but we are already working on something along these lines and will publish more about it during the coming weeks (including the web app part).

We are currently working on a spreadsheet-based MVP, which I'll share very soon, and we are getting feedback on its structure and content. If anyone wants to be part of a user interview, write down their ideas, or just discuss the platform idea itself, my calendar is wide open and you can send me an email at esben@apartresearch.com at any time.

Additionally... (read more)