Low-key longtermism seems like a superb framing to me. Existing framings seem to have significant risk related to radicalization, elitism, and estrangement that you also touch upon.
Framing it for the grandkids is a great idea since it both avoids the word "longtermism" and appeals to basically everyone. There might be risks of non-specificity, so we'll probably need to experiment with different wordings, though this seems like an appealing starting point.
Especially when explaining longtermism to the parents et al.
[disclaimer: I work with Jonathan]
Thank you very much for organizing this and I have of course donated in all the ways possible! Good luck and great work, Jonas and co. I'm excited to see the results.
Based on our previous conversations, I'm curious whether you mean the /open page or aisafetyideas.com. For the /open page: no, I have not talked with any major orgs about this, and it's not a tool we will be offering. For aisafetyideas.com, the focus there is the result of a lot of interviews and discussions with people in the field :)
Though it is also not the only thing we are focusing on at the moment!
I wonder if you have anything to add on the point of secrecy and the AI safety and EA community's attitudes toward info hazards. Are we building a community that automatically believes both that the risks are higher and that the competition is fiercer, because organizations (e.g. MIRI) cry wolf while keeping why they cry wolf relatively secret (i.e. their experiments in making aligned AI)? I don't know what my own opinion on this is, but would you argue for a more open policy, given these insights?
A wonderful post and thank you for sharing this inside view, Linch!
"I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make. And I am able to spend more time on making those best grants go better, by working on active grantmaking or by advising grantees in various ways." - Buck
It's very interesting to see the correlations between VC funding and grants from EA organizations (and both can probably learn from each other). As Reid Hoffman mentions in Blitzscaling (I cannot find the quote a...
Glad you like it! I can imagine a few, but from the platforms I've heard from, it usually gets stuck on setting up the existing data infrastructure and on the data/software capabilities needed to build it. I hope that emphasizing the value of an impact writeup mitigates this, however.
We welcome anyone to answer the survey, including people who would describe themselves as "associated with AI safety research" in any capacity.
This is absurdly awesome, and it's wonderful to read about this camaraderie! As Ben Brown alludes to, I believe it has a disproportionately large positive effect compared to what many might expect from its strict output. So really a good job to you all!
I appreciate this service a lot and use it basically every day! So thank you for making it happen 💪🏽 It's awesome to have alternative options for consuming EA content besides surfing through the forums, and it's even more awesome when that option just requires listening!
I had an absolutely wonderful conversation with Yonatan, and I was quite surprised at how effectively we could debug what I actually wanted and found interesting. As an entrepreneur, it was especially refreshing to get a very no-BS, "consultancy"-like therapy session, and I can see this approach helping an immeasurable amount for many people. And for that matter, I'd like to learn it myself! 😊
I couldn't agree more. It reminds me of Orwell's "Politics and the English Language" and his guidelines for good writing:
(i) Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
(ii) Never use a long word where a short one will do.
(iii) If it is possible to cut a word out, always cut it out.
(iv) Never use the passive where you can use the active.
(v) Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
(vi) Break any of these rules sooner than say anything outright barbarous.
Great proposal! I was just sent this post today, but we are already working on something along these lines and will publish more about it during the coming weeks (including the web app part).
We are currently working on a spreadsheet-based MVP that I'll share very soon, on which we are getting feedback about the structure and content. If anyone wants to be part of a user interview, write down their ideas, or just discuss the idea platform itself, my calendar is wide open and you can send me an email at firstname.lastname@example.org at any time.
Additionally...