plex

@ Independent, 14+ AI Safety projects
305 karma · Joined Aug 2022
plex.ventures/

Comments (50)

plex
3mo10

Oops, forgot to share edit access; I've sent you an invitation to the subfolder, so you should be able to move it now. I can also copy it if you'd prefer, but I think having one canonical version is best.

plex
3mo30

This seems super useful! Would you be willing to let Rob Miles's aisafety.info use this as seed content? Our backend is already in Google Docs, so if you moved those files into this drive folder we could rename them to have question-shaped titles and they'd be synced in and kept up to date by our editors. Or we could copy them, if you'd prefer to keep your originals separate.

plex
4mo64

I'm curious what the thing you call EigenKarma is: is it the way people with more karma have weightier votes, or is it something with a global eigenvector?
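
For anyone reading along who hasn't met the eigenvector framing, here is a minimal, hypothetical sketch of what a "global eigenvector" karma scheme could look like: trust scores taken as the stationary vector of a normalized vote matrix, PageRank-style. The matrix, damping factor, and names below are illustrative assumptions, not EigenKarma's actual implementation.

```python
# Hypothetical sketch of a "global eigenvector" karma scheme, PageRank-style.
# The vote matrix, damping factor, and names are illustrative assumptions,
# not EigenKarma's actual implementation.
import numpy as np

def eigen_karma(votes: np.ndarray, damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """votes[i, j] = total upvote weight user i has given user j."""
    n = votes.shape[0]
    # Normalize each voter's outgoing votes so nobody dominates by sheer volume;
    # voters with no outgoing votes fall back to a uniform row.
    row_sums = votes.sum(axis=1, keepdims=True)
    transition = np.divide(votes, row_sums,
                           out=np.full_like(votes, 1.0 / n),
                           where=row_sums > 0)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        # Power iteration: karma flows along normalized votes, mixed with a uniform prior.
        scores = (1 - damping) / n + damping * (scores @ transition)
    return scores / scores.sum()

# Toy example: user 2 receives the most vote weight (and casts none),
# so it ends up with the highest global score (roughly [0.25, 0.26, 0.49]).
votes = np.array([[0.0, 1.0, 2.0],
                  [1.0, 0.0, 3.0],
                  [0.0, 0.0, 0.0]])
print(eigen_karma(votes))
```

The uniform fallback row stands in for a voter with no outgoing votes, and the damping term keeps the iteration from getting stuck in closed voting loops.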

plex
4mo20

I have put some thought into the privacy aspect, and there are ways to make it non-trivial or even fairly difficult to extract someone's trust graph, but nothing which actually hides it perfectly. That's why the network would have to be opt-in, and likely would not cover negative votes.
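
For illustration, one mitigation in that direction (a sketch under assumptions, not necessarily the design I have in mind) is to serve only noised lookup scores, so a single query is weak evidence about any particular trust edge:

```python
# Illustrative only: serve noised trust-lookup scores, so a single query is weak
# evidence about any particular edge in someone's trust graph. Repeated queries
# still leak information, which is why the graph can't be hidden perfectly.
from typing import Optional
import numpy as np

def noised_lookup(true_score: float, scale: float = 0.05,
                  rng: Optional[np.random.Generator] = None) -> float:
    """Return the trust score plus Laplace noise, clipped back into [0, 1]."""
    rng = rng or np.random.default_rng()
    return float(np.clip(true_score + rng.laplace(0.0, scale), 0.0, 1.0))

print(noised_lookup(0.8))  # e.g. ~0.77 or ~0.84, varying per query
```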

I'd be interested to hear the unpacked version of your worries about "gatekeeping, groupthink and polarisation".

plex
4mo54

Topically, this might be a useful part of a strategy to help the EA Forum stay focused on the most valuable things, if people had the option to sync their own vote history with the EigenKarma Network and use EKN lookup scores to influence the display and prioritization of posts on the front page. We'd be keen to collaborate with the EAF team to make this happen, if the community is excited.
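
As a rough, purely illustrative sketch of what that could look like, a front-page ranking could blend a post's existing karma with a per-viewer EKN lookup score. The lookup function, weighting, and field names below are assumptions rather than a real EKN or Forum API.

```python
# Purely illustrative sketch: blend a post's forum karma with an EigenKarma
# Network lookup score for the viewer when ordering the front page. The
# ekn_lookup function, weighting, and field names are assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    karma: int
    author_id: str

def ekn_lookup(viewer_id: str, author_id: str) -> float:
    """Stand-in for an EKN query: how much the viewer's trust graph (seeded from
    their synced vote history) trusts this author, on a 0..1 scale."""
    fake_scores = {("alice", "bob"): 0.9, ("alice", "carol"): 0.2}
    return fake_scores.get((viewer_id, author_id), 0.1)

def rank_front_page(viewer_id: str, posts: list, ekn_weight: float = 0.5) -> list:
    # Higher combined score floats a post toward the top; the 100x factor just
    # puts the 0..1 trust score on a karma-like scale for this toy example.
    def combined(post: Post) -> float:
        return (1 - ekn_weight) * post.karma + ekn_weight * 100 * ekn_lookup(viewer_id, post.author_id)
    return sorted(posts, key=combined, reverse=True)

posts = [Post("Low karma, trusted author", 12, "bob"),
         Post("High karma, unknown author", 40, "carol")]
for post in rank_front_page("alice", posts):
    print(post.title)  # the trusted author's post ranks first for this viewer
```

The ekn_weight knob would let the Forum tune how much personalized trust shifts the ordering relative to plain karma.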

plex
4mo21

Awesome! I'm glad to see these two communities connecting, I think there's a lot of potential for cross-pollination. You might be interested in this thread, and this post. I'd love to see these ideas popularized in EA, as I think people who have flexibility and agency can achieve great things.

The tool looks great! One small suggestion: merge the cells containing the link with the ones to the right of it, so that the link is clickable :)

plex
4mo10

My approach to addressing "AI can be technical but it’s not clear how much of that you need to know, AKA 'But I don’t program'" is running monthly Alignment Ecosystem Development Opportunity Calls, where I and others can quickly pitch lots of different technical and non-technical projects to improve the alignment ecosystem.

And for "There’s a lot of jargon and it’s not always well explained, AKA 'Can you explain that again… but like I’m 5'", aisafety.info is trying to solve exactly that, along with increasing Rob Miles's output by bringing research volunteers on board.

plex
4mo93

Seems worthwhile, and good for an extra reason: it allows us to train AIs only on data from before there was significant AI-generated content, which might mitigate some safety concerns around AIs influencing future AIs' training.

plex
4mo10

My understanding is that they asked someone to register domains for the things that had an EAF tag entry, which accidentally included some that didn't really make sense. The full list includes a bunch of names of individuals, which were removed from EA domains.

I think ideally we should reach out to all the individuals and orgs that CEA bought domains relating to and offer them those domains, but it isn't a super high priority for me.

plex
5mo20

Thanks! And yeah, it's an incredibly powerful tool too! It's a relational database integrated with an awesome editor and scripting engine, yet all absurdly easy to use. The homepage is actually carrd.co, though, with Coda embedded.

You can add them via the form, or you can DM me or post here to request a bulk addition if there are more than is easy to add through the form. You'll remain the owner of them, unless you'd like to ask Ben West for CEA to be the custodian.
