plex

@ Independent, 14+ AI Safety projects
364 karma · Joined Aug 2022
plex.ventures/

Comments (55)

AI Safety Support has long been a remarkably active, in-the-trenches group patching the many otherwise gaping holes in the ecosystem (someone who's available to talk and help people get a basic understanding of the lie of the land from a friendly face, resources that keep people informed in ways which were otherwise neglected, support around fiscal sponsorship and coaching), especially for people trying to join the effort who don't have a close connection to the inner circles, from where it's less obvious that these things are needed.

I'm sad to see that the supporters were not themselves adequately supported to keep up this part of the mission, but I'm excited by JJ's new project: Ashgro.

I'm also excited by AI Safety Quest stepping up as a distributed, scalable, grassroots version of several of AI Safety Support's main duties, which are ever more keenly needed given the flood of people who want to help as awareness spreads.

running a big AI Alignment conference

Would you like the domain aisafety.global for this? It's one of the ones I collected on ea.domains in the hope that someone would make use of them one day.

Disagree with the example. Human teenagers spend quite a few years learning object recognition and other skills necessary for driving before they ever drive, and I'd bet at good odds that an end-to-end training run of a self-driving car network is shorter than even the driving lessons a teenager goes through to become proficient at a similar level to the car. Designing the training framework, no, but the comparator there is evolution's millions of years, so that doesn't buy you much.

Your probabilities are not independent; your estimates mostly flow from a world model which seems to me to be flatly and clearly wrong.

The plainest examples seem to be assigning:

We invent a way for AGIs to learn faster than humans: 40%
AGI inference costs drop below $25/hr (per human equivalent): 16%

despite current models already learning vastly faster than humans (an LLM's training run takes far less time than a human lifetime and covers vastly more data), current systems nearing AGI, and inference already being dramatically cheaper and still plummeting with algorithmic improvements. There is a general factor of progress, where progress leads to more progress, which you seem to be missing in the positive factors. On the negative side, a derailment that delays things enough to push us out that far would need to be extreme, on the order of an all-out nuclear exchange, given more reasonable models of progress.

I'll leave you with Yud's preemptive reply:

Taking a bunch of numbers and multiplying them together causes errors to stack, especially when those errors are correlated.
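
To make that concrete, here's a minimal Monte Carlo sketch (my illustration, with an invented latent-factor model and arbitrary thresholds, not anything from the post) of how a shared general factor of progress makes the product of marginal probabilities a bad estimate of the joint probability:

```python
# Minimal Monte Carlo sketch: events that share a common cause (a latent
# "general factor of progress") are positively correlated, so multiplying
# their marginal probabilities badly underestimates the joint probability.
# The thresholds and model are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

progress = rng.standard_normal(n)        # shared latent factor
thresholds = [0.0, 0.5, 1.0, 1.5]        # arbitrary cutoffs, one per "step"
events = np.array([progress + rng.standard_normal(n) > t for t in thresholds])

marginals = events.mean(axis=1)          # P(step_i happens), estimated
naive_product = marginals.prod()         # what multiplying the numbers assumes
joint = events.all(axis=0).mean()        # what actually happens jointly

print(f"product of marginals: {naive_product:.3f}")
print(f"actual joint probability: {joint:.3f}")
# The joint comes out several times larger than the naive product, because
# the same underlying progress drives every step.
```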

Nice! Glad to see more funders entering the space, and excited to see the S-process rolled out to more grantmakers.

Added you to the map of AI existential safety:

Oops, I forgot to share edit access. I've sent you an invitation to the subfolder, so you should be able to move it now. I can also copy it if you'd prefer, but I think having one canonical version is best.

This seems super useful! Would you be willing to let Rob Miles's aisafety.info use this as seed content? Our backend is already in Google Docs, so if you moved those files into this Drive folder we could rename them to have question-shaped titles, and they'd be synced in and kept up to date by our editors; or we could copy them if you'd like to keep your originals separate.

I'm curious what the thing you call EigenKarma is: is it the way people with more karma have more heavily weighted votes, or is it something with a global eigenvector?
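
In case it helps pin down the question, here's a toy sketch of the global-eigenvector interpretation I have in mind (the trust matrix, damping factor, and everything else here are invented for illustration, not a claim about how EigenKarma is actually implemented):

```python
# Toy "karma as a global eigenvector": treat upvotes as a weighted trust
# graph and take its principal eigenvector, PageRank-style. All numbers
# are made up for illustration.
import numpy as np

# trust[i][j] = total vote weight user i has given user j
trust = np.array([
    [0.0, 3.0, 1.0],
    [2.0, 0.0, 4.0],
    [1.0, 1.0, 0.0],
])

# Row-normalise so each user distributes a fixed budget of trust.
transition = trust / trust.sum(axis=1, keepdims=True)

damping = 0.85                     # standard PageRank-style damping
n = len(trust)
scores = np.full(n, 1.0 / n)       # start from a uniform distribution
for _ in range(100):               # power iteration; converges fast here
    scores = (1 - damping) / n + damping * transition.T @ scores

print(scores / scores.sum())       # one global karma score per user
```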

I have put some thought into the privacy aspect, and there are ways to make it non-trivial or even fairly difficult to extract someone's trust graph, but nothing which actually hides it perfectly. That's why the network would have to be opt-in, and likely would not cover negative votes.

I'd be interested to hear the unpacked version of your worries about "gatekeeping, groupthink and polarisation".

Topically, this might be a useful part of a strategy to help the EA Forum stay focused on the most valuable things: if people had the option to sync their own vote history with the EigenKarma Network, EKN lookup scores could influence the display and prioritization of posts on the front page. We'd be keen to collaborate with the EAF team to make this happen, if the community is excited.
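
As a very rough sketch of the kind of integration I mean (the `ekn_lookup` function, its scores, and all the names here are hypothetical, not the real EKN API):

```python
# Hypothetical sketch of EKN-influenced front-page ranking: each upvote
# counts in proportion to how much the viewer's trust graph endorses the
# voter. All names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    karma: int
    upvoters: list[str]

def ekn_lookup(viewer: str, voter: str) -> float:
    """Placeholder for an EKN query: trust `viewer` places in `voter`, 0..1."""
    fake_scores = {("alice", "bob"): 0.9, ("alice", "carol"): 0.6}
    return fake_scores.get((viewer, voter), 0.05)

def frontpage_score(viewer: str, post: Post) -> float:
    # Trust-weighted vote count; a real deployment would likely blend
    # this with plain karma rather than replace it outright.
    return sum(ekn_lookup(viewer, voter) for voter in post.upvoters)

posts = [
    Post("Cause prioritisation update", karma=40, upvoters=["bob", "carol"]),
    Post("Meta drama thread", karma=60, upvoters=["dave", "erin"]),
]
for post in sorted(posts, key=lambda p: frontpage_score("alice", p), reverse=True):
    print(post.title)
```

Here the lower-karma post outranks the higher-karma one for alice, because her trust graph endorses its voters.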
