It also appears that the link to ELK in this section is incorrect
Making use of an AI’s internal state, not just its outputs. For example, giving positive reinforcement to an AI when it seems likely to be “honest” based on an examination of its internal state (and negative reinforcement when it seems likely not to be). Eliciting Latent Knowledge provides some sketches of how this might look.
The link to ELK in this bullet point is broken.
It’s not currently clear how to find training procedures that train “giving non-deceptive answers to questions” as opposed to “giving answers to questions that appear non-deceptive to the most sophisticated human arbiters” (more at Eliciting Latent Knowledge).
It may be intended to point here: https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge
This is cool, thanks for writing it!
I also recommend https://www.athenago.com for full-time remote executive assistants.
For more on the criterion of rightness vs. decision procedure distinction, also see https://www.utilitarianism.net/types-of-utilitarianism#multi-level-utilitarianism-versus-single-level-utilitarianism
I don't think my particular VAs have more capacity, but I believe Virtalent has other VAs ready to match with clients.
It's unclear to me whether I’ve just gotten lucky. But with Virtalent you can switch VAs and the minimum commitment is very low, which is why I think the best strategy is just to try.
I like the term "Summit"
Hey Theo - I’m James from the Global Challenges Project :)
Thanks so much for taking the time to write this - we need to think hard about how to do movement building right, and it's great for people like you to flag what you think is going wrong and what you see as pushing people away.
Here’s my attempt to respond to your worries with my thoughts on what’s happening!
First of all, just to check my understanding, this is my attempt to summarise the main points in your post:
We’re missing out on great people as a result of how community building is going at student groups. A stronger version of this claim would be that current CB may be selecting against the people who could contribute most to current talent bottlenecks. You mention four patterns that are pushing people away:
My understanding is that you find patterns (2) and (3) especially concerning. So to elaborate on them, you’re worried about:
You think these worrying patterns are being driven upstream by a strategic mistake of over-optimising for a metric of “highly engaged EAs”. This is a poor choice of metric because:
You then suggest some possible changes that student group leaders could make (here I’m just focusing on changes that SG leaders could do):
Sorry that was such a long summary (and if I missed key parts, please do let me know)! I think you’re making many great points.
Here are some of my thoughts in reply:
Over-optimising on HEAs
Here are some of my thoughts on EA coming across as cult-like:
Other strategy suggestions which I think could improve the status quo:
Thanks again for taking the time to write the post - it seems like it's generated great discussion and that it's something a lot of people agree with :)
UK lawyers https://ignition.law/
Cool! Have you considered turning those notes into a post? Could be a great way for more people to see them.