You can give me anonymous feedback here: https://forms.gle/q65w9Nauaw5e7t2Z9
One example is the presence of staff that monitor all interactions in order to enforce certain norms. I've heard that they can seem a bit intimidating at times.
I agree that transparency to the public is really lacking. I happen to know there is an internal justification for this opaqueness, but still believe that there are a lot more details they could be making public without jeopardizing their objectives.
The content in this comment seems really false to me, both in its actual statements and in its overall "color". It could mislead others who are less familiar with actual EAG events and other EA activities.
Below is object-level content pushing back on the above thoughts.
Basically, it's almost physically impossible to monitor a large number of interactions, much less all interactions at EAG:
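As a rough back-of-the-envelope sketch in Python (the ~1,500-attendee figure is cited later in this thread; the staff count is my own assumption for illustration):

```python
# Rough arithmetic: possible 1:1 pairings at one EAG vs. staff capacity.
attendees = 1500  # figure cited elsewhere in this thread
staff = 30        # assumed number of staff/volunteers on duty

possible_pairs = attendees * (attendees - 1) // 2  # n choose 2
print(f"Possible 1:1 pairings: {possible_pairs:,}")                  # 1,124,250
print(f"Pairings per staff member: {possible_pairs / staff:,.0f}")   # ~37,475
```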
While less direct, here are anecdotes about EAG or CEA that seem to suggest an open, normal culture, or something like it:
Yes, Amy's comment is where I got my information/conclusion from.
Yes, you are right, the OP has commented to say she is open to EAGx, and based on this, my comment above about not liking EAGx does not apply.
This seems simplistic and wrong.
In the same way that two human superpowers can't simply make a contract to guarantee world peace, two AI powers could not do so either.
(Assuming an AI safety worldview with the standard unaligned, agentic AIs.) In the general case, each AI will always weigh/consider/scheme to get the other's share of control, and will expect the other is doing the same.
based on their relative power and initial utility functions
It's possible that peace/agreement might come from some sort of "MAD" or game-theoretic situation. But it doesn't mean anything to say it will come from "relative power".
Also, I would be cautious about being too specific about utility functions. I think an AI's "utility function" generally isn't a literal, concrete thing, like a Python function that returns comparisons, but might be far more abstract, and might only appear as emergent behavior. So it may not be something you can rely on for contracting/comparing/negotiating.
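To make the instability concrete, here is a minimal game-theoretic sketch with hypothetical payoffs I've chosen for illustration: if seizing the other's share pays more than honoring the deal no matter what the other side does, the contract is not self-enforcing.

```python
# Minimal sketch of why a contract between two unaligned agents is unstable.
# Payoffs are hypothetical; (row, col) = (agent A's payoff, agent B's payoff)
# for Honor (H) vs. Defect (D) on the contract.
PAYOFFS = {
    ("H", "H"): (3, 3),  # both honor: peaceful split of control
    ("H", "D"): (0, 5),  # B defects: B seizes A's share
    ("D", "H"): (5, 0),  # A defects: A seizes B's share
    ("D", "D"): (1, 1),  # both defect: costly conflict
}

def best_response(opponent_action: str) -> str:
    """Action maximizing agent A's payoff against a fixed opponent action."""
    return max(["H", "D"], key=lambda a: PAYOFFS[(a, opponent_action)][0])

for opp in ["H", "D"]:
    print(f"If the opponent plays {opp}, the best response is {best_response(opp)}")
# Defect dominates either way, so the contract alone guarantees nothing
# without an external enforcement mechanism (the "MAD" point above).
```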
I think the emotional cost of rejection is real and important. I think the post is about feeling like a member of a community, as opposed to acceptance at EAG itself.
It seems the OP didn't want to go to EAGx conferences. This wasn't mentioned in her OP.
Presumably, one reason the OP didn't want to go to EAGx was that she views these events as diluted, or as not having the same value as an EAG.
But that view seems contrary to wanting to expand beyond "elite", highly filtered EAGs. Instead, her choices suggest the issue is a personal one about fairness/meeting the bar for EAG.
The grandparent comment opens a thread criticizing the eliteness of filtered EAG/CEA events. But that doesn't seem consistent with the above.
BTW, I think views where EAGx is "lesser" are disappointing, because in some ways EAGx conferences offer greater counterfactual impact (there are more liminal or nascent EAs).
EAG conference activity has grown dramatically: EAGs now exceed 1,500 attendees, and there are more EAG and EAGx conferences. Expenses and staff have all increased to support many more attendees.
The very CEA people who are responding here (and who are actively recruiting more people for more/larger conferences) presided over this growth in conferences.
I can imagine that the increased size of EAGs faced some opposition. It's plausible to me that the CEA people here actively fought for the larger sizes (and the increased management burden/risk).
From at least some perspectives, this seems like the opposite of "eliteness" and seems important to notice/mention.
This is useful and thoughtful. I will read it and try to update on it (in general life, if not on the forum?). Please continue as you wish!
I want to notify you and others that I don't expect such discussion to materially affect any resulting moderator action; see this comment describing my views on my ban.
Below that comment, I wrote some general thoughts on EA. It would be great if people considered or debated the ideas there.
EA Common Application seems like a good idea
EA forum investment seems robustly good
Dangers from AI are real; moderate timelines are real
There is no substance behind “nanotech”- or “intelligence explosion in hours”-based narratives
It is remarkably bad that there hasn’t been any effective effort to recruit applied math talent from academia (even good students from top 200 schools would be formidable talent)
A major implementation of AI safety is occurring through very highly funded new EA orgs, and this is close to an existential issue for some parts of EA
I think this can be addressed by monitoring talent flows, funding, and new organizations
There is a lack of forum discussion on effective animal welfare
Welfarism isn’t communicated well.
Patterns of communication in wild animal welfare and other areas aren’t ideal.
Weighting suffering by neuron count is not scientific - resolving this might be EA cause X (see the sketch after this list)
Many new institutions should be built in EA animal welfare, an area that has languished from lack of attention.
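As an illustration of the neuron-count weighting criticized above, here is a minimal sketch; the neuron counts are approximate public estimates, and the linear scheme is the assumption being questioned, not an endorsed method:

```python
# Minimal sketch of linear neuron-count weighting (approximate neuron counts).
NEURON_COUNTS = {
    "human": 86e9,    # ~86 billion neurons
    "pig": 2.2e9,     # ~2.2 billion
    "chicken": 2.2e8, # ~220 million
}

def neuron_weight(species: str, reference: str = "human") -> float:
    """Moral weight relative to the reference, assuming suffering scales linearly with neurons."""
    return NEURON_COUNTS[species] / NEURON_COUNTS[reference]

for s in NEURON_COUNTS:
    print(f"{s}: {neuron_weight(s):.4f}")
# Under this assumption a chicken counts ~0.003 of a human, a conclusion
# driven by the weighting scheme itself, not by evidence that suffering
# scales linearly with neuron count.
```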