Charles He

2167 · Joined Jan 2021

Bio

You can give me anonymous feedback here: https://forms.gle/q65w9Nauaw5e7t2Z9

Posts
1

Sorted by New

Comments
782

One example is the presence of staff that monitor all interactions in order to enforce certain norms. I've heard that they can seem a bit intimidating at times.

I agree that transparency to the public is really lacking. I happen to know there is an internal justification for this opaqueness, but still believe that there are a lot more details they could be making public without jeopardizing their objectives. 

The content in this comment seems really false to me, both in its actual statements and in the "color" it carries. It seems like it could mislead others who are less familiar with actual EAG events and other EA activities.

Below is object level content pushing back on the above thoughts.

 

Basically, it's almost physically impossible to monitor a large number of interactions at EAG, much less all of them:

  • Most meetings are privately arranged 1-on-1s, and there are many thousands of these at every conference. Some meetings occur in scheduled events (e.g. speed meetings for people interested in a certain topic).
    • It's not physically possible for CEA staff to hover over all in-person meetings; I don't think there are even enough staff to cover all centrally organized events (trained volunteers are used instead).
    • Also, if someone tried to eavesdrop in this way, it would be immediately obvious (and seem sort of clownishly absurd). 
  • In all venues, there is a "great diversity" of physical environments where people can meet.
    • This includes large, open standing areas, rooms of small or medium size, booths, courtyards. 
    • This includes the ability to walk the streets surrounding the venue (which can be useful for sensitive conversations). 
    • By the way, providing this diversity is intentionally done by the organizers.
  • CEA staff do not control/own the conference venue (they rent it and deal with venue staff, who are generally present throughout).
  • It seems absurd to write this, but covert monitoring of private conversations is illegal, there are literally hundreds of technical people at EA conferences, and I don't think this would go undetected for long.

 

While less direct, here are anecdotes about EAG or CEA that seem to suggest an open, normal culture, or something like it:

  • At one EAGx, the literal conference organizers and leader(s) of the country/city EA group were longtime EAs who actively expressed dislike of CEA due to its bad "pre-Dalton era" existence (before 2019).
    • The fact that they communicated their views openly, and still lead an EAGx and enjoy large amounts of CEA funding/support, seems healthy and open.
  • Someone I know has been approached multiple times at EA conferences by people who are basically "intra-EA activists", for example people who want different financing and organizing structures and are trying to build momentum.
    • The way they approached seemed pretty open, e.g. the place they wanted to meet was public and they spoke reasonably loudly and directly.
    • By the way, some of these people are employed by the canonical EA organizations or think tanks, e.g. they literally have physical offices not far from some of the major, major EA figures.
      • These people shared many details and anecdotes, some of which are hilarious.
      • Everything about these interactions and the existence of these people suggests openness in EA in general
  • On various matters, CEA staff don't agree with other CEA staff, like at all normal, healthy organizations with productive activities and capable staff. The fact that these disputes exist sort of "interrogates the contours" of the culture at CEA and seems healthy.
  • It might be possible and useful to quantify the decline in forum quality (measurement is hard, but it seems plausible to use engagement with promising or established users, and certain voting patterns might be a mark of quality); see the sketch after this list.
  • I think the forum team should basically create/find content, for example by inviting guest writers. The resulting content would be good in itself and in turn might draw in high-quality, object-level discussion.
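To make the quantification idea above concrete, here is a minimal sketch in Python, assuming hypothetical post data; the field names, the "established user" karma threshold, and the vote weighting are all invented for illustration and are not an actual forum metric.

# Crude forum-quality proxy built from voting and engagement data.
# All fields and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author_karma: int  # author's karma at posting time (hypothetical field)
    upvotes: int
    downvotes: int

def quality_proxy(posts: List[Post], established_karma: int = 1000) -> float:
    """Share of net upvotes going to posts by established users.
    A falling value over time *might* indicate declining quality; it is only a proxy."""
    def net(p: Post) -> int:
        return max(p.upvotes - p.downvotes, 0)
    total = sum(net(p) for p in posts)
    if total == 0:
        return 0.0
    established = sum(net(p) for p in posts if p.author_karma >= established_karma)
    return established / total

# Example with made-up numbers: two established authors, one newcomer.
posts = [Post(2500, 40, 5), Post(1800, 15, 2), Post(50, 8, 1)]
print(f"Quality proxy: {quality_proxy(posts):.2f}")  # 0.87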

Yes, Amy's comment is where I got my information/conclusion from. 

Yes, you are right, the OP has commented to say she is open to EAGx, and based on this, my comment above about not liking EAGx does not apply.

This seems basic and wrong. 

In the same way that two human super powers can't simply make a contract to guarantee world peace, two AI powers could not do so either. 

(Assuming an AI safety worldview and the standard unaligned, agentic AIs) in the general case, each AI will always weigh/consider/scheme about getting the other's share of control, and expect that the other is doing the same.

 

based on their relative power and initial utility functions

It's possible that peace/agreement might come from some sort of "MAD" or game-theoretic situation. But it doesn't mean anything to say it will come from "relative power".

Also, I would be cautious about being too specific about utility functions. I think an AI's "utility function" generally isn't a literal, concrete thing, like a Python function that gives comparisons, but might be far more abstract and only appear through emergent behavior. So it may not be something you can rely on to contract/compare/negotiate over.
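To illustrate the contrast, here is a minimal sketch of what a "literal, concrete" utility function would look like; the feature names and the linear scoring are hypothetical, and the point above is precisely that real AI systems probably do not expose anything this legible.

# Hypothetical sketch of a "literal" utility function: a transparent,
# inspectable scoring rule that two agents could compare and contract over.
# The claim above is that real AI preferences are emergent, not legible like this.
from typing import Dict

def utility(state: Dict[str, float], weights: Dict[str, float]) -> float:
    """Linear utility: weighted sum of world-state features."""
    return sum(weights.get(k, 0.0) * v for k, v in state.items())

def prefers(weights: Dict[str, float],
            state_a: Dict[str, float],
            state_b: Dict[str, float]) -> bool:
    """The kind of explicit comparison a negotiated contract would rely on."""
    return utility(state_a, weights) > utility(state_b, weights)

# Made-up example: an agent that values compute share more than energy share.
agent_weights = {"compute_share": 0.7, "energy_share": 0.3}
print(prefers(agent_weights,
              {"compute_share": 0.6, "energy_share": 0.4},
              {"compute_share": 0.5, "energy_share": 0.5}))  # True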

I think the emotional cost of rejection is real and important. I think the post is about feeling like a member of a community, as opposed to acceptance at EAG itself.

 

It seems the OP didn't want to go to EAGx conferences. This wasn't mentioned in her OP.

Presumably, one reason the OP didn't want to go to EAGx was that they view these events as diluted, or as not having the same value as an EAG.[1]

But that view seems contrary to wanting to expand from "elite", highly filtered EAGs. Instead, their choices suggest the issue is a personal one about fairness/meeting the bar for EAG.

 

The grandparent comment opens a thread criticizing eliteness or filtered EAG/CEA events. But that doesn't seem to be consistent with the above.

  1. ^

    BTW, I think views where EAGx conferences are "lesser" are disappointing, because in some ways EAGx conferences have greater opportunities for counterfactual impact (there are more liminal or nascent EAs).

EAG conference activity has grown dramatically, with individual EAGs now exceeding 1,500 people and more EAG and EAGx conferences being held. Expenses and staff have all increased to support many more attendees.

The very CEA people who are responding here (and actively recruiting more people to get more/larger conferences) presided over this growth in conferences.

I can imagine that the increased size of EAGs faced some opposition. It's plausible to me that the CEA people here actively fought for the larger sizes (and the increased management/risk).

In at least a few views, this seems opposite to "eliteness" and seems important to notice/mention.

This is useful and thoughtful. I will read it and try to update on this (in general life, if not on the forum). Please continue as you wish!

I want to notify you and others that I don't expect such discussion to materially affect any resulting moderator action; see this comment describing my views on my ban.

Below that comment, I wrote some general thoughts on EA. It would be great if people considered or debated the ideas there.

EA Common Application seems like a good idea

  • I think a common application seems good, and to my knowledge no one is working on a very high-end, institutional version
  • See something written up here
     

EA forum investment seems robustly good

  • This is one example ("very high quality focus posts")
    • This content empowers the moderators to explore any relevant idea, and helps thousands of people learn and update on key EA thought, develop object-level views of the landscape, and stay grounded.
    • This can justify a substantial service team, such as editors and artists, who can illustrate posts or produce other design work

AI Safety

Dangers from AI are real, and moderate timelines are real

  • AI alignment is a serious issue; AIs can be unaligned and dominate humans, for the reasons most EA AI safety people give
  • One major objection, that severe AI danger correlates highly with intractability, is powerful
    • Some vehement neartermists actually believe in AI risk but don’t engage because of tractability
  • Another major objection to AI safety concerns, which seems very poorly addressed, is AI competence in the real world. This is touched on here and here.
    • This seems important, but relying on a guess that AGI can't navigate the world is bad risk management
  • Several lock-in scenarios fully justify neartermist work.
    • Some considerations in AI safety may even heavily favor neartermist work (if AI alignment tractability is low, lock-in is likely, and lock-in can occur fairly soon)

 

There is no substance behind "nanotech"- or "intelligence explosion in hours"-based narratives

  • These are good as theories/considerations/speculations, but their central place is very hard to justify
  • They expose the field to criticism and dismissal by any number of scientists (skeptics and hostile critics outnumber senior EA AI safety people, which is bad, and recent trends are unpromising)
    • This slows progress. It's really bad that these suboptimal viewpoints have existed for so long, and they damage the rest of EA

 

It is remarkably bad that there hasn’t been any effective effort to recruit applied math talent from academia (even good students from top 200 schools would be formidable talent)

  • This requires relationship building and institutional knowledge (relationships with field leaders, departments, and established professors in applied math/computer science/math/economics/other fields)
  • Taste and presentation are big factors
    • Probably the current choices of math and theories around AI safety or LessWrong are quaint and basically academic poison
      • For example, acausal work is probably pretty bad
      • (Some chance they are actually good, tastes can be weird)
    • Fixation on current internal culture is really bad for general recruitment 
  • The talent pool may be 5x to 50x greater with an effective program

 

A major implementation of AI safety is the creation of very highly funded new EA orgs, and this is close to an existential issue for some parts of EA

  • Note that (not yet stated) critiques of these organizations, like "spending EA money" or "conflict of interest", aren't valid
    • Such critiques are even counterproductive, for example because closely knit EA leaders are the best talent, and these orgs can actually return profit to EA (which, however, produces another issue)
  • It’s fair to speculate that they will exhibit two key traits/activities, probably not detailed in their usually limited public communications:
    • They will often attempt to produce AGI outright or to explore conditions related to it
    • They will always attempt to produce profit
    • (These are generally prosocial)
  • Because these orgs are principled, they will hire EAs for positions whenever possible, with compensation, agency, and culture that are extremely high
    • This has major effects on all EA orgs
  • A concern is that they will achieve neither AI safety nor AGI, and the situation becomes one where EA gets caught up creating rather prosaic tech companies
    • This could result in a bad ecosystem and bad environment (think of a new season of the Wire, where ossified patterns of EA jargon cover up pretty regular shenanigans)
    • So things just dilute down to profit-seeking tech companies. This seems bad:
      • In one case, an AI safety person I spoke to brought up donations from their org, casually and by chance. The donation amount was large compared to EA grants.
        • It's problematic if ancillary donations by EA orgs are larger than EA grantmakers.
      • The "constant job interview" culture of some EA events and interactions will be made worse
    • Leadership talent from one cause area may gravitate to middle management in another—this would be very bad
  • All these effects can actually be positive, and these orgs and cultures can strengthen EA.
     

I think this can be addressed by monitoring of talent flows, funding, and new organizations

  • E.g. a dedicated FTE, with a multi-year endowment, monitors talent in EA and hiring activity
    • I think this person should be friendly (pro AI safety), not a critic
      • They might release reports showing how good the orgs are and how happy employees are
    • This person can monitor and tabulate grants as well, which seems useful.
    • Sort of a census taker or statistician

Animal welfare 

There is a lack of forum discussion on effective animal welfare

  • This can be improved with the presence of people from the main larger EA animal welfare orgs

Welfarism isn’t communicated well

  • Welfarism observes the fact that suffering is enormously unequal among farmed animals, with some experiencing very bad lives
  • It can be very effective to alter this and reduce suffering, compared to focusing on removing all animal products at once
  • This idea is well understood and agreed upon by animal welfare EAs
  • While welfarism may need critique (which it will withstand, as it’s as substantive as impartialism), its omission distorts and wastes thinking, in the same way the omission of impartialism would
    • Anthropomorphism is common (discussions contain emotionally salient points that are different from what fish and land animal welfare experts focus on)
    • Reasoning about prolonged, agonizing experiences is absent (it’s understandably very difficult), yet such experiences are probably the main source of suffering.
       

Patterns of communication in wild animal welfare and other areas aren’t ideal.

  • It should be pointed out that this work involves important foundational background research. Addressing just the relevant animals in human affected environments could be enormously valuable.
  • In difficult or contentious conversations with otherwise altruistic people, it might be useful to be aware of the underlying sentiment that people feel pressured or are having their morality challenged.
    • Moderation of views and exploration is good, and pointing out one's personal history in more regular animal advocacy and other altruistic work is good.
    • Sometimes it may be useful to avoid heavy use of jargon, or applied math that might be seen as undue or overbearing.
  • A consistent set of content would help (web pages seem to be a good format).
    • Showing upcoming work in wild animal welfare would be good, such as explaining foundational scientific work

 

Weighting suffering by neuron count is not scientific - resolving this might be EA cause X

  • EAs often weight by neuron count as a way to calculate suffering (see the sketch after this list). This has no basis in science. There are reasons (unfortunately not settled or concrete) to think smaller animals (mammals and birds) can have levels of pain or suffering similar to humans'.
  • To calibrate, I think most or all animal welfare EAs, as well as many welfare scientists, would agree that simple neuron-count weighting is primitive or wrong
  • Weighting by neuron count has been necessary because it’s very difficult to deal with the consequences of not weighting
  • Weighting by neuron counts is almost codified—its use turns up casually, probably because omitting it is impractical (emotionally abhorrent)
    • Because progress here is blocked for unprincipled reasons, this could probably be “cause X”
    • The alleviation of suffering may be tremendously greater if we remove this artificial and maybe false modifier, and take appropriate action with consideration of the true experiences of the sentient beings. 
  • The considerations about communication and overburdening people apply, and a conservative approach would be good
    • Maybe driving this issue starting from prosaic, well known animals is a useful tactic
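Here is a minimal sketch of the weighting criticized above, using rough, commonly cited neuron-count estimates; the numbers are illustrative only, and nothing in this sketch is an endorsement of the method.

# Sketch of simple neuron-count weighting of suffering (the practice criticized above).
# Neuron counts are rough, commonly cited estimates, used only to show how strongly
# this scheme discounts smaller animals.
NEURONS = {
    "human": 86_000_000_000,  # roughly 86 billion
    "chicken": 220_000_000,   # roughly 220 million (illustrative estimate)
}

def neuron_weighted_suffering(species: str, raw_suffering: float) -> float:
    """Scale a unit of suffering by the species' neuron count relative to a human's."""
    return raw_suffering * NEURONS[species] / NEURONS["human"]

# One unit of chicken suffering is discounted to ~0.0026 human-equivalent units
# under this scheme, which is why the choice of weighting dominates conclusions.
print(neuron_weighted_suffering("chicken", 1.0))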

 

Many new institutions in EA animal welfare should be built; the area has languished from lack of attention.

  • (There are no bullet points here.)