All of Jenny K E's Comments + Replies

This is a big part of why I used to basically not go to official in-person EA events (I go somewhat more often nowadays, after having gotten more involved in EA, though still not a ton). It makes sense that EA events are like this -- EA is, after all, the topic all the people there have in common -- but it does seem a bit unfortunate for those of us who like hanging out with EAs but aren't interested in talking about EA all the time. Maybe community event organizers should consider occasionally hosting EA events where EA is outright banned as a discussion topic, or, if that's too extreme, events where there's some effort to create/amplify other discussion topics?

I think the separate Community tab is a great idea, thanks for implementing that!

Not about the current changes, but some unrelated site feedback: the "library" button at the bottom of the mobile site leads to what seems to be a set of curated essays and sequences, which is good, but the sequences listed at the top are overwhelmingly on the topic of AI safety, which seems pretty unbalanced -- I'd like to see this tab contain a mix of curated reading recommendations on global poverty, animal welfare, biorisk, AI safety, and other cause areas.

Thank you for writing this! I've been somewhat skeptical that ATLAS is a good use of EA funding myself, but also don't know very much about it, so I appreciate someone who's more familiar with it and its fellows starting this conversation.

My fairly uninformed best guess is that the rumors listed here are a bit misleading / suggestive of problems being more extreme than they actually are, but that these problems do exist. But this is just a guess.

Thanks for writing this! I had that "eugh" feeling up until not that long ago, and it's nice to know other people have felt the same way. 

I'm particularly enthusiastic about more educational materials being created. The AGISF curriculum is good, and would have been very helpful to me if I'd encountered it at the right time. I'd be delighted to see more in that vein.

1
Emily Grundy
1y
I agree, Jenny - I think educational materials, especially those that collate and then walk you through a range of concepts (like AGISF), are really useful.

I learned about this ten months ago, personally, and (in an informal peer context) spoke to one of the people involved about it. The person in question defended the decision by saying they intended to run retreats and ask "Hamming questions". They added that the £15m was an investment, since the castle ("technically it's not a castle") wouldn't depreciate in value. Also, they opined that the EA community as a whole shouldn't have a veto on every large purchase, because consensus decision-making is infeasible on that scale and is likely to result in vetos f... (read more)

25
Jason
1y
In response to the person's point about decision-making, there are ways to promote accountability to all donors, the community, and the general public without turning every decision into a referendum with veto power. Providing sufficiently detailed business justifications after the fact for purchases like this is one of them.

If "the general public" strikes a nerve with anyone, recall that the grantor's home country likely provided an indirect tax subsidy of several million pounds or equivalent on this. If one does not like public scrutiny, one does not have to apply for favored tax status. Then it would be none of the general public's business.

The gender identity question includes options that aren't mutually exclusive; I believe it should either be a checkbox question or should list something along the lines of "cisgender woman, transgender woman, cisgender man, transgender man, nonbinary, other." If you have more questions, feel free to PM me and I'm happy to do my best (as an ally) to answer them.

4
Ryan Fugate
1y
Thanks Jenny - I've just updated that question to be the multi-option checkbox format, appreciate the feedback!

As someone in a somewhat similar position myself (donating to Givewell, vegetarian, exploring AI safety work), this was nice to read. Diversifying is a good way to hedge against uncertainty and to exploit differing comparative advantages in different aspects of one's life.

Kelsey clarified in a tweet that if someone asks for a conversation to be off the record and she isn't willing to keep it off the record, she explicitly tells them so.

Presumably he made some unfounded assumptions about how sympathetic she'd be and whether she'd publish the screenshots, but never asked her not to.

[ETA: Whoops, realized this is answering a different question than the one the poster actually asked -- they wanted to know what individual community members can do, which I don't address here.]

Some concrete suggestions:

- Mandatory trainings for community organizers. This idea is lifted directly from academia, which often mandates trainings of this sort. The professional versions are often quite dumb and involve really annoying unskippable videos; I think a non-dumb EA version would encourage the community organizer to read the content of the community heal... (read more)

1
Ula Zarosa
1y
1. Training seems to me like a good idea. If it can be online (for a large group, e.g. all organizers), very specific (as you mentioned: if this situation occurs -> do this; e.g. if a person reports mistreatment of this sort, we do XYZ), free, and mandatory, it could be very helpful.
2. The centralized page, or e.g. an add-on/button in Swapcard.
3. It should be announced in the intro speech who these designated people are (there should be one male and one female member), and I saw a great idea at EAGxPrague, where they put the photos and contact details of their community health people in the bathrooms (among other places), for the situation when someone runs there because they are anxious, overwhelmed, etc.
4. I agree that telling people to just improve their behavior (especially with solid portions of the community being people with poorer social skills) will definitely not work.

I think maybe the balance I'd strike here is as follows: we always respect nonintervention requests by victims. That is, if the victim says "I was harmed by X, but I think the consequences of me reporting this should not include consequence Y," then we avoid intervening in ways that will cause Y. This is good practice generally, because you never want to disincentivize people from reporting by making reporting have consequences they don't want. Usually the sorts of unwanted consequences in question are things like "I'm afraid of backlas... (read more)

2
Davit Jintcharadze
1y
This is a valid consideration; however, one could argue that if we give victims the option to opt out of the specific consequence that might have been crucial in preventing future wrongdoing by the same person or others, then perpetrators will think they can carry on with their behavior -- especially if the victim decides to opt the perpetrator out of all serious consequences. It could also be the case that victims who are psychologically affected by what happened to them might not be able to make an informed judgment about consequences at that very moment; as we know, everyone has their own time frame for processing the wrongdoing done to them.
3
Guy Raveh
1y
And, after a while, also people who aren't yet victims but know how the community will act (or fail to act) if they become victims, so they just opt out preemptively.

Yes, I agree with what you've written here. "This comes from a place of hurt" was actually meant as hedging/softening, i.e., "because you have had bad experiences, it makes sense for your post to be angry and emotionally charged, and it should not be held to the same epistemic standards as a typical EA Forum post on a less personal issue." Sorry that wasn't clear.

My response was based on my impressions from several years being a woman in EA circles, which are that these issues absolutely do exist and affect an unfortunately high number of women to various extents,... (read more)

Yep, you are totally right about availability bias, and I don't mean to downplay your experience at all -- that's awful, and I'd be delighted to see more efforts by EA groups to prevent this sort of thing.

And yeah, if you don't feel like optimizing for argumentative quality, that's valid, and in that case my comment isn't worth minding! It's not your job to fix these issues, and thank you for taking the time to raise awareness.

4
Keerthana Gopalakrishnan
1y
:)

[Epistemic status: I've done a lot of thinking about these issues previously; I am a female mathematician who has spent several years running mentorship/support groups for women in my academic departments and has also spent a few years in various EA circles.]

I wholeheartedly agree that EA needs to improve with respect to professional/personal life mixing, and that these fuzzy boundaries are especially bad for women. I would love to see more consciousness and effort by EA organizations toward fixing these and related issues. In particular I agree with the f... (read more)

20
J
1y

When you say that the author's experience is uncommon, what evidence do you draw on? Having been an event organizer to whom some women feel safe reporting, I have heard a few reports of a similar nature. That said, even one case is one too many.

To some experiences, anger is an appropriate and healthy response. When you say that this post “comes from a place of hurt”, it sounds as if you’re positioning that as a reason to criticize it. I’m worried that this raises an unreasonable standard for reports of harm. By the nature of the matter, victims have feelings... (read more)

Availability bias informed by personal experience affects our perception of the rate of incidence a lot, so I added this stat:

"Edit: I have personally experienced this more than three times in less than one year of attending EA events and that is far too many times."

I also have two other female friends I talk to, who were involved longer and report higher numbers, but who are not ready to speak up yet.

Also, the post is not optimized for analytical/argumentative quality. My only goal is to speak my mind, share my authentic experience, and bring awareness that t... (read more)

I'd suggest "LEA," which is almost as easy to type as EA.

Thanks so much for writing this. As someone interested in starting to do community building at a university, this was helpful to read, especially the Alice/Bob example and the concrete advice. I really do think that EA could stand to be less big on recruiting HEAs (highly engaged EAs). I think there are tons of people who are interested in EA principles but aren't about to make a career switch, and it's important for those people to feel welcome and like they belong in the community.

I was going to write "I kind of wish this post (or a more concise version) were required readin... (read more)

Elaborating on this, thanks to Spencer Becker-Kahn for prompting me to think about this more:

From a standpoint of my values and what I think is good, I'm an EA. But doing intellectual work, specifically, takes more than just my moral values. I can't work on problems I don't think are cool. I mean, I have, and I did, during undergrad, but it was a huge relief to be done with it after I finished my quals and I have zero desire to go back to it. It would be -- at minimum unsustainable -- for me to try to work on a problem where my main motivation for doing it... (read more)

Thanks very much for the suggestions, I appreciate it a lot! Zoom In was a fun read -- not very math-y but pretty cool anyway. The Transformers paper also seems kind of fun. I'm not really sure whether it's math-y enough for me to be interested in it qua math...but in any event it was fun to read about, which is a good sign. I guess "degree of mathiness" is only one neuron of several neurons sending signals to the "coolness" layer, if I may misuse metaphors.

The point about checking back in every now and then is a good one; I had been thinking in more binary terms and it's helpful to be reminded that "not yet, maybe later" is also a possible answer to whether to do AI safety research.

I like logic puzzles, and I like programming insofar as it's like logic puzzles. I'm not particularly interested in e.g. economics or physics or philosophy. My preferred type of problem is very clear-cut and abstract, in the sense of being solvable without reference to how the real world works. More "is there an algorithm with tim... (read more)

These are the sorts of things I'm looking for! In that, at first glance, they're a lot of solid "maybe"s where mostly I've been finding "no"s. So that's encouraging -- thank you so much for the suggestions!

I am not intellectually motivated by things on the basis of their impactfulness. If I were, I wouldn't need to ask this question.

1
Jenny K E
2y
Elaborating on this, thanks to Spencer Becker-Kahn for prompting me to think about this more: From a standpoint of my values and what I think is good, I'm an EA. But doing intellectual work, specifically, takes more than just my moral values. I can't work on problems I don't think are cool. I mean, I have, and I did, during undergrad, but it was a huge relief to be done with it after I finished my quals and I have zero desire to go back to it. It would be -- at minimum unsustainable -- for me to try to work on a problem where my main motivation for doing it is "it would be morally good for me to solve this." I struggle a bit with motivation at the best of times, or rather, on the best of problems. So, if I can find something in AI safety that I think is approximately as cool as what I'm currently doing, I'll do it, but the coolness is actually a requirement, because I won't be successful or happy otherwise. I'm not built for it (and I think most EAs aren't; fortunately some of them have different tastes than I do, as to what is or isn't cool).  

Absolutely agree with everything you've said here! AI safety is by no means the only math-y impactful work.

Most of these don't quite feel like what I'm looking for, in that the math is being used to do something useful or valuable but the math itself isn't very pretty. "Racing to the Precipice" looks closest to being the kind of thing I enjoy.

Thank you for the suggestions!

Your points (1) and (2) are ones I know all too well, though it was quite reasonable to point them out in case I didn't, and they may yet prove helpful to other readers of this post.

Regarding Vanessa Kosoy's work, I think I need to know more math to follow it (specifically learning theory, says Ben; for the benefit of those unlucky readers who are not married to him, he wrote his answer in more detail below). I did find myself enjoying reading what parts of the post I could follow, at least.

Regarding the Topos Institute, someone I trust has a low opinion of them; epistemic status secondhand and I don't know the details (though I intend to ask about it).

Thanks very much for the suggestions!

Ah, that's a good way of putting it! I'm much more of a "problem solver."

Cool!

My opinionated takes for problem solvers:

(1) Over time we'll predictably move in the direction from "need theory builders" to "need problem solvers", so even if you look around now and can't find anything, it might be worth checking back every now and again.

(2) I'd look at ELK now for sure, as one of the best and further-in-this-direction things.

(3) Actually many things have at least some interesting problems to solve as you get deep enough. Like I expect curricula teaching ML to very much not do this, but if you have mastery of ML and are trying to a... (read more)

The second and third strike me as useful ideas and kind of conceptually cool, but not terribly math-y; rather than feeling like these are interesting math problems, the math feels almost like an afterthought. (I've read a little about corrigibility before, and had the same feeling then.) The first is the coolest, but also seems like the least practical -- doing math about weird simulation thought experiments is fun but I don't personally expect it to come to much use.

Thank you for sharing all of these! I sincerely appreciate the help collecting data about how existing AI work does or doesn't mesh with my particular sensibilities.

8
Owen Cotton-Barratt
2y
To me they feel like pre-formal math? Like, the discussion of corrigibility gives me a tingly sense of "there's what on the surface looks like an interesting concept here, and now the math-y question is whether one can formulate definitions which capture it and give something worth exploring." (I definitely identify more with the "theory builder" of Gowers's two cultures.)
2
Max_Daniel
2y
Maybe the notes on 'ascription universality' on ai-alignment.com are a better match for your sensibilities.

My favorite fields of math are abstract algebra, algebraic topology, graph theory, and computational complexity. The latter two are my current research fields. This may seem to contradict my claim of being a pure mathematician, but I think my natural approach to research is a pure mathematician's approach, and I have on many occasions jokingly lamented the fact that TCS is in the CS department, instead of in the math department where it belongs. (This joke is meant as a statement about my own preferences, not a claim about how the world should be.)

Some exa... (read more)

2
Mo Putera
2y
I'm guessing you've already made up your mind on this since it's been a few months, but since you mentioned computational complexity being your research field, you might be interested to know that Scott Aaronson was persuaded by Jan Leike to spend a year at OpenAI working on AI safety. (Scott admitted, like you, that he basically needed to be nerd-sniped into working on problems; "this is very important so you must work on it" doesn't work in practice.) Quoting Scott a bit more (and adding bullets): [...] That said, these mostly lean towards theory-builders, and you mentioned upthread being more problem-solver-oriented, so they probably aren't as interesting.