All of Elityre's Comments + Replies

@Kat Woods 

I'm trying to piece together a timeline of events. 

You say in the evidence doc that

days after starting at Nonlinear, Alice left to spend a whole month with her family. We even paid her for 3 of the 4 weeks despite her not doing much work. (To be fair, she was sick.)

Can you tell me what month this was? Does this mean just after she quit her previous job or just after she started traveling with you?

7
Kat Woods
3mo
Late February to late March.  She'd quit her previous job a while back. 

FWIW, that was not obvious to me on first reading, until the comments pointed it out to me.

Mostly I find it ironic, given that Ben says his original post was motivated by a sense that there was a pervasive silencing effect, where people felt unwilling to share their negative experiences with Nonlinear for fear of reprisal.

Why might humans evolve a rejection of things that taste too sweet? What fitness-reducing thing does "eating oversweet things" correlate with? Or is it a spandrel of something else?

If this is true, it's fascinating, because it suggests that our preferences for cold and carbonation are a kind of specification gaming!

 

Ok. Given all that, is there a particular thing that you wish Ben (or someone) had done differently here? Or are you mostly wanting to point out the dynamic?

I want to try to paraphrase what I hear you saying in this comment thread, Holly. Please feel free to correct any mistakes or misframings in my paraphrase.

I hear you saying...

  • Lightcone culture has a relatively specific morality around integrity and transparency. Those norms are consistent, and maybe good, but they're not necessarily shared by the EA community or the broader world.
  • Under those norms, actions like threatening your ex-employees' career prospects to prevent them from sharing negative info about you are very bad, while in broader culture a "you
... (read more)
2
Holly_Elmore
6mo
Yes, very good summary!

Elityre
7mo

Crossposted from LessWrong (link)

Maybe I'm missing something, but it seems like it should take less than an hour to read the post, make a note of every claim that's not true, and then post that list of false claims, even if it would take many days to collect all the evidence that shows those points are false.

I imagine that would be helpful for you, because readers are much more likely to reserve judgement if you listed which specific things are false. 

Personally, I could look over that list and say "oh yeah, number 8 [or whatever] is cruxy for me. If t... (read more)

(2) I think something odd about the comments claiming that this post is full of misinformation is that they don't correct any of the misinformation. Like, I get that assembling receipts, evidence, etc. can take a while, and writing a full rebuttal of this would take a while. But if there are false claims in the post, pick one and say why it's false.

Seconding this. 

I would be pretty interested to read a comment from nonlinear folks listing out everything that they believe to be false in the narrative as stated, even if they can't substantiate their counter-claims yet.

I agree that if it were just a few disputed claims, that would be a reasonable thing to do, but there are so many. And there is so much nuance.

Here is one example, however. This took us hours to prepare, just to rebut a single false claim:

https://forum.effectivealtruism.org/posts/5pksH3SbQzaniX96b/a-quick-update-from-nonlinear

I recommend that you use a spoiler tag for that last part. Not everyone who wants to has finished the story!

-1
burner
7mo
Edited, thank you!

I imagine that most of the disagreement is with the (implied, but not stated) conditional "that Owen did this means that decent men don't exist".

I want to know if you can find more people or companies that have experienced a similar thing with the FDA. 

Is there a reddit or discussion forum where people discuss and commiserate about FDA threats like this one? Can you find people there, and then verify that they / their experiences are real?

As a naive outsider, it seems to me like all of the specific actions you suggest would be stronger and more compelling if you can muster a legitimate claim that this is a pattern of behavior and not just a one-off. An article with one source making an accusation... (read more)

5
vaniver
2y
Is this the case? Often the reaction to the 'first transgression' will determine whether or not to do future ones--if people let it slide, then probably they don't care that much, whereas if they react strongly, it's important to repent and not do it again. And when there are patterns of behavior, especially in cases with significant power dynamics, it seems unlikely that you'd be able to collect such stories (in a usable way) without there being a prominent example of someone who shared their story and it went well for them.

I know that I was wrong because people of the global majority continuously speak in safe spaces about how they feel unsafe in EA spaces. They speak about how they feel harmed by the kinds of things discussed in EA spaces. And they speak about how there are some people — not everyone, but some people — who don’t seem to be just participating in open debate in order to get at the truth, but rather seem to be using the ideals of open discussion to be a cloak that can hide their intent to do harm.

I'm not sure what to say to this.

A... (read more)

Some speech is harmful. Even speech that seems relatively harmless to you might be horribly upsetting for others. I know this firsthand because I’ve seen it myself.

I want to distinguish between "harmful" and "upsetting". It seems to me that there is a big difference between shouting 'FIRE' in a crowded theater or "commanding others to do direct harm" on the one hand, and "being unable to focus for hours" after reading a Facebook thread or being exhausted from fielding questions on the other.

My intuitive grasp of the... (read more)

4
EricHerboso
4y
We agree here that if something is bad for you, you can just not go into the place where that thing is. But I think this is an argument in favor of my position: that there should be EA spaces where people like that can go and discuss EA-related stuff. For example, some people have to go to the EAA facebook thread as a part of their job. They are there to talk about animal stuff. So when people come into a thread about how to be antiracist while helping animals and decide to argue vociferously that racism doesn't exist, that is just needlessly inappropriate. It's not that the issue shouldn't ever be discussed; it's that the issue shouldn't be discussed there, in that thread. We should allow people to be able to work on EA stuff without having to be around the kind of stuff that is bad for them. If they feel unable to discuss certain topics without feeling badly, let them not go into threads on the EA forum that discuss those topics. This we agree on. But then why say that we can't have a lesser EA space (like an EA facebook group) for them where they can interact without discussion on the topics that make them feel badly? Remember, some of these people are employees whose very job description may require them to be active on the EAA facebook group. They don't have a choice here; we do.

I think this comment says what I was getting at in my own reply, though more strongly.

First of all, I took this comment to be sincere and in the spirit of dialog. Thank you and salutations.

[Everything that I say in this comment is tentative, and I may change my mind.]

Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating
... (read more)
7
EricHerboso
4y
We are agreed that truth is of paramount importance here. If a true conclusion alienates someone, I endorse not letting that alienation sway us. But I think we disagree on two points:

1. I believe diversity is a serious benefit. Not just in terms of movement building, but in terms of arriving at truth. Homogeneity breeds blind spots in our thinking. If a supposed truth is arrived at, but only one group recognizes it as truth, doesn’t that make us suspect whether we are correct? To me, good truth-seeking almost requires diversity in several different forms. Not just philosophical diversity, but diversity in how we’ve come up in the world, in how we’ve experienced things. Specifically including BIPGM seems to me to be very important in ensuring that we arrive at true conclusions.

2. I believe the methods of how we arrive at true conclusions don’t need to be Alastor Moody-levels of constant vigilance. We don’t have to rigidly enforce norms of full open debate all the time.

I think the latter disagreement we have is pretty strong, given your willingness to bite the bullet on holocaust denial. Sure, we never know anything for sure, but when you get to a certain point, I feel like it’s okay to restrict debate on a topic to specialized places. I want to say something like “we have enough evidence that racism is real that we don’t need to discuss it here; if you want to debate that, go to this other space”, and I want to say it because discussing racism as though it doesn’t exist causes a level of harm that may rise to the equivalent of physical harm in some people. I’m not saying we have to coddle anyone, but if we can reduce that harm for almost no cost, I’m willing to. To me, restricting debate in a limited way on a specific Facebook thread is almost no cost. We already restrict debate in other, similar ways: no name calling, no doxxing, no brigading. In the EAA FB group, we take as a given that animals are harmed and we should help them. We restrict debate on that t

I don't follow how what you're saying is a response to what I was saying.

I think a model by which people gradually "warm up" to "more advanced" discourse norms is false.

I wasn't saying "the point of different discourse norms in different EA spaces is that it will gradually train people into more advanced discourse norms." I was saying that if I was mistaken about that "warming up effect", it would cause me to reconsider my view here.

In the comment above, I am only saying that I think it is a mistake to have different discourse norms at the core vs. the periphery of the movement.

I think there is a lot of detail and complexity here and I don't think that this comment is going to do it justice, but I want to signal that I'm open to dialog about these things.

For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.

On the face of it, this seems like a bad idea to me. I don't want "introductory" EA spaces to have ... (read more)

Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”

In this scenario, I hope that we’d both agree that it would be appropriate for u... (read more)

"I think a model by which people gradually "warm up" to "more advanced" discourse norms is false."

I don't think that's the main benefit of disallowing certain forms of speech at certain events. I'd imagine it'd be to avoid making EA events attractive and easily accessible for, say, white supremacists. I'd like to make it pretty costly for a white supremacist to be able to share their ideas at an EA event.

On the most crucial topics, and in capturing the nuance and complexity of the real world, this piece fails again and again: epistemic overconfidence plus uncharitable disdain for the work of others, spread thinly over as many topics as possible.

Interestingly, this reminds me of Nassim Nicholas Taleb.

Another thing for people to keep in mind:

Apparently, if you want loan forgiveness, you can only spend 8 weeks' worth of the money on payroll.

From here,

If you’re a sole proprietor, you can have eight weeks of the loan forgiven as a replacement for lost profit. But you’ll need to provide documentation for the remaining two weeks worth of cash flow, proving you spent it on mortgage interest, rent, lease, and utility payments.

So if at some point you need to check boxes saying what you're applying for this loan for, and you can check more ... (read more)
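
A minimal sketch of the split described in the quote above, treating the 8-week/2-week breakdown and the example loan amount as illustrative assumptions taken from that quote rather than verified program rules:

```python
# Rough sketch: split a sole-proprietor loan into the portion forgivable as
# replaced profit and the remainder that needs expense documentation.
# The 8-of-10-weeks rule and the $10,000 example are assumptions for
# illustration, taken from the quote above -- check current guidance.

def forgiveness_split(loan_amount, total_weeks=10, profit_weeks=8):
    per_week = loan_amount / total_weeks
    forgivable_as_profit = per_week * profit_weeks
    needs_documentation = loan_amount - forgivable_as_profit
    return forgivable_as_profit, needs_documentation

profit_part, documented_part = forgiveness_split(10_000)
print(f"Forgivable as lost profit: ${profit_part:,.0f}")                      # $8,000
print(f"Needs rent/utility/mortgage documentation: ${documented_part:,.0f}")  # $2,000
```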

I recommend that everyone who is eligible apply through US Bank ASAP.

Other lenders might still work, but US Bank was by far the fastest. A person that I was coaching through this process and I both received our loans within 4 days of initially filling out their application (I say "initially" because there were several steps where they needed additional info).

Also, we now know that the correct answer to how many employees you have is "0 employees, it's just me", not "1 employee, because I employ myself."

An email I received from Bench reads: "If your bank isn’t participating, your next best option is to apply through Fundera—they will match you with the best lender."

However, when I tried to fill out their application, they asked me to upload...

  • a business bank statement,
  • a copy of my drivers license,
  • proof of payroll (IRS Form 941),
  • and a voided business check,

...of which I have only one out of three.

2
EdoArad
4y
I think that HowieL did not close the square bracket (but then edited so that it now looks fine).

I think a lot of this is right and important, but I especially love:

Don't let the fact that Bill Gates saved a million lives keep you from saving one.

We're all doing the best we can with the privileges we were blessed with.

I like the breakdown of those two bullet points, a lot, and I want to think more about them.

Both of these I think are fairly easily measurable from looking at someone's past work and talking to them, though.

I bet that you could do that, yes. But that seems like a different question than making a scalable system that can do it.

In any case, Ben articulates, above, the view that generated the comment above.

[Edit: it'd be very strange if we end up preferring candidates who hadn't thought about AI at all to candidates who had thought some about AI but don't have specific plans for it.]

That doesn't seem that strange to me. It seems to mostly be a matter of timing.

Yes, eventually we'll be in an endgame where the great powers are making substantial choices about how powerful AI systems will be deployed. And at that point I want the relevant decision makers to have sophisticated views about AI risk and astronomical stakes.

But in the the... (read more)

This is a quote from somewhere? From where?

2
Hauke Hillebrandt
5y
Sorry if that was unclear, but it's the title of the paper by Einstein: https://journals.aps.org/pr/abstract/10.1103/PhysRev.47.777 It's also known as the Paradox paper, which is where you might know it from: https://en.wikipedia.org/wiki/EPR_paradox

At the moment, not really.

There's the classic Double Crux post. Also, here's a post I wrote, that touches on one sub-skill (out of something like 50 to 70 sub-skills that I currently know). Maybe it helps give the flavor.

If I were to say what I'm trying to do in a sentence: "Help the participants actually understand each other." Most people generally underestimate how hard this is, which is a large part of the problem.

The good thing that I'm aiming for in a conversation is when "that absurd / confused thing that X-person... (read more)

5
Moses
5y
Yes, that helps, thanks. "Mediating" might be a word which would convey the idea better.

I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka

Hear, hear.

I feel proud of the commitment to epistemic integrity that I see here.

[Are there ways to delete a comment? I started to write a comment here, and then added a bit to the top-level instead. Now I can't make this comment go away?]

A small correction:

Facilitating conversations between top people in AI alignment (I’ve in particular heard very good things about the 3-day conversation between Eric Drexler and Scott Garrabrant that Eli facilitated)

I do indeed facilitate conversations between high level people in AI alignment. I have a standing offer to help with difficult conversations / intractable disagreements, between people working on x-risk or other EA causes.

(I'm aiming to develop methods for resolving the most intractable disagreements in the space. The more direct experien... (read more)

3
Habryka
5y
Will update to say "help facilitate". Thanks for the correction!
3
Moses
5y
Is there any resource (eg blogpost) for people curious about what "facilitating conversations" involves?
0
Elityre
5y
[Are there ways to delete a comment? I started to write a comment here, and then added a bit to the top-level instead. Now I can't make this comment go away?]

(Eli's personal notes, mostly for his own understanding. Feel free to respond if you want.)

1. It seems pretty likely that early advanced AI systems won't be understandable in terms of HRAD's formalisms, in which case HRAD won't be useful as a description of how these systems should reason and make decisions.

My current guess is that the finalized HRAD formalisms would be general enough that they will provide meaningful insight into early advanced AI systems (even supposing that the development of those early systems is not influenced by HRAD ideas), in much the same way that Pearlian causality and Bayes nets give (a little) insight into what neural nets are doing.

I'm not sure I follow. The question asks what the participants think is most important, which may or may not be diversity of perspectives. At least some people think that diversity of perspectives is a misguided goal, that erodes core values.

Are you saying that this implies that "EA wants more of the same" because some new EA (call him Alex) will be paired with a partner (Barbra) who gives one of the above answers, and then Alex will presume that what Barbra said was the "party line" or "the EA answer" or "what everyone thinks"?

3
Jon_Behar
5y
EA skews young, white, male, and quantitative. Imagine you’re someone who doesn’t fit that profile but has EA values, and is trying to decide “is EA for me?” You go to EA Global (where the audience is not very diverse) and go to a Double Crux workshop. If most of the people talk about prioritizing adding AI researchers and hedge fund people (fields that skew young, male, and quanty) it might not feel very welcoming. Basically, I think the question is framed so that it produces a negative externality for the community. And you could probably tweak the framing to produce a positive externality for the community, so I’d suggest considering that option unless there’s a compelling reason to favor the current framing. People can have a valuable discussion about which new perspectives would be helpful to add, even if they don’t think increasing diversity of perspectives is EA’s most important priority.

I like these modified questions.

The reason why the original formulations are what they are is to get out of the trap of everyone agreeing that "good things are good", and to draw out specific disagreements.

The intention is that each of these has some sort of crisp "yes or no" or "we should or shouldn't prioritize X". But also the crisp "yes or no" is rooted in a detailed, and potentially original, model.

What sort of discussions does this question generate?

Here are demographics that I've heard people list.

  • AI researchers (because of relevance to x-risk)
  • Teachers (for spreading the movement)
  • Hedge fund people (who are rich and analytical)
  • Startup founders (who are ambitious and agenty)
  • Young people / college students (because they're the only people that can be sold on weird ideas like EA)
  • Ops people (because 80k and CEA said that's what EA needs)

All of these have very different implications about what is most important on the margin in EA.

1
Jon_Behar
5y
Aside from Ops people, I’d guess the other five groups are already strongly overrepresented in EA. This exercise may be sending an unintended message that “EA wants more of the same”, and I suspect you could tweak the question to convey “EA values diverse perspectives” without sacrificing any quality in the discussion. Over the long-term, you’ll get much better discussions because they’ll incorporate a broader set of perspectives.

I think using EA examples in the double crux game may be a bad idea because it will inadvertently lead EAs to come away with a more simplistic impression of these issues than they should.

I mostly teach Double Crux and related at CFAR workshops (the mainline, and speciality / alumni workshops). I've taught it at EAG 4 times (twice in 2017), and I can only observe a few participants in a session. So my n is small, and I'm very unsure.

But it seems to me that using EA examples mostly has the effect of fleshing out understanding of other EA's views, more t... (read more)

0
Evan_Gaensbauer
5y
Yeah, reading your comments has assuaged my concerns since based on your observations the sign of the consequences of double-cruxing on EA example questions seems more unclear than clearly negative, and likely slightly positive. In general it seems like a neat exercise that is interesting but just doesn't provide enough time to leave EAs with any impression of these issues much stronger than the one they came in with. I am still thinking of making a Google Form with my version of the questions, and then posing them to EAs, to see what kind of responses are generated as an (uncontrolled) experiment. I'll let you know if I do so.

I strongly agree that more EAs doing independent thinking is really important, and I'm very interested in interventions that push in that direction. In my capacity as a CFAR instructor and curriculum developer, figuring out ways to do this is close to my main goal.

I think many individual EAs should be challenged to generate less confused models on these topics, and from there between models is when deliberation like double crux should start.

Strongly agree.

I don't think in the span of only a couple minutes either side of a double crux game will generate

... (read more)

I'm not sure how much having a "watered down" version of EA ideas in the zeitgeist helps, because I don't have a clear sense of how effective most charities are.

If the difference between the median charity and the most impactful charity is 4 orders of magnitude ($1 to the most impactful charities does as much good as $10,000 to the median charity), then even a 100x improvement from the median charity is not very impactful. It's still only 1% as good as donating to the best charity. If that were the case, it's probably more efficient to just aim... (read more)
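
(A toy version of the arithmetic above; the multipliers are the comment's illustrative assumptions, not empirical estimates.)

```python
# Toy calculation of the comparison in the comment above. The 10,000x spread
# and the 100x improvement are illustrative assumptions, not real estimates.

median_impact = 1                      # good done per $1 to the median charity (arbitrary units)
best_impact = 10_000 * median_impact   # ~4 orders of magnitude better than the median
improved_impact = 100 * median_impact  # a charity 100x better than the median

relative_to_best = improved_impact / best_impact
print(f"A 100x-better-than-median charity is {relative_to_best:.0%} as effective as the best one.")  # 1%
```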

1
Jon_Behar
5y
Definitely agree on the value of spreading basic principles, though I think we also need to focus on some charity-specific themes given that we want to change giving behavior. In addition to the general frameworks you mention, I think it’s valuable to promote “intentional”, “informed”, and “impactful” giving as these are very uncontroversial ideas. And while it’s most valuable when someone buys into all three of those notions in a big way, there’s also value to getting a lot of people to buy in partially. If millions more people see the value of informed giving, incentives will improve and new products will emerge to meet that demand. FWIW, I think the more accessible approach makes sense even in a world with huge variation in impact across charities. I think you’ll get more money to the “elite” charities if you have a culture where people seek out the best cancer charity they can find, the best local org they can find, etc vs trying “to get more people to adopt the whole EA mindset.”

It seems to me that in many cases the specific skills that are needed are both extremely rare and not well captured by the standard categories.

For instance, Paul Christiano seems to me to be an enormous asset to solving the core problems of AI safety. If "we didn't have a Paul" I would be willing to trade huge amounts of EA resources to have him working on AI safety, and I would similarly trade huge resources to get another Paul-equivalent working on the problem.

But it doesn't seem like Paul's skillset is one that I can easily select for. He's kn... (read more)

7
toonalfrink
5y
How about this: you, as someone already grappling with these problems, present some existing problems to a recruitee, and ask them to come up with some one-paragraph descriptions of original solutions. You read these, and introspect whether they give you a sense of traction/quality, or match solutions that have been proposed by experts you trust (that they haven't heard of). I'm looking to do a pilot for this. If anyone would like to join, message me.

There aren't many people with PhD-level research experience in relevant fields who are focusing on AI safety, so I think it's a bit early to conclude these skills are "extremely rare" amongst qualified individuals.

AI safety research spans a broad range of areas, but for the more ML-oriented research the skills are, unsurprisingly, not that different from other fields of ML research. There are two main differences I've noticed:

  • In AI safety you often have to turn ill-defined, messy intuitions into formal problem statements before you can start w
... (read more)

In the short term, senior hires are most likely to come from finding and onboarding people who already have the required skills, experience, credentials and intrinsic motivation to reduce x-risks.

Can you be more specific about what the required skills and experience are?

Skimming the report, you say "All senior hires require exceptionally good judgement and decision-making." Can you be more specific about what that means and how it can be assessed?

7
oliverbramford
5y
The required skills and experience of senior hires vary between fields and roles; senior x-risk staff are probably best-placed to specify these requirements in their respective domains of work. You can look at x-risk job ads and recruitment webpages of leading x-risk orgs for some reasonable guidance. (We are developing a set of profiles for prospective high-impact talent, to give a more nuanced picture of who's required.)

"Exceptionally good judgement and decision-making", for senior x-risk talent, I believe requires:

  • a thorough and nuanced understanding of EA concepts and how they apply to the context
  • good pragmatic foresight - an intuitive grasp of the likely and possible implications of one's actions
  • a conscientious risk-aware attitude, with the ability to think clearly and creatively to identify failure modes

Assessing good judgement and decision-making is hard; it's particularly hard to assess the consistency of a person's judgement without knowing/working with them over at least several months. Some methods:

  • Speaking to a person can quickly clarify their level of knowledge of EA concepts and how they apply to the context of their role.
  • Speaking to references could be very helpful, to get a picture of how a person updates their beliefs and actions.
  • Actually working with them (perhaps via a work trial, partnership or consultancy project) is probably the best way to test whether a person is suitable for the role.
  • A critical thinking psychometric test may plausibly be a good preliminary filter, but is perhaps more relevant for junior talent. A low score would be a big red flag, but a high score is far from sufficient to imply overall good judgement and decision-making.

Intellectual contributions to the rationality community: including CFAR’s class on goal factoring

Just a note. I think this might be a bit misleading. Geoff, and other members of Leverage Research, taught a version of goal factoring at some early CFAR workshops. And Leverage did develop a version of goal factoring inspired by CT. But my understanding is that CFAR staff independently developed goal factoring (starting from an attempt to teach applied consequentialism), and this is an instance of parallel development.

[I work for CFAR, though I had not yet joined the EA or rationality community in those early days. I am reporting what other longstanding CFAR staff told me.]