All of elle's Comments + Replies

"...but do not think that they are smart or committed enough to be engaging at your level?" was intended to be from a generic insecure (or realistic) EA's perspective, not yours. Sorry for my confusing phrasing.

How long ago did you attend your CFAR workshop? My sense is that the content CFAR teaches and who the teachers are have changed a lot over the years. Maybe they've gotten better (or worse?) about teaching the "true form."

(Or maybe you were saying you also didn't get the "true form" even in the more recent AIRCS workshops?)

So, to clarify: this program is for people who are already mostly sure they want to work on AI Safety? That is, a person who is excited about ML, and would maaaaybe be interested in working on safety-related topics, if they found those topics interesting, is not who you are targeting?

Buck · 4y · 3
Yeah, I am not targeting that kind of person. Someone who is excited about ML and skeptical of AI safety but interested in engaging a lot with AI safety arguments for a few months might be a good fit.
elle · 4y · 14

If you feel comfortable sharing: who are the people whose judgment on this topic you think is better?

elle · 4y · 12

Yeah, I am sympathetic to that. I am curious how you decide where to draw the line here. For instance, you were willing to express judgment of QRI elsewhere in the comments.

Would it be possible to briefly list the people or orgs whose work you *most* respect? Or would the omissions be too obvious?

I sometimes wish there were good ways to more broadly disseminate negative judgments or critiques of orgs/people from thoughtful and well-connected people. But, understandably, people are sensitive to that kind of thing, and it can end up eating a lot of time and weakening relationships.


What are your regular go-to sources of information online? That is, are there certain blogs you religiously read? Vox? Do you follow the EA Forum or LessWrong? Do you mostly read papers that you find through some search algorithm you previously set up? Etc.

Buck · 4y · 4
I don't consume information online very intentionally. Blogs I often read:

* Slate Star Codex
* The Unit of Caring
* Bryan Caplan (despite disagreeing with him a lot, obviously)
* Meteuphoric (Katja Grace)
* Paul Christiano's various blogs

I often read the Alignment Newsletter. I mostly learn things from hearing about them from friends.
elle · 4y · 11

4) You seem to have had a naturally strong critical thinking streak since you were quite young (e.g., you talk about thinking that various mainstream ideas were dumb). Any unique advice for how to develop this skill in people who do not have it naturally?

Buck · 4y · 27

For the record, I think that I had mediocre judgement in the past and did not reliably believe true things, and I sometimes made really foolish decisions. I think my experience is mostly that I felt extremely alienated from society, which meant that I looked more critically at many common beliefs than most people do. This meant I was weird in lots of ways, many of which were bad and some of which were good. And in some cases this meant that I believed some weird things that feel like easy wins, eg by thinking that people were absurdly callous about cau... (read more)

elle · 4y · 11

3) I've seen several places where you criticize fellow EAs for their lack of engagement or critical thinking. For example, three years ago, you wrote:

I also have criticisms about EAs being overconfident and acting as if they know way more than they do about a wide variety of things, but my criticisms are very different from [Holden's criticisms]. For example, I’m super unimpressed that so many EAs didn’t know that GiveWell thinks that deworming has a relatively low probability of very high impact. I’m also unimpressed by how
... (read more)
Buck · 4y · 12

I no longer feel annoyed about this. I'm not quite sure why. Part of it is probably that I'm a lot more sympathetic when EAs don't know things about AI safety than global poverty, because learning about AI safety seems much harder, and I think I hear relatively more discussion of AI safety now compared to three years ago.

One hypothesis is that 80000 Hours has made various EA ideas more accessible and well-known within the community, via their podcast and maybe their articles.

elle · 4y · 27

2) Somewhat relatedly, there seems to be a lot of angst within EA related to intelligence / power / funding / jobs / respect / social status / etc., and I am curious if you have any interesting thoughts about that.

Buck · 4y · 27

I feel really sad about it. I think EA should probably have a communication strategy where we say relatively simple messages like "we think talented college graduates should do X and Y", but this causes collateral damage where people who don't succeed at doing X and Y feel bad about themselves. I don't know what to do about this, except to say that I have the utmost respect in my heart for people who really want to do the right thing and are trying their best.

I don't think I have very coherent or reasoned thoughts on how we should handle this, so I try to defer to people I trust whose judgement on these topics I think is better than mine.

elle · 4y · 15

1) Do you have any advice for people who want to be involved in EA, but do not think that they are smart or committed enough to be engaging at your level? Do you think there are good roles for such people in this community / movement / whatever? If so, what are those roles?

I used to expect 80,000 Hours to tell me how to have an impactful career. Recently, I've started thinking it's basically my own personal responsibility to figure it out. I think this shift has made me much happier and much more likely to have an impactful career.

80,000 Hours targets the most professionally successful people in the world. That's probably the right idea for them - giving good career advice takes a lot of time and effort, and they can't help everyone, so they should focus on the people with the most career potential.

But, ... (read more)

Buck · 4y · 10

"Do you have any advice for people who want to be involved in EA, but do not think that they are smart or committed enough to be engaging at your level?"--I just want to say that I wouldn't have phrased it quite like that.

One role that I've been excited about recently is making local groups be good. I think that having better local EA communities might be really helpful for outreach, and lots of different people can do great work with this.

elle · 4y · 56

Reading through some of your blog posts and other writing, I get the impression that you put a lot of weight on how smart people seem to you. You often describe people or ideas as "smart" or "dumb," and you seem interested in finding the smartest people to talk to or bring into EA.

I am feeling a bit confused by my reactions. I think I am both a) excited by the idea of getting the "smart people" together so that they can help each other think through complicated topics and make more good things happen, but b) I feel a bit sad a... (read more)


elle · 4y · 28

In "Ways I've changed my mind about effective altruism over the past year" you write:

I feel very concerned by the relative lack of good quality discussion and analysis of EA topics. I feel like everyone who isn’t working at an EA org is at a massive disadvantage when it comes to understanding important EA topics, and only a few people manage to be motivated enough to have a really broad and deep understanding of EA without either working at an EA org or being really close to someone who does.

I am not sure if you still feel this way, b... (read more)

Buck · 4y · 10

I still feel this way, and I've been trying to think of ways to reduce this problem. I think the AIRCS workshops help a bit, that my SSC trip was helpful, and that EA residencies might be helpful.

A few helpful conversations that I've had recently with people who are strongly connected to the professional EA community, which I think would be harder to have without information gained from these strong connections:

  • I enjoyed a document about AI timelines that someone from another org shared with me.
  • Discussions about how EA outreach should go--how rap
... (read more)

Is there any public information on the AI Safety Retraining Program other than the MIRI Summer Update and the Open Phil grant page?

I am wondering:

1) Who should apply? How do they apply?

2) Have there been any results yet? I see two grants were given as of Sep 1st; has either of those been completed? If so, what were the outcomes?

Buck · 4y · 8
I don't think there's any other public information. To apply, people should email me asking about it (buck@intelligence.org). The three people who've received one of these grants were all people who I ran across in my MIRI recruiting efforts. Two grants have been completed and a third is ongoing. Of the two people who completed grants, both successfully replicated several deep RL papers, and one of them ended up getting a job working on AI safety stuff (the other took a data science job and hopes to work on AI safety at some point in the future). I'm happy to answer more questions about this.

You write:

"I think that the field of AI safety is growing in an awkward way. Lots of people are trying to work on it, and many of these people have pretty different pictures of what the problem is and how we should try to work on it. How should we handle this? How should you try to work in a field when at least half the "experts" are going to think that your research direction is misguided?"

What are your preliminary thoughts on the answers to these questions?

In your opinion, what are the most helpful organizations or groups working on AI safety right now? And why?

In parallel: what are the least helpful organizations or groups working on (or claiming to work on) AI safety right now? And why?


Buck · 4y · 4
I feel reluctant to answer this question because it feels like it would involve casting judgement on lots of people publicly. I think that there are a bunch of different orgs and people doing good work on AI safety.
elle · 4y · 14

This article reminded me that I sometimes wish for a competitor to 80K. 80K tries to do a lot: research and write good career advice, coach people, create advanced and accessible EA-related content (via the podcast), network with and connect extremely high-impact people... It is not obvious to me that these activities should all be grouped together. Maybe they should. They have some synergies.

But, for instance, my impression is that it is difficult to receive coaching from 80K, unless you are especially impressive. Perhaps there could be a career-coaching... (read more)

Lars Mennen · 3y · 1
Great point on lowering the bar for coaching. fwiw, I think multiple local EA groups are now offering career coaching. I don't know to what extent these are designed from scratch or follow 80K's approach, but they should lower the bar. I think having a separate organization (perhaps allowing local groups to offer the programs) could help with this, though.
elle · 5y · 20

I broadly agree with this article, but some part of me felt... uncomfortable?... with the topic. So I tried to give voice to that part of me. Very uncertain about this, and it is a bit confusing.

--

I think we often build up pictures/stories of ourselves based on our regular habits/actions. If I exercise every day, I start to think of myself as athletic/healthy/strong. If I wipe the counters in the kitchen, it contributes to my sense of responsibility/care-taking/cleanliness. If I do X that I believe is wasteful (i.e. common opinion says that X is wast... (read more)

Kirsten · 5y · 6
I agree that everyday actions shape our self-perception. I don't believe this has to be all-or-nothing. I have a lot of friends who pride themselves on not being wasteful, but don't know how to sew and won't patch up old clothes. This habit of throwing out holey clothes doesn't stop them from eating the leftovers in their fridge or spending their money carefully.

There are a lot of small actions we can do to improve the world. Many of these will also reinforce our identities as caring and thoughtful people. In that sense, they're helpful and aspiring EAs should continue doing them.

However, I don't think EA as a community should promote these small actions, unless they're particularly cost-effective. I think prioritising a list of, say, 15 small actions counts as promoting them, because people might feel like they should adopt the top small actions, when actually I think people should just keep doing what they're doing and focus on big wins.
Believing that my time is really valuable can lead to me making more wasteful decisions. Decisions like: "It is totally fine for me to buy all these expensive ergonomic keyboards simultaneously on Amazon and try them out, then throw away whichever ones do not work for me." Or "I will buy this expensive exercise equipment on a whim to test out. Even if I only use it once and end up trashing it a year later, it does not matter."
...
The thinking in the examples above worries me. People are bad at reasoning about when to make exceptions to r
... (read more)

I appreciate the other comments about how the model does not take into account the base impact of the charities. Also:

1) Do trustees take on some form of legal responsibility for charities? If so, it could be risky to get involved with a charity you know little about.

2) Do you know how many other trustees are typically already involved? If you are one of three, I could see you having an easier time influencing a charity than if you are, say, one of seven.

Nathan Young · 5y · 1
1) I think you would get to research a charity before agreeing. 2) I don't know the answer.

Personal opinion: the circular layout seems more useful. I like that it more clearly demonstrates a) entities that are connected to only one other entity in the graph (example: Inst. Phil. Research is only connected to BERI, Thiel is only connected to MIRI), and b) how many arrows are going into each node (example: it's easier to see that MIRI has the widest range of supporters in this group, followed by CEA and CFAR).
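
For what it's worth, here is a minimal sketch of those two properties, assuming Python with networkx/matplotlib (not necessarily the tooling the original post used) and a hypothetical subset of the funding edges:

```python
# Minimal sketch, assuming networkx/matplotlib and a hypothetical
# subset of the funding graph; edge (A, B) means "A supports B".
import matplotlib.pyplot as plt
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("Thiel", "MIRI"),
    ("Inst. Phil. Research", "BERI"),
    ("Open Phil", "MIRI"),
    ("Open Phil", "CEA"),
    ("Open Phil", "CFAR"),
])

# (b) Arrows going into each node: in-degree counts an org's supporters.
print(sorted(G.in_degree(), key=lambda pair: -pair[1]))

# (a) A circular layout places every node on one ring, so singly-connected
# entities and the fan-in of arrows are visible at a glance.
pos = nx.circular_layout(G)
nx.draw(G, pos, with_labels=True, node_color="lightgray")
plt.show()
```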

Isn't EA Grants part of CEA? Perhaps there should be an arrow from CEA to EA Grants if so.

Does this mean you no longer endorse the original statement you made ("there is little evidence of benefit from schooling")?

I'm feeling confused... I basically agreed with Khorton's skepticism about that original claim, and now it sounds like you agree with Khorton too. It seems like you, in fact, believe something quite different from the original claim; your actual belief is something more like: "for some children, the benefits of schooling will not outweigh the torturous experience of attending school." But it doesn't ... (read more)

RomeoStevens · 5y · 1
I think there are two claims. I stand by both, but think arguing them simultaneously causes something like a motte-and-bailey problem to rear its head.

I am confused about what exactly you are trying to communicate with this post and its partner (https://forum.effectivealtruism.org/posts/FDczXfT4xetcRRWtm/an-effective-altruist-plan-for-socialism). My sense is that you are saying something like:
1) Look, socialist-leaning and capitalist-leaning EAs, the policies you probably want are essentially the same, work together and make something happen, and
2) Look, EAs, I can write nearly identical posts with titles that will make you assume the posts are at odds - challenge your assumptions. Realize the power lang... (read more)

kbog · 5y · 4
Not 2) haha. Yes, they are very similar. There is a subtext that specific policies are more important than sweeping philosophies, as you say in 1). But I also genuinely think they are good policies and it is useful to have them written down, in ways that can appeal to different kinds of audiences (outside EA maybe).
elle · 5y · 23

I like your encouragement to create more art. However, I noticed cringing at some of your ideas in the appendix. I worry that they would end up being "poorly executed cultural artefacts [that] may put EA into disrepute" as you put it.

I do not feel capable of explaining exactly where the cringe reaction is coming from, but a few examples:

I do not like the idea in Beautopia of equating physical appearance with moral goodness, given that a) it is already an issue that people assume positive personality traits when they see physically attractive peo... (read more)