How long ago did you attend your CFAR workshop? My sense is that the content CFAR teaches and who the teachers are have changed a lot over the years. Maybe they've gotten better (or worse?) about teaching the "true form."
(Or maybe you were saying you also didn't get the "true form" even in the more recent AIRCS workshops?)
So, to clarify: this program is for people who are already mostly sure they want to work on AI Safety? That is, a person who is excited about ML, and would maaaaybe be interested in working on safety-related topics, if they found those topics interesting, is not who you are targeting?
If you feel comfortable sharing: who are the people whose judgment on this topic you think is better?
Yeah, I am sympathetic to that. I am curious how you decide where to draw the line here. For instance, you were willing to express judgment of QRI elsewhere in the comments.
Would it be possible to briefly list the people or orgs whose work you *most* respect? Or would the omissions be too obvious?
I sometimes wish there were good ways to more broadly disseminate negative judgments or critiques of orgs/people from thoughtful and well-connected people. But, understandably, people are sensitive to that kind of thing, and it can end up eating a lot of time and weakening relationships.
What are your regular go-to sources of information online? That is, are there certain blogs you religiously read? Vox? Do you follow the EA Forum or LessWrong? Do you mostly read papers that you find through some search algorithm you previously set up? Etc.
4) You seem like you have had a natural strong critical thinking streak since you were quite young (e.g., you talk about thinking that various mainstream ideas were dumb). Any unique advice for how to develop this skill in people who do not have it naturally?
For the record, I think that I had mediocre judgement in the past and did not reliably believe true things, and I sometimes made really foolish decisions. I think my experience is mostly that I felt extremely alienated from society, which meant that I looked more critically at many common beliefs than most people do. This meant I was weird in lots of ways, many of which were bad and some of which were good. And in some cases this meant that I believed some weird things that feel like easy wins, e.g. by thinking that people were absurdly callous about cau...
3) I've seen several places where you criticize fellow EAs for their lack of engagement or critical thinking. For example, three years ago, you wrote:
I also have criticisms about EAs being overconfident and acting as if they know way more than they do about a wide variety of things, but my criticisms are very different from [Holden's criticisms]. For example, I’m super unimpressed that so many EAs didn’t know that GiveWell thinks that deworming has a relatively low probability of very high impact. I’m also unimpressed by how...
I no longer feel annoyed about this. I'm not quite sure why. Part of it is probably that I'm a lot more sympathetic when EAs don't know things about AI safety than about global poverty, because learning about AI safety seems much harder, and I think I hear relatively more discussion of AI safety now than I did three years ago.
One hypothesis is that 80000 Hours has made various EA ideas more accessible and well-known within the community, via their podcast and maybe their articles.
2) Somewhat relatedly, there seems to be a lot of angst within EA related to intelligence / power / funding / jobs / respect / social status / etc., and I am curious if you have any interesting thoughts about that.
I feel really sad about it. I think EA should probably have a communication strategy where we say relatively simple messages like "we think talented college graduates should do X and Y", but this causes collateral damage where people who don't succeed at doing X and Y feel bad about themselves. I don't know what to do about this, except to say that I have the utmost respect in my heart for people who really want to do the right thing and are trying their best.
I don't think I have very coherent or reasoned thoughts on how we should handle this, and I try to defer to people who I trust whose judgement on these topics I think is better.
I used to expect 80,000 Hours to tell me how to have an impactful career. Recently, I've started thinking it's basically my own personal responsibility to figure it out. I think this shift has made me much happier and much more likely to have an impactful career.
80,000 Hours targets the most professionally successful people in the world. That's probably the right idea for them - giving good career advice takes a lot of time and effort, and they can't help everyone, so they should focus on the people with the most career potential.
But, ...
"Do you have any advice for people who want to be involved in EA, but do not think that they are smart or committed enough to be engaging at your level?"--I just want to say that I wouldn't have phrased it quite like that.
One role that I've been excited about recently is making local groups be good. I think that having better local EA communities might be really helpful for outreach, and lots of different people can do great work with this.
Reading through some of your blog posts and other writing, I get the impression that you put a lot of weight on how smart people seem to you. You often describe people or ideas as "smart" or "dumb," and you seem interested in finding the smartest people to talk to or bring into EA.
I am feeling a bit confused by my reactions. I think I am both a) excited by the idea of getting the "smart people" together so that they can help each other think through complicated topics and make more good things happen, but b) I feel a bit sad a...
In "Ways I've changed my mind about effective altruism over the past year" you write:
I feel very concerned by the relative lack of good quality discussion and analysis of EA topics. I feel like everyone who isn’t working at an EA org is at a massive disadvantage when it comes to understanding important EA topics, and only a few people manage to be motivated enough to have a really broad and deep understanding of EA without either working at an EA org or being really close to someone who does.
I am not sure if you still feel this way, b...
I still feel this way, and I've been trying to think of ways to reduce this problem. I think the AIRCS workshops help a bit, my SSC trip was helpful, and EA residencies might be helpful too.
A few helpful conversations that I've had recently with people who are strongly connected to the professional EA community, which I think would be harder to have without information gained from these strong connections:
Is there any public information on the AI Safety Retraining Program other than the MIRI Summer Update and the Open Phil grant page?
I am wondering:
1) Who should apply? How do they apply?
2) Have there been any results yet? I see two grants were given as of Sep 1st; has either of those been completed? If so, what were the outcomes?
You write:
"I think that the field of AI safety is growing in an awkward way. Lots of people are trying to work on it, and many of these people have pretty different pictures of what the problem is and how we should try to work on it. How should we handle this? How should you try to work in a field when at least half the "experts" are going to think that your research direction is misguided?"
What are your preliminary thoughts on the answers to these questions?
In your opinion, what are the most helpful organizations or groups working on AI safety right now? And why?
In parallel: what are the least helpful organizations or groups working on (or claiming to work on) AI safety right now? And why?
This article reminded me that I sometimes wish for a competitor to 80K. 80K tries to do a lot: research and write good career advice, coach people, create advanced and accessible EA-related content (via the podcast), network with and connect extremely high-impact people... It is not obvious to me that these activities should all be grouped together. Maybe they should. They have some synergies.
But, for instance, my impression is that it is difficult to receive coaching from 80K, unless you are especially impressive. Perhaps there could be a career-coaching...
I broadly agree with this article, but some part of me felt... uncomfortable?... with the topic. So I tried to give voice to that part of me. Very uncertain about this, and it is a bit confusing.
--
I think we often build up pictures/stories of ourselves based on our regular habits/actions. If I exercise every day, I start to think of myself as athletic/healthy/strong. If I wipe the counters in the kitchen, it contributes to my sense of responsibility/care-taking/cleanliness. If I do X that I believe is wasteful (i.e. common opinion says that X is wast...
Believing that my time is really valuable can lead to me making more wasteful decisions. Decisions like: "It is totally fine for me to buy all these expensive ergonomic keyboards simultaneously on Amazon and try them out, then throw away whichever ones do not work for me." Or "I will buy this expensive exercise equipment on a whim to test out. Even if I only use it once and end up trashing it a year later, it does not matter."
...
The thinking in the examples above worries me. People are bad at reasoning about when to make exceptions to r...
I appreciate the other comments about how the model does not take into account the base impact of the charities. Also:
1) Do trustees take on some form of legal responsibility for charities? If so, it could be risky to get involved with a charity you know little about.
2) Do you know how many other trustees are typically already involved? If you are one of three, I could see you having an easier time influencing a charity than if you are, say, one of seven.
Personal opinion: the circular layout seems more useful. I like that it more clearly demonstrates a) entities that are connected to only one other entity in the graph (example: Inst. Phil. Research is only connected to BERI, Thiel is only connected to MIRI), and b) how many arrows are going into each node (example: it's easier to see that MIRI has the widest range of supporters in this group, followed by CEA and CFAR).
Does this mean you no longer endorse the original statement you made ("there is little evidence of benefit from schooling")?
I'm feeling confused... I basically agreed with Khorton's skepticism about that original claim, and now it sounds like you agree with Khorton too. It seems like you, in fact, believe something quite different from the original claim; your actual belief is something more like: "for some children, the benefits of schooling will not outweigh the torturous experience of attending school." But it doesn't ...
I am confused about what exactly you are trying to communicate with this post and its partner (https://forum.effectivealtruism.org/posts/FDczXfT4xetcRRWtm/an-effective-altruist-plan-for-socialism). My sense is that you are saying something like:
1) Look, socialist-leaning and capitalist-leaning EAs, the policies you probably want are essentially the same, work together and make something happen, and
2) Look, EAs, I can write nearly identical posts with titles that will make you assume the posts are at odds - challenge your assumptions. Realize the power lang...
I like your encouragement to create more art. However, I noticed myself cringing at some of your ideas in the appendix. I worry that they would end up being "poorly executed cultural artefacts [that] may put EA into disrepute," as you put it.
I do not feel capable of explaining exactly where the cringe reaction is coming from, but a few examples:
I do not like the idea in Beautopia of equating physical appearance with moral goodness, given that a) it is already an issue that people assume positive personality traits when they see physically attractive peo...
"...but do not think that they are smart or committed enough to be engaging at your level?" was intended to be from a generic insecure (or realistic) EA's perspective, not yours. Sorry for my confusing phrasing.