Of course feel free not to share, but I'd be curious to see a photo of the inside of the office! Partly I am curious because I imagine how nice a place it is (and e.g. whether there is a fridge) could make a big difference re: how much people tend to hang out there.
Relatedly: Heuristics That Almost Always Work
Concept-shaped holes are such a useful concept; from what I can tell, it seems like a huge amount of miscommunication happens because people have somewhat different understandings of the same word.
I think I interpret people's advice and opinions pretty differently now that I'm aware of concept-shaped holes.
It seems possible to me that you have a concept-shaped hole for the concept "bad people".
I have found it useful and interesting to build a habit of noticing an intuition and then thinking of arguments for why that intuition is worth listening to. It has caused me to find some pretty interesting dynamics that it seems like naive consequentialists/utilitarians aren't aware of.

One concern about this is that you might be able to find arguments for any conclusion that you seek out arguments for; the counter to this is that your intuition doesn't give random answers, and is actually fairly reliably correct, hence explicit arguments that explain your...
I'm noticing two ways of interpreting/reacting to this argument:
Makes sense - thanks Asya!
This is good to know - thank you for making this connection!
Notably (and I think I may feel more strongly about this than others in the space), I’m generally less excited about organizers who are ambitious or entrepreneurial but less truth-seeking, or who have a weak understanding of the content that their group covers.
Do you feel that you'd rather have the existing population of community builders be a bit more ambitious or a bit more truth-seeking? Or: if you could suggest improvement on only one of these virtues to community builders, which would you choose? ETA: Does the answer feel obvious to you, or is it a close call?
"Interesting" is subjective, but there can still be areas that a population tends to find interesting. I find David's proposals of what the EA population tends to find interesting plausible, though ultimately the question could be resolved with a survey.
Thanks for this! I enjoyed the refresher + summaries of some of the posts I hadn't yet read.
I'm not familiar with the opposite type of circle format.
Me neither, really - I meant to refer to a hypothetical activity. And thanks for the examples!
Does anyone have an idea why doom circles have been so successful compared to the opposite type of circle where people say nice things about each other that they wouldn't normally say?
Relatedly, I have a hypothesis that the EA/rationalist communities are making mistakes that they wouldn't make if they had more psychology expertise. For instance, my impression is that many versions of positivity measurably improve performance/productivity and many versions of negativity worsen performance (though these impressions aren't based on much research), and I suspect if people knew this, they would be more interested in trying the opposite of a doom circle.
Ah I see — thanks!
Is it correct that this assumes the marginal cost of supporting a user doesn’t change with the firm’s scale? It seems like some amount of the 50x difference between the EAF and Reddit could be explained by the EAF having fewer benefits of scale, since it is a smaller forum (though should this be counterbalanced by it being a higher-quality forum?)
Continuing the discussion since I am pretty curious how significant the 50x is, in case there is a powerful predictive model here.
Could someone show the economic line of reasoning one would use to predict ex ante from the Nordhaus research that the Forum would have 50x more employees per user? (FYI, I might end up working it out myself.)
Maybe someone should user-interview or survey Oregonians to see what made people not want to vote for Carrick.
Here are some impressions of him from various influential Oregonians. No idea how these six were chosen from the "more than a dozen" originally interviewed.
Just some random Twitter comments I've seen:
"I received a flyer for Flynn multiple times a week for months. Made me 100% sure I wasn’t going to vote for him."
"Great! I voted for her. There IS a point where you can run too many commercials. Was turned off by the non stop deluge of ads from the Flynn PAC. A little more restraint might have tricked people. Way too obvious of an attempt to buy a seat."
"I guess you really can't buy anything with crypto."
"Crypto bro goes down just like crypto did" (https://twitter.com/Redistrict/status/1526765055391432704)
No worries! Seemed mostly coherent to me, and please feel free to respond later. I think the thing I am hung up on here is what counts as "happiness" and "suffering" in this framing.
Could you try to clarify what you mean by the AI (or an agent in general) being "better off?"
I’m actually a bit confused here, because I'm not settled on a meta-ethics: why isn't it the case that a large part of human values is about satisfying the preferences of moral patients, and human values consider any or most advanced AIs as non-trivial moral patients?

I don't put much weight on this currently, but I haven't ruled it out.
If you had to do it yourself, how would you go about a back-of-the-envelope calculation for estimating the impact of a Flynn donation? I'm asking because I suspect that other people in the community won't actually do this, and you are maybe one of the best-positioned people to do it, given your interest.
Yeah, I had to look this up
e.g. from P(X) = 0.8, I may think that in a week I will - most of the time - have notched this forecast slightly upwards, and less of the time have notched it further downwards, and this averages out to E[P(X) [next week]] = 0.8.
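A tiny numeric sketch of this (the update sizes and frequencies below are invented for illustration, not from the comment): frequent small upward nudges can balance rarer, larger downward notches, so the expected future forecast equals today's forecast.

```python
# Illustrative numbers only: a well-calibrated forecast behaves like a
# martingale, so the expected next-week forecast equals today's forecast.
p_today = 0.8
# (probability of this update path, forecast after the update)
updates = [(2/3, 0.85),  # most of the time: notched slightly upwards
           (1/3, 0.70)]  # less often: notched further downwards
expected_next_week = sum(prob * p for prob, p in updates)
assert abs(expected_next_week - p_today) < 1e-9  # averages out to 0.8
```

The small upward moves are twice as likely as the downward one, but the downward move is twice as large, which is exactly what keeps the expectation pinned at 0.8.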
I wish you had said this in the BLUF -- it is the key insight, and the one that made me go from "Greg sounds totally wrong" to "Ohhh, he is totally right".

ETA: you did actually say this, but you said it in less simple language, which is why I missed it.
I really like your drawings in section 2 -- they convey the idea surprisingly succinctly.
Note to self: I should really, really try to avoid speaking like this when facilitating in the EA intro fellowship
The entire time I've been thinking about this, I've been thinking of utility curves as logarithmic, so you don't have to sell me on that. I think my original comment here is another way of understanding why tractability perhaps doesn't vary much between problems, not within a problem.
Ah, I see now that within a problem, tractability shouldn't change as the problem gets less neglected if you assume that u(r) is logarithmic, since then the derivative is like 1/R, making tractability like 1/u_total
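Spelling that out as a minimal derivation, using the tractability expression u'(R) * R / u_total from elsewhere in this thread, and taking u(r) = a ln(r) as an assumed illustrative form:

```latex
u(r) = a \ln r
\;\Rightarrow\; u'(R) = \frac{a}{R}
\;\Rightarrow\; \text{Tractability} = \frac{u'(R)\, R}{u_{\text{total}}} = \frac{a}{u_{\text{total}}}
```

which is independent of R, matching the claim that tractability stays roughly constant as the problem becomes less neglected.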
But why is tractability roughly constant with neglectedness in practice? Equivalently, why are there logarithmic returns to many problems?
I don't see why logarithmic utility iff tractability doesn't change with neglectedness.
There was an inference there -- you need tractability to balance with the neglectedness to add up to equal cost-effectiveness
I don't know if I understand why tractability doesn't vary much. It seems like it should be able to vary just as much as cost-effectiveness can vary. For example, imagine two problems with the same cost-effectiveness, the same importance, but one problem has 1000x fewer resources invested in it. Then the tractability of that problem should be 1000x higher [ETA: so that the cost-effectiveness can still be the same, even given the difference in neglectedness.]

Another example: suppose an AI safety researcher solved AI alignment after 20 years of re...
When I formalize "tractability" it turns out to be directly related to neglectedness. If R is the number of resources invested in a problem currently, and u(r) is the difference in world utility from investing 0 v.s. r resources into the problem, and u_total is u(r) once the problem is solved, then tractability turns out to be:

Tractability = u'(R) * R / u_total
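A quick numeric sketch of this formula (the functional form u(r) = a * ln(r) and the constants are illustrative assumptions, not from the thread): with logarithmic returns, the expression u'(R) * R / u_total comes out the same no matter how many resources are already invested.

```python
import math

# Assumed illustrative form: logarithmic returns to resources invested.
a, u_total = 2.0, 100.0  # invented constants for illustration

def u(r):
    return a * math.log(r)

def tractability(R, eps=1e-6):
    # Numeric estimate of u'(R), then the tractability expression above.
    du = (u(R * (1 + eps)) - u(R * (1 - eps))) / (2 * R * eps)
    return du * R / u_total

print(tractability(10))      # ~0.02
print(tractability(10_000))  # ~0.02, despite 1000x more resources invested
```

Since u'(R) = a/R for this form, the R in the numerator cancels, which is why tractability ends up depending only on a/u_total.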
So I'm not sure I really understand yet why tractability wouldn't change much with neglectedness. I have a preliminary understanding, though, which I'm writing up in another comment.
each additional doubling will solve a similar fraction of the problem, in expectation
Aren't you assuming the conclusion here?
As a note, it's only ever the case that something is good "in expectation" from a particular person's point of view or from a particular epistemic state. It's possible for someone to disagree with me because they know different facts about the world, and so for instance think that different futures are more or less likely. In other words, the expected value referred to by the term "expectation" is subtly an expected value conditioned on a particular set of beliefs.
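A toy sketch of this point (all utilities and probabilities below are invented for illustration): the same action can have positive expected value under one set of beliefs and negative expected value under another.

```python
# Toy illustration: "good in expectation" is relative to an epistemic state.
outcomes = {"future_A": 10.0, "future_B": -5.0}  # utility of each future

def expected_value(beliefs):
    # beliefs: mapping from future -> subjective probability of that future
    return sum(p * outcomes[f] for f, p in beliefs.items())

my_beliefs = {"future_A": 0.7, "future_B": 0.3}
their_beliefs = {"future_A": 0.2, "future_B": 0.8}

print(expected_value(my_beliefs))     # 5.5: looks good to me
print(expected_value(their_beliefs))  # -2.0: looks bad to them
```

The disagreement comes entirely from the differing probability assignments, not from the action or the outcomes themselves.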
I disagree with your reasons for downvoting the post, since I generally judge posts on their content, but I do appreciate your transparency here and found it interesting to see that you disliked a post for these reasons. I’m tempted to upvote your comment, though that feels weird since I disagree with it.
Because of Evan's comment, I think that the signaling consideration here is another example of the following pattern:
Someone suggests we stop (or limit) doing X because of what we might signal by doing X, even though we think X is correct. But this person is somewhat blind to the negative signaling effects of not living up to our own stated ideals (i.e. having integrity). It turns out that some more rationalist-type people report that they would be put off by this lack of honesty and integrity (speculation: perhaps because these types have an automatic nor...
Maybe someone should compile a bunch of exercises that train the muscle of formalizing intuitions
FWIW, Chris didn't say what you seem to be claiming he said.
Oh, interesting, thanks for this.
I think before assuming you made a mistake, you could add the question "if someone did that thing to me, could I easily forgive them?" If the answer is yes, then maybe don't sweat it, because we generally think about ourselves way more than others think about us.
I really like this advice, and I just realized I use this trick sometimes.
I might make it clearer that your bullet points are what you recommend people not do. I was skimming and at first was close to taking away the opposite of what you intended.
I might add something to the tune of "have them lead the conversation by letting their questions and vague feelings do the steering"
Thank you Peter! Definitely taking a look at the books and resources. Also, I now link your comment in the tldr of the post :)
I have seen little evidence that FTX Future Fund (FFF) or EA Infrastructure Fund (EAIF) have lowered their standards for mainline grants
FFF is new, so that shouldn't be a surprise.
I’d be curious to see how many people each of these companies employs + the % of employees who are EAs.
[First comment was written without reading the rest of your comment. This is in reply to the rest.]
Re: whether a company adds intrinsic value, I agree, it isn't necessarily counterfactually good, but also that's sort of the point of a heuristic -- most likely you can think of cases where all of these heuristics fail; by prescribing a heuristic, I don't mean to say the heuristic always holds, instead just that using the heuristic v.s. not happens to, on average, lead to better outcomes.Serial entrepreneur seems to also be a decent heuristic.
I haven't thought about it deeply, but the main thing I was thinking here was that I think founders get the plurality of credit for the output of a company, partly because I just intuitively believe this, and partly because, apparently, not many people found things. This is an empirical claim, and it could be false e.g. in worlds where everyone tries to be a founder, and companies never grow, but my guess is that the EA community is not in that world. So this heuristic tracks (to some degree) high counterfactual impact/neglectedness.
This heuristic is meant to be a way of finding good opportunities to learn (which is a way to invest in yourself to improve your future impact) and it’s not meant to be perfect.
I'm still not very convinced of your original point, though -- when I simulate myself becoming non-vegan, I don't imagine this counterfactually causing me to lose my concern for animals (nor does it seem like it would harm my epistemics? Though I'm not sure I trust my inner sim here. It does seem like, if anything, going non-vegan would help my epistemics, since, in my case, being vegan wastes enough time that it is harmful to future generations for me to be vegan, and by continuing to be vegan I am choosing to ignore that fact).
it would make me deeply sad and upset
That makes sense, yeah. And I could see this being costly enough that it's best to continue avoiding meat.
No -- and when I wrote it, I meant to direct it at anyone involved in the comments discussion. I probably should have made that clearer in the comment. Also, I probably should have read all of the comments before commenting (e.g. are you referring to some comment thread that it seemed like I was replying to?), but am time-limited.

Also, for more context, I wrote this comment because I felt concerned about bottom-line/motivated reasoning causing people to apply the sorts of arguments for action that they don't apply elsewhere to argue for veganism, and I felt ...