All of Jack R's Comments + Replies

Yale EA got an office. How did it go?

Of course, feel free not to share, but I'd be curious to see a photo of the inside of the office! Partly I'm curious because I imagine that how nice a place it is (e.g. whether there is a fridge) could make a big difference in how much people tend to hang out there.

4M. Thaddeus Burtell13h
Here's a photo from one of our discussion dinners: On the left, we had a mini fridge, a table with various snacks, a coffee maker, and a water container.
6ThomasWoodside1d
There was a fridge and the snacks were laid out very nicely (though I don't have any pictures, maybe somebody else can share). I personally thought that our room was a pretty nice space, though the hallway/entryway was weird and kind of depressing. The biggest thing that made it nice for me was the view:
Jack R's Shortform

Concept-shaped holes are such a useful concept; from what I can tell, it seems like a huge amount of miscommunication happens because people have somewhat different understandings of the same word.

I think I interpret people's advice and opinions pretty differently now that I'm aware of concept-shaped holes.

0Benjamin Start6d
Yes. This is why language is so difficult. Then there's the added layer of propaganda. It can make two people who "speak the same language" be completely unable to understand each other.
Are "Bad People" Really Unwelcome in EA?

It seems possible to me that you have a concept-shaped hole for the concept "bad people"

Jack R's Shortform

I have found it useful and interesting to build a habit of noticing an intuition and then thinking of arguments for why that intuition is worth listening to. It has caused me to find some pretty interesting dynamics that it seems like naive consequentialists/utilitarians aren't aware of.

One concern about this is that you might be able to find arguments for any conclusion that you seek out arguments for; the counter to this is that your intuition doesn't give random answers, and is actually fairly reliably correct, hence explicit arguments that explain your... (read more)

0Benjamin Start6d
Is there a context for the type of things you are using your intuition for?
Consequentialists (in society) should self-modify to have side constraints

I'm noticing two ways of interpreting/reacting to this argument:

  • "This is incredibly off-putting; these consequentialists aren't unlike charismatic sociopaths who will try to match my behavior to achieve hidden goals that I find abhorrent" (see e.g. Andy Bernard from The Office; currently, this is the interpretation that feels most salient to me)
  • "This is like a value handshake between consequentialists and the rest of society: consequentialists may have different values than many other people (perhaps really only at the tail ends of morality), but it's wort
... (read more)
Consequentialists (in society) should self-modify to have side constraints

This is good to know - thank you for making this connection!

[AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships

Notably, (and I think I may feel more strongly about this than others in the space), I’m generally less excited about organizers who are ambitious or entrepreneurial, but less truth-seeking, or have a weak understanding of the content that their group covers.

Do you feel that you'd rather have the existing population of community builders be a bit more ambitious or a bit more truth-seeking? Or: if you could suggest improvement on only one of these virtues to community builders, which would you choose? ETA: Does the answer feel obvious to you, or is it a close call?

3abergal10d
(The cop-out answer is “I would like the truth-seeking organizers to be more ambitious, and the ambitious organizers to be more truth-seeking”.) If I had to choose one, I think I’d go with truth-seeking. It doesn’t feel very close to me, especially among existing university group effective altruism-related organizers (maybe Claire disagrees), largely because I think there’s already been a big recent push towards ambition there, so I think people are generally already thinking pretty ambitiously. I feel differently about e.g. rationality local group organizers; I wish they would be more ambitious.
Interesting vs. Important Work - A Place EA is Prioritizing Poorly

"Interesting" is subjective, but there can still be areas that a population tends to find interesting. I find David's proposals of what the EA population tends to find interesting plausible, though ultimately the question could be resolved with a survey

A summary of every "Highlights from the Sequences" post

Thanks for this! I enjoyed the refresher + summaries of some of the posts I hadn't yet read.

Doom Circles

I'm not familiar with the opposite type of circle format

Me neither really - I meant to refer to a hypothetical activity.

And thanks for the examples!

Doom Circles

Does anyone have an idea why doom circles have been so successful compared to the opposite type of circle where people say nice things about each other that they wouldn't normally say?

Relatedly, I have a hypothesis that the EA/rationalist communities are making mistakes that they wouldn't make if they had more psychology expertise. For instance, my impression is that many versions of positivity measurably improve performance/productivity and many versions of negativity worsen performance (though these impressions aren't based on much research), and I suspect if people knew this, they would be more interested in trying the opposite of a doom circle.

4Kirsten1mo
As a teacher, I've generally found it to be the case that specific positive feedback ("keep doing this!") is the most useful way of improving someone's performance, followed by specific advice ("you could achieve X if you tried Y", "why not experiment with Z and see if it helps?").
3Amy Labenz1mo
I'm not familiar with the opposite type of circle format. I have a few events coming up over the next month, so might not get back around to this, but I'd like to put more thought into a format like this. A couple of things that I have done come to mind:
  • At a recent retreat, a colleague and I ran something like a doom circle followed by "gratitude/excitement" circles, and I quite liked it.
  • In the "gentle" doom circle I described above, we did something like an even split of doom followed by saying nice things. I found the nice part really helpful too, because I had blind spots about positive things that others in the group could see more easily.
Another thing that comes to mind is a quote from the Manager's Handbook: "It's downright criminal to hold back positive feedback from people. Don't be afraid to praise even tiny things. Remember that when you give negative feedback you're generally picking up on tiny things."
Thanks for raising this. I'll be curious to hear if other people have done things in this direction.
You should join an EA organization with too many employees

Thanks!

Is it correct that this assumes that the marginal cost of supporting a user doesn’t change with the firm’s scale? It seems like some amount of the 50x difference between the EAF and Reddit could be explained by the EAF having fewer benefits of scale, since it is a smaller forum (though should this be counterbalanced by it being a higher-quality forum?).

Continuing the discussion since I am pretty curious how significant the 50x is, in case there is a powerful predictive model here

[This comment is no longer endorsed by its author]
You should join an EA organization with too many employees

Could someone show the economic line of reasoning one would use to predict ex ante from the Nordhaus research that the Forum would have 50x more employees per user? (FYI, I might end up working it out myself.)

3Ben_West3mo
My model was:
  • It's kind of unclear what a marginal unit is for a software company like Reddit, but let's just say it's one user of their software.
  • Profit maximizing firms produce until marginal cost = marginal benefit.
  • Marginal benefit is 50 times higher.
  • Therefore firms will accept 50 times higher marginal cost.
  • Let's suppose that the percentage of this additional marginal cost which goes to labor remains unchanged, resulting in 50 times more employees per user.
30 seconds of thought can identify a bunch of problems with this model, but I think the underlying insight that firms would hire more labor is broadly correct. I would be interested to hear if people think this is wildly off though!
3Thomas Kwa3mo
You can get it from log returns to labor. If impact is k*log(labor) for for-profit firms and 50*k*log(labor) for altruistic firms, the altruistic firms will buy 50x the labor before returns diminish to the same level. I'm not sure this is the right model for companies though.
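A minimal way to spell out that step, assuming both firms hire at a common wage w (notation mine, not from the comment):

```latex
% Sketch: log returns to labor plus a common wage w imply ~50x labor.
\[
  u_{\text{profit}}(L) = k\log L, \qquad u_{\text{altruist}}(L) = 50\,k\log L .
\]
% Each firm hires until the marginal benefit of one more unit of labor equals the wage:
\[
  \frac{k}{L_{\text{profit}}} = w \;\Rightarrow\; L_{\text{profit}} = \frac{k}{w},
  \qquad
  \frac{50k}{L_{\text{altruist}}} = w \;\Rightarrow\; L_{\text{altruist}} = \frac{50k}{w} = 50\,L_{\text{profit}} .
\]
```

Whether that also translates into 50x the employees per user then rests on Ben's further assumption that the labor share of marginal cost stays the same.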
Some potential lessons from Carrick’s Congressional bid

Maybe someone should user-interview or survey Oregonians to see what made people not want to vote for Carrick

Here are some impressions of him from various influential Oregonians. No idea how these six were chosen from the "more than a dozen" originally interviewed.

Just some random Twitter comments I've seen: 

"I received a flyer for Flynn multiple times a week for months. Made me 100% sure I wasn’t going to vote for him."

"Great! I voted for her. There IS a point where you can run too many commercials. Was turned off by the non stop deluge of ads from the Flynn PAC. A little more restraint might have tricked people. Way too obvious of an attempt to buy a seat."

"I guess you really can't buy anything with crypto."


"Crypto bro goes down just like crypto did"

(https://twitter.com/Redistrict/status/1526765055391432704)
 

No worries! Seemed mostly coherent to me, and please feel free to respond later.

I think the thing I am hung up on here is what counts as "happiness" and "suffering" in this framing.

Could you try to clarify what you mean by the AI (or an agent in general) being "better off?"

1Zach Stein-Perlman3mo
I don't know much metaethics jargon, so I'll just give an example. I believe that moral goodness (or choice-worthiness, if you prefer) is proportional to happiness minus suffering. I believe that happiness and suffering are caused by certain physical processes. A system could achieve its goals (that is, do what we would colloquially describe as achieving goals, although I'm not sure how to formalize "goals") without being happier. For other theories of wellbeing, a system could generally achieve its goals without meeting those wellbeing criteria. (Currently exhausted, apologies for incoherence.)

I’m actually a bit confused here, because I'm not settled on a meta-ethics: why isn't it the case that a large part of human values is about satisfying the preferences of moral patients, and that human values consider any or most advanced AIs to be non-trivial moral patients?

I don't put much weight on this currently, but I haven't ruled it out.

7Zach Stein-Perlman3mo
For humans, preference-satisfaction is generally a good proxy for life-quality-improvement. For AI (or arbitrary agents), if we call whatever they seek to maximize "preferences" (which might be misleading in that for strict definitions of "preferences" they might not have preferences), it does not automatically follow that satisfying those preferences makes them better off in any way. The paperclipper doesn't make paperclips because it loves paperclips. It just makes paperclips because that's what it was programmed or trained to do.
Choosing causes re Flynn for Oregon

If you had to do it yourself, how would you go about a back-of-the-envelope calculation for estimating the impact of a Flynn donation?

Asking this question because I suspect that other people in the community won't actually do this, and because you seem interested in it, you are maybe one of the best-positioned people to do it.

3Zach Stein-Perlman3mo
I'm not sure what my estimate would be -- probably ballpark of a marginal $1 buying 1/500M of a counterfactual election in a race with as much money as this one and where the candidate was a dark horse, more effective in races with less money already or where the race was closer. To actually make an estimate I'd feel comfortable with, I'd have to look for studies on the effect of money on elections; if studies gave a consistent picture, there wouldn't really be any more work to do.
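To make that concrete, here is one way the ballpark could be turned into a back-of-the-envelope calculation. The $500M-per-counterfactual-election figure is Zach's; the values of a win below are arbitrary placeholders, not estimates.

```python
def ev_per_dollar(value_of_win: float, dollars_per_election: float = 500e6) -> float:
    """Expected value of a marginal donated dollar, under the ballpark model that
    ~$500M of marginal money 'buys' one counterfactual election in a race like this."""
    return value_of_win / dollars_per_election

# Hypothetical inputs: treat the value of a win as an undetermined parameter V.
for value_of_win in (1e8, 1e9, 1e10):  # illustrative placeholders, not estimates
    print(f"V = ${value_of_win:,.0f} -> ~${ev_per_dollar(value_of_win):.2f} of value per $1 donated")
```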
Rational predictions often update predictably*

e.g. from P(X) = 0.8, I may think in a week I will - most of the time - have notched this forecast slightly upwards, but less of the time notching it further downwards, and this averages out to E[P(X) [next week]] = 0.8.

I wish you had said this in the BLUF -- it is the key insight, and the one that made me go from "Greg sounds totally wrong" to "Ohhh, he is totally right"

ETA: you did actually say this, but you said it in less simple language, which is why I missed it

3Charles He3mo
I’m not sure but my guess of the argument of the OP is that: Let’s say you are an unbiased forecaster. You get information as time passes. When you start with a 60% prediction that event X will happen, on average, the evidence you will receive will cause you to correctly revise your prediction towards 100%. <Eyes emoji>
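For what it's worth, here is a toy simulation (my own construction, not from the post) of the point that updates can be predictable in direction but not in expectation: a forecast that starts at 0.8 moves up slightly in most worlds and drops by more in a few, while the average post-update forecast stays at 0.8.

```python
import random

def update(prior: float, signal: bool, accuracy: float = 0.9) -> float:
    """Bayesian update of P(X) after a binary signal that matches the truth
    with probability `accuracy`."""
    if signal:  # evidence for X
        num = prior * accuracy
        den = prior * accuracy + (1 - prior) * (1 - accuracy)
    else:       # evidence against X
        num = prior * (1 - accuracy)
        den = prior * (1 - accuracy) + (1 - prior) * accuracy
    return num / den

random.seed(0)
prior, accuracy, trials = 0.8, 0.9, 100_000
went_up, total = 0, 0.0
for _ in range(trials):
    x_true = random.random() < prior              # whether X actually holds
    correct = random.random() < accuracy
    signal = x_true if correct else (not x_true)  # noisy observation of X
    p = update(prior, signal, accuracy)
    went_up += p > prior
    total += p

print(f"share of updates that moved upward: {went_up / trials:.2f}")  # ~0.74
print(f"average forecast after updating:    {total / trials:.3f}")    # ~0.800
```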
Against “longtermist” as an identity

I really like your drawings in section 2 -- they convey the idea surprisingly succinctly.

[8] Meditations on Moloch (Alexander, 2014)
  • Note to self: I should really, really try to avoid speaking like this when facilitating in the EA intro fellowship

Hah!

Most problems fall within a 100x tractability range (under certain assumptions)

The entire time I've been thinking about this, I've been thinking of utility curves as logarithmic, so you don't have to sell me on that. I think my original comment here is another way of understanding why tractability perhaps doesn't vary much between problems, not within a problem.

Most problems fall within a 100x tractability range (under certain assumptions)

Ah, I see now that within a problem, tractability shouldn't change as the problem gets less neglected if you assume that u(r) is logarithmic, since then the derivative is like 1/R, making tractability like 1/u_total
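A minimal sketch of that derivative argument, assuming u(r) = k*log(1 + r/r_0) for some constant r_0 > 0 (notation mine):

```latex
% Sketch: under logarithmic returns, tractability is ~independent of R.
\[
  u(r) = k\log\!\left(1 + \tfrac{r}{r_0}\right)
  \;\Rightarrow\;
  u'(R) = \frac{k}{r_0 + R},
\]
\[
  \text{Tractability}
  \;=\; \frac{u'(R)\,R}{u_{\text{total}}}
  \;=\; \frac{k}{u_{\text{total}}}\cdot\frac{R}{r_0 + R}
  \;\approx\; \frac{k}{u_{\text{total}}}
  \quad\text{once } R \gg r_0 .
\]
```

Under the logarithmic assumption, the neglectedness term essentially drops out of the tractability expression.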

Most problems fall within a 100x tractability range (under certain assumptions)

But why is tractability roughly constant with neglectedness in practice? Equivalently, why are there logarithmic returns to many problems?

I don't see why logarithmic utility holds if and only if tractability doesn't change with neglectedness.

[This comment is no longer endorsed by its author]
Most problems fall within a 100x tractability range (under certain assumptions)

There was an inference there -- you need the tractability to balance with the neglectedness so that the cost-effectiveness comes out equal.

Most problems fall within a 100x tractability range (under certain assumptions)

I don't know if I understand why tractability doesn't vary much. It seems like it should be able to vary just as much as cost-effectiveness can vary. 

For example, imagine two problems with the same cost-effectiveness, the same importance, but one problem has 1000x fewer resources invested in it. Then the tractability of that problem should be 1000x higher [ETA: so that the cost-effectiveness can still be the same, even given the difference in neglectedness.]

Another example:  suppose an AI safety researcher solved AI alignment after 20 years of re... (read more)

2Thomas Kwa3mo
Problems vary on three axes: u'(R), R, u_total. You're expressing this in the basis u'(R), R, u_total. The ITN framework uses the basis I, T, N = u_total, u'(R) * R * 1/u_total, 1/R. The basis is arbitrary: we could just as easily use some crazy basis like X, Y, Z = u_total^2, R^5, sqrt(u'(R)). But we want to use a basis that's useful in practice, which means variables with intuitive meanings, hence ITN.
But why is tractability roughly constant with neglectedness in practice? Equivalently, why are there logarithmic returns to many problems? I don't think it's related to your (1) or (2) because those are about comparing different problems, whereas the mystery is the relationship between u'(R) * R * 1/u_total and 1/R for a given problem.
One model that suggests log returns is if we have to surpass some unknown resource threshold r* (the "difficulty") to solve the problem, and r* ranges over many orders of magnitude with an approximately log-uniform distribution [1]. Owen C-B has empirical evidence [http://www.fhi.ox.ac.uk/law-of-logarithmic-returns/] and some theoretical justification [http://www.fhi.ox.ac.uk/theory-of-log-returns/] for why this might happen in practice. When it does, my post then derives that tractability doesn't vary dramatically between (most) problems.
Note that sometimes we know r* or have a different prior, and then our problem stops being logarithmic, like in the second section of the post. This is exactly when tractability can vary dramatically between problems. In your AI alignment subproblem example, we know that alignment takes 20 years, which means a strong update away from the logarithmic prior.
[1]: log-uniform distributions over the reals don't exist, so I mean something like "doesn't vary by more than ~1.5x every doubling in the fat part of the distribution".
2Linch3mo
In the ITN framework, this will be modeled under "neglectedness" rather than "tractability"
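Thomas Kwa's log-uniform difficulty model above can be sanity-checked with a quick Monte Carlo (the range of r* below is an arbitrary assumption for illustration): under that prior, each doubling of resources adds a roughly constant amount to the probability the problem is solved, which is the log-returns behavior the comment describes.

```python
import random

random.seed(0)
N = 200_000
LOG_MIN, LOG_MAX = 0.0, 9.0   # assume difficulty r* is log-uniform over 10^0..10^9 "units"
thresholds = [10 ** random.uniform(LOG_MIN, LOG_MAX) for _ in range(N)]

def p_solved(resources: float) -> float:
    """Fraction of sampled difficulty thresholds r* already exceeded by `resources`."""
    return sum(r_star <= resources for r_star in thresholds) / N

prev = p_solved(1e3)
for doubling in range(1, 6):
    cur = p_solved(1e3 * 2 ** doubling)
    # each doubling adds ~log10(2) / (LOG_MAX - LOG_MIN) ≈ 0.033 to P(solved)
    print(f"doubling {doubling}: increase in P(solved) = {cur - prev:.4f}")
    prev = cur
```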
Most problems fall within a 100x tractability range (under certain assumptions)

When I formalize "tractability" it turns out to be directly related to neglectedness. If R is the number of resources currently invested in a problem, u(r) is the difference in world utility from investing 0 vs. r resources into the problem, and u_total is u(r) once the problem is solved, then tractability turns out to be:

Tractability = u'(R) * R * 1/u_total

So I'm not sure I really understand yet why tractability wouldn't change much with neglectedness. I have preliminary understanding, though, which I'm writing up in another comment.

3Jack R3mo
Ah, I see now that within a problem, tractability shouldn't change as the problem gets less neglected if you assume that u(r) is logarithmic, since then the derivative is like 1/R, making tractability like 1/u_total
Most problems fall within a 100x tractability range (under certain assumptions)

each additional doubling will solve a similar fraction of the problem, in expectation

Aren't you assuming the conclusion here?

2Thomas Kwa3mo
I don't think so. I'd say I'm assuming something (tractability doesn't change much with neglectedness) which implies the conclusion (between problems, tractability doesn't vary by more than ~100x). Tell me if there's something obvious I'm missing.
The future is good "in expectation"

As a note, it's only ever the case that something is good "in expectation" from a particular person's point of view or from a particular epistemic state. It's possible for someone to disagree with me because they know different facts about the world, and so for instance think that different futures are more or less likely. 

In other words, the expected value referred to by the term "expectation" is subtly an expected value conditioned on a particular set of beliefs.
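One minimal way to write this down (notation mine): the relevant expectation is explicitly conditioned on an agent's evidence, so two agents can disagree without either making a probability error.

```latex
% Sketch: "good in expectation" is belief-relative.
\[
  \mathbb{E}_A[V] = \mathbb{E}[\,V \mid E_A\,]
  \qquad\text{vs.}\qquad
  \mathbb{E}_B[V] = \mathbb{E}[\,V \mid E_B\,],
\]
% where V is the value of the future and E_A, E_B are the two agents' bodies of
% evidence; differing evidence can give a positive expectation for one agent and
% a negative expectation for the other.
```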

FTX/CEA - show us your numbers!

I disagree with your reasons for downvoting the post, since I generally judge posts on their content, but I do appreciate your transparency here and found it interesting to see that you disliked a post for these reasons. I’m tempted to upvote your comment, though that feels weird since I disagree with it

Free-spending EA might be a big problem for optics and epistemics

Because of Evan's comment, I think that the signaling consideration here is another example of the following pattern:

Someone suggests we stop (or limit) doing X because of what we might signal by doing X, even though we think X is correct. But this person is somewhat blind to the negative signaling effects of not living up to our own stated ideals (i.e. having integrity). It turns out that some more rationalist-type people report that they would be put off by this lack of honesty and integrity (speculation: perhaps because these types have an automatic nor... (read more)

How to become an AI safety researcher

Maybe someone should compile a bunch of exercises that train the muscle of formalizing intuitions

1anakaryan1mo
Strongly second this^
Free-spending EA might be a big problem for optics and epistemics

FWIW, Chris didn't say what you seem to be claiming he said

If you ever feel bad about EA social status, try this

Oh, interesting, thanks for this.

I think before assuming you made a mistake you could add the question of "if someone did that thing to me, could I easily forgive them?" If the answer is yes, then maybe don't sweat it because generally we think of ourselves way more than we think others do[1]

I really like this advice, and I just realized I use this trick sometimes.

The Vultures Are Circling

I might make it clearer that your bullet points are what you recommend people not do. I was skimming, and at first was close to taking away the opposite of what you intended.

Good practices for changing minds

I might add something to the tune of "have them lead the conversation by letting their questions and vague feelings do the steering"

Community builders should learn product development models

Thank you Peter! Definitely taking a look at the books and resources. Also, I now link your comment in the tldr of the post :)

I feel anxious that there is all this money around. Let's talk about it

I have seen little evidence that FTX Future Fund (FFF) or EA Infrastructure Fund (EAIF) have lowered their standards for mainline grants

FFF is new, so that shouldn't be a surprise.

Companies with the most EAs and those with the biggest potential for new Workplace Groups

I’d be curious to see how many people each of these companies employs + the % of employees who are EAs.

23 career choice heuristics

[First comment was written without reading the rest of your comment. This is in reply to the rest.]

Re: whether a company adds intrinsic value, I agree, it isn't necessarily counterfactually good, but also that's sort of the point of a heuristic -- most likely you can think of cases where all of these heuristics fail; by prescribing a heuristic, I don't mean to say the heuristic always holds, instead just that using the heuristic vs. not happens to, on average, lead to better outcomes.

Serial entrepreneur seems to also be a decent heuristic.

 

3mikbp5mo
"I don't mean to say the heuristic always holds" I understand that, I'm not going that way. "on average, lead to better outcomes" That's what in this case I don't see. Starting a company entails a large opportunity cost --you can basically not do anything else for a period of time-- coupled with a large chance of failing. My intuition is that, as a general advice, it may well be net negative, at least as personal advise. Now I see that it may well not be net negative in the aggregate if the successful instances more than compensate the failures, so it may be a good community heuristic. Was that your idea? When I read the post, I interpreted this list as heuristics addressed to individuals, not community heuristics.
23 career choice heuristics

I haven't thought about it deeply, but the main thing I was thinking here was that I think founders get the plurality of credit for the output of a company, partly because I just intuitively believe this, and partly because, apparently, not many people found things. This is an empirical claim, and it could be false e.g. in worlds where everyone tries to be a founder, and companies never grow, but my guess is that the EA community is not in that world. So this heuristic tracks (to some degree) high counterfactual impact/neglectedness.

23 career choice heuristics

This heuristic is meant to be a way of finding good opportunities to learn (which is a way to invest in yourself to improve your future impact) and it’s not meant to be perfect.

1Ines6mo
Gotcha
Some thoughts on vegetarianism and veganism

I'm still not very convinced of your original point, though -- when I simulate myself becoming non-vegan, I don't imagine this counterfactually causing me to lose my concern for animals (nor does it seem like it would harm my epistemics? Though not sure if I trust my inner sim here. It does seem like that,  if anything, going non-vegan would help my epistemics, since, in my case, being vegan wastes enough time such that it is harmful for future generations to be vegan, and by continuing to be vegan I am choosing to ignore that fact).

7Jonas Vollmer6mo
Yeah, as I tried to explain above (perhaps it was too implicit), I think it probably matters much more whether you went vegan at some point in your life than whether you're vegan right now. I don't feel confident in this; I wanted to mainly offer it as a hypothesis that could be tested further. I also mentioned the existence of crappy papers that support my perspective (you can probably find them in 5 minutes on Google Scholar). If people thought this was important, they could investigate this more. I'll tap out of the conversation now – don't feel like I have time to discuss further, sorry.
Some thoughts on vegetarianism and veganism

it would make me deeply sad and upset

That makes sense, yeah. And I could see this being costly enough such that it's best to continue avoiding meat.

I'm still not very convinced of your original point, though -- when I simulate myself becoming non-vegan, I don't imagine this counterfactually causing me to lose my concern for animals (nor does it seem like it would harm my epistemics? Though not sure if I trust my inner sim here. It does seem like that,  if anything, going non-vegan would help my epistemics, since, in my case, being vegan wastes enough time such that it is harmful for future generations to be vegan, and by continuing to be vegan I am choosing to ignore that fact).

Some thoughts on vegetarianism and veganism

No--and when I wrote it, I meant to direct it at anyone involved in the comments discussion. I probably should have made that clearer in the comment. Also, I probably should have read all of the comments before commenting (e.g. are you referring to some comment thread that it seemed like I was replying to?), but am time-limited.

Also, for more context, I wrote this comment because I felt concerned about bottom-line/motivated reasoning causing people to apply the sorts of arguments for action that they don't apply elsewhere to argue for veganism, and I felt ... (read more)
