All of Rocket's Comments + Replies

Thanks, Agustín! This is great.

Please submit more concrete ones! I added "poetic" and "super abstract" as an advantage and disadvantage for fire.

If the organization chooses to directly support the new researcher, then the net value depends on how much better their project is than the next-most-valuable project.

This is nit-picky, but if the new researcher proposes, say, the best project the org could support, it does not necessarily mean the org cannot support the second-best project (the "next-most-valuable project"), but it might mean that the sixth-best project becomes the seventh-best project, which the org then cannot support. 

In general, adding a new project to the pool of projects does n... (read more)
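A minimal toy illustration of the displacement point above, with made-up project values and a hypothetical budget of five funded slots (my sketch, not a calculation from the post):

```python
# Toy example: an org can fund its top 5 projects (values in arbitrary units).
pool = [10, 9, 8, 7, 6, 5, 4]  # existing projects, ranked by value

def funded_value(projects, slots=5):
    """Total value of the projects that make it above the funding bar."""
    return sum(sorted(projects, reverse=True)[:slots])

before = funded_value(pool)        # 10 + 9 + 8 + 7 + 6 = 40
after = funded_value(pool + [12])  # add a best-in-pool project worth 12

# Net value = the new project's value minus the displaced *marginal* project
# (worth 6), not minus the second-best project.
print(after - before)  # 6
```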

2
Sam Holton
3mo
I completely agree! The definition of marginal is somewhat ambiguous the way I've written it. What I mean to say is that the marginal project is the one that is close to the funding bar, like you pointed out.

Update: We have finalized our selection of mentors.

I'll be looking forward to hearing more about your work on whistleblowing! I've heard some promising takes about this direction. Strikes me as broadly good and currently neglected.

Rocket
7mo

I'm cringing so hard already fr

(Take my strong upvote, I think people downvoting you don't realize you are the author of the post haha)

Thanks for such a thorough response! I am also curious to hear Oscar's answer :)

1
OscarD
8mo
Ah sorry I replied to the parent comment - we only gave feedback to people who requested it. From memory people rejected at the interview stage were told they could request feedback if they wanted, while people rejected before the interview stage were not told this, but sometimes requested and were given short feedback anyway.

When applicants requested feedback, did they do that in the application or by reaching out after receiving a rejection?

2
Joseph Lemien
8mo
For the Animal Advocacy Careers scenario, I think the feedback was provided to everyone who was rejected, but I'm not sure about that. I'd estimate maybe a 30% chance that I am wrong. For my idea about including a checkbox that allows applicants to opt in to feedback, I haven't put much thought into the specifics of how giving feedback would work. These are rough and unpolished ideas, but I'll do some spitballing:

* Everyone who fills out an application form is prompted to select whether they would like feedback in case of rejection.
* People who are rejected and then reach out to request feedback are usually given feedback, unless we have some specific reason not to give it.
* The feedback itself should lean toward being useful for the applicant. Thus, rather than saying "you didn't demonstrate strong Excel skills in the interview," something more like "you didn't demonstrate strong Excel skills in the interview, and here are some links to resources that are good for learning Excel at an intermediate/advanced level."
* People who reach the later stages of the application process and are then rejected are actively asked whether they would like to get feedback from the organization.
* The farther someone gets in the process, the more likely they are to get feedback.
* The farther someone gets in the process, the more detailed and useful the feedback is.
* I haven't thought much about legal risk, which is a very big area that I want addressed before implementing this.

Is that lognormal distribution responsible for 

the cost-effectiveness is non-linearly related to speed-up time.

If yes, what's the intuition behind this distribution? If not, why is cost-effectiveness non-linear in speed-up time?

Something I found especially troubling when applying to many EA jobs is the sense that I am p-hacking my way in. Perhaps I am never the best candidate, but the hiring process is sufficiently noisy that I can expect to be hired somewhere if I apply to enough places. This feels like I am deceiving the organizations that I believe in and misallocating the community's resources. 

There might be some truth in this, but it's easy to take the idea too far. I like to remind myself:

  1. The process is so noisy! A lot of the time the best applicant doesn't get the jo
... (read more)

Thanks for the references! Looking forward to reading :)

Thanks for this! I would still be interested to see estimates of eg mice per acre in forests vs farms and I'm not sure yet whether this deforestation effect is reversible. I'll follow up if I come across anything like that.

I agree that the quality of life question is thornier.

5
Vasco Grilo
10mo
You are welcome! Brian Tomasik has some estimates. For rainforest, the density of wild terrestrial arthropods as a fraction of the global mean is 1.02 to 5 (95 % confidence interval). For Cerrado, which is a proxy for farmland, it is 0.70 to 3.00. I fitted lognormal distributions to these values, and got that the expected value for the density of wild terrestrial arthropods in rainforests is 72.4 % (= 1.55/0.899 - 1) higher than that for Cerrado. Overall, it looks like there are more wild terrestrial arthropods in forests, but it is not super clear, judging from the overlap between the 95 % confidence intervals. Yes, I do not know either. From the point of view of resilience against ASRSs, it would be good if abandoned farmland remained deforested such that it could quickly start being used if needed.
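For readers curious about the mechanics, here is a rough sketch of one way to fit a lognormal to a 95 % interval by matching its 2.5th and 97.5th percentiles and then comparing the implied means. The interval endpoints come from the comment above, but the fitting choice is my assumption, so the output need not reproduce the exact figures quoted (1.55, 0.899, 72.4 %):

```python
import numpy as np
from scipy import stats

def lognormal_from_ci(lower, upper, ci=0.95):
    """Fit a lognormal by matching its tails to the given central interval."""
    z = stats.norm.ppf(0.5 + ci / 2)           # ~1.96 for a 95 % interval
    mu = (np.log(lower) + np.log(upper)) / 2   # log-space midpoint (log of the median)
    sigma = (np.log(upper) - np.log(lower)) / (2 * z)
    return mu, sigma

def lognormal_mean(mu, sigma):
    return np.exp(mu + sigma**2 / 2)

# 95 % intervals for arthropod density relative to the global mean (from the comment)
rainforest = lognormal_mean(*lognormal_from_ci(1.02, 5.00))
cerrado = lognormal_mean(*lognormal_from_ci(0.70, 3.00))  # proxy for farmland

print(f"Rainforest mean is {rainforest / cerrado - 1:.0%} higher than Cerrado")
```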

Under CP and CKR, Zuckerberg would have given higher credence to AI risk purely on observing Yudkowsky’s higher credence, and/or Yudkowsky would have given higher credence to AI risk purely on observing Zuckerberg’s lower credence, until they agreed.

 

Should that say lower, instead?

3
trammell
11mo
It should, thanks! Fixed

Decreasing the production of animal feed, and therefore reducing crop area, which tends to: Increase the population of wild animals

 

Could you share the source for this? I've wondered about the empirics here. Farms do support wild animals (mice, birds, insects, etc), and there is precedent for farms being paved over when they shut down, which prevents the land from being rewilded. 

2
Vasco Grilo
11mo
Thanks for asking! 70 % to 80 % of deforestation is driven by conversion of primary forest to agriculture or tree plantations. Since more agriculture tends to result in more deforestation, I guess less agriculture leads to less deforestation (or more reforestation). In any case, I do not know whether terrestrial arthropods have good or bad lives, so regardless of whether their population would increase or decrease due to greater consumption of animals, I would not be able to tell whether the effect was good or bad.

Suppose someone is an ethical realist: the One True Morality is out there, somewhere, for us to discover. Is it likely that AGI will be able to reason its way to finding it? 

What are the best examples of AI behavior we have seen where a model does something "unreasonable" to further its goals? Hallucinating citations?

What are the arguments for why someone should work in AI safety over wild animal welfare? (Holding constant personal fit etc)

  • If someone thinks wild animals live positive lives, is it reasonable to think that AI doom would mean human extinction but maintain ecosystems? Or does AI doom threaten animals as well?
  • Does anyone have BOTECs on numbers of wild animals vs numbers of digital minds?

At least we can have some confidence in the total weight of meat consumed on average by a Zambian per year and the life expectancy at birth in Zambia.

 

We should also think about these on the margin. Ie the lives averted might have been shorter than average and consumed less meat than average.

2
Hank_B
1y
That's true! Maybe the potential human would have been born to poorer-than-average parents (because those are the people who need help accessing contraception), thus being poorer on average (and so consuming less meat). Or maybe the potential human would have been born to more-educated-than-average parents (since those are the people who'd be interested in using contraception?), thus being richer on average and eating more meat.

I imagine a proof (by contradiction) would work something like this:

Suppose you place > 1/x probability on your credences moving up by a factor of x. Then the expectation of your future beliefs is > prior * x * 1/x = prior, so in expectation your credence will increase. With our remaining probability mass, can we anticipate some evidence in the other direction, such that our beliefs still satisfy conservation of expected evidence? The lowest our credence can go is 0, but even if we place our remaining < 1 - 1/x probability on 0, we would still find future beliefs ... (read more)
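A brief formalization of the inequality being gestured at (my paraphrase of the argument, not the original commenter's wording):

```latex
% Write p for the prior and assume x > 1. Conservation of expected evidence
% requires E[posterior] = p. If Pr(posterior >= x p) > 1/x, then, since the
% posterior is nonnegative,
\mathbb{E}[\text{posterior}]
  \;\ge\; x p \cdot \Pr(\text{posterior} \ge x p)
  \;>\; x p \cdot \frac{1}{x}
  \;=\; p,
% so even assigning all of the remaining (< 1 - 1/x) probability mass to a
% posterior of 0 cannot pull the expectation back down to p: a contradiction.
```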

I would endorse all of this based on experience leading EA fellowships for college students! These are good principles not just for public media discussions, but also for talking to peers.

Yay! Where in the Bay are you located?

Answer by Rocket, Apr 20, 2023

THB that EA-minded college freshmen should study Computer Science over Biology

Thanks for the thorough response! I agree with a lot of what you wrote, especially the third section on Epistemic Learned Helplessness: "Bayesianism + EUM, but only when I feel like it" is not a justification in any meaningful sense.

On Priors

I agree that we can construct thought experiments (Pascal's Mugging, acausal trade) with arbitrarily high stakes to swamp commonsense priors (even without religious scenarios or infinite value, which are so contested I think it would be difficult to extract a sociological lesson from them).

On Higher Order Evidence

I sti... (read more)

Thanks for the excellent post!

I think you are right that this might be a norm/heuristic in the community, but in the spirit of a "justificatory story of our epistemic practices," I want to look a little more at 

4. When arguments lead us to conclusions that are both speculative and fanatical, treat this as a sign that something has gone wrong.  

First, I'm not sure that "speculative" is an independent reason that conclusions are discounted, in the sense of a filter that is applied ex-post. In your 15AI thought experiment, for example, I think ... (read more)

1
Violet Hour
1y
Thanks for the comment! (Fair warning, my response will be quite long.)

I understand you to be offering two potential stories to justify ‘speculativeness-discounting’.

1. First, EAs don’t (by and large) apply a speculativeness-discount ex post. Instead, there’s a more straightforward ‘Bayesian+EUM’ rationalization of the practice. For instance, the epistemic practice of EAs may be better explained with reference to more common-sense priors, potentially mediated by orthodox biases.
2. Or perhaps EAs do apply a speculativeness-discount ex post. This too can be justified on Bayesian grounds.
   1. We often face doubts about our ability to reason through all the relevant considerations, particularly in speculative domains. For this reason, we update on higher-order uncertainty, and implement heuristics which themselves are justified on Bayesian grounds.

In my response, I’ll assume that your attempted rationale for Principle 4 involves justifying the norm with respect to the following two views:

* Expected Utility Maximization (EUM) is the optimal decision-procedure.
* The relevant probabilities to be used as inputs into our EUM calculation are our subjective credences.

The ‘Common Sense Priors’ Story

I think your argument in (1) is very unlikely to provide a rationalization of EA practice on ‘Bayesian + EUM’ grounds.[1] Take Pascal’s Mugging. The stakes can be made high enough that the value involved can easily swamp your common-sense priors. Of course, people have stories for why they shouldn’t give the money to the mugger. But these stories are usually generated because handing over their wallet is judged to be ridiculous, rather than the judgment arising from an independent EU calculation. I think other fanatical cases will be similar. The stakes involved under (e.g.) various religious theories and our ability to acausally affect an infinite amount of value are simply going to be large enough to swamp our initial common-sense priors. Thus, I think
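To make the "stakes swamp the priors" point concrete, here is a toy expected-value calculation with invented numbers (illustrative only, not figures from the comment):

```latex
% Toy Pascal's Mugging numbers (invented): prior that the mugger can deliver,
% p = 10^{-20}; value claimed if they can, V = 10^{30}; cost of paying, c = 10.
\mathbb{E}[\text{pay}] \approx p V - c
  = 10^{-20} \cdot 10^{30} - 10
  = 10^{10} - 10 \gg 0,
% so a naive subjective-credence EUM calculation recommends paying, however
% small p is, provided the claimed V grows faster than p shrinks.
```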

Copy that. I removed "smash," but I'm leaving the language kind of ambiguous because my understanding of this strategy is that it's not restricted to conventional regulations, but instead will draw on every available tool, including informal channels. 

Thanks for following up and thanks for the references! Definitely agree these statements are evidence; I should have been more precise and said that they're weak evidence / not likely to move your credences in the existence/prevalence of human consciousness.

a very close connection between an entity’s capacity to model its own mental states, and consciousness itself.

 

The 80k episode with David Chalmers includes some discussion of meta-consciousness and the relationship between awareness and awareness of awareness (of awareness of awareness...).  Would recommend to anyone interested in hearing more! 

They make the interesting analogy that we might learn more about God by studying how people think about God than by investigating God itself. Similarly we might learn more about consciousness by investigating how people think about it...

2
rgb
1y
Agree, that's a great pointer! For those interested, here is the paper and here is the podcast episode. [Edited to add a nit-pick: the term 'meta-consciousness' is not used, it's the 'meta-problem of consciousness', which is the problem of explaining why people think and talk the way they do about consciousness]

We trust human self-reports about consciousness, which makes them an indispensable tool for understanding the basis of human consciousness (“I just saw a square flash on the screen”; “I felt that pinprick”).

 

I want to clarify that these are examples of self-reports about consciousness and not evidence of consciousness in humans. A p-zombie would be able to report these stimuli without subjective experience of them. 

They are "indispensable tools for understanding" insofar as we already have a high credence in human consciousness.

2
rgb
1y
Thanks for the comment. A couple of replies:

Self-report is evidence of consciousness in the Bayesian sense (and in common parlance): in a wide range of scenarios, if a human says they are conscious of something, you should have a higher credence than if they do not say they are. And in the scientific sense: it's commonly and appropriately taken as evidence in scientific practice; here is Chalmers's "How Can We Construct a Science of Consciousness?" on the practice of using self-reports to gather data about people's conscious experiences:

I suppose it's true that self-reports can't budge someone from the hypothesis that other actual people are p-zombies, but few people (if any) think that. From the SEP:

So yeah: my take is that no one, including anti-physicalists who discuss p-zombies like Chalmers, really thinks that we can't use self-report as evidence, and correctly so.
2
JWS
1y
This is true for literally all empirical evidence if you accept the possibility of a P-Zombie. The only possible falsification for consciousness can come from the internal subject itself, nothing else will do. But for everyone apart from you, it's self-reports, 3rd party observation, or nothing. Edit: What I mean here is that these self-reports are evidence - if they're not then there's no evidence for any minds apart from your own. And therefore we also ought to take AI self-reports as evidence. Not as serious as we take human self-reports at this stage, but evidence nonetheless.
4
Habryka
1y
"Emulated Minds" aka "Mind uploads".
4
Larks
1y
Brain Emulations - basically taking a person and running a simulation of them on a computer, where they could potentially be copied, run faster or slower, etc.

Thanks for this post! I'm wondering what social change efforts you find most promising?

3
MichaelDello
1y
Thanks for the great question. I'd like to see more attempts to get legislation passed to lock in small victories. The Sioux Falls slaughterhouse ban almost passing gives me optimism for this. Although it seemed to be more for NIMBY reasons than for animal rights reasons, in some ways that doesn't matter.  I'm also interested in efforts to maintain the lower levels of speciesism we see in children into their adult lives, and to understand what exactly drives that so we can incorporate it into outreach attempts targeted at adults. Our recent interview with Matti Wilks touches on this a little if you're interested.

Oh I see! Ya, crazy stuff. I liked the attention it paid to the role of foundation funding. I've seen this critique of foundations included in some intro fellowships, so I wonder if it would also especially resonate with leftists who are fed up with cancel culture in light of the Intercept piece.

I don't think anything here attempts a representation of "the situation in leftist orgs"? But yes lol same

3
Linch
2y
I don't know what you mean by "anything here," but I'm referring to the link that Larks shared.

https://forum.effectivealtruism.org/posts/MCuvxbPKCkwibpcPz/how-to-talk-to-lefties-in-your-intro-fellowship?commentId=YwQme9B2nHoH6fXeo

This is a response to D0TheMath, quinn, and Larks, who all raise some version of this epistemic concern:

(1) Showing how EA is compatible with leftist principles requires being disingenuous about EA ideas → (2) recruit people who join solely based on framing/language → (3) people join the community who don't really understand what EA is about → (4) confusion!

The reason I am not concerned about this line of argumentation is that I don't think it attends to the ways people decide whether to become more involved in EA.

(2) In my experience, people a... (read more)

Ya maybe if your fellows span a broad political spectrum, then you risk alienating some and you have to prioritize. But the way these conversations actually go in my experience is that one fellow raises an objection, eg "I don't trust charities to have the best interests of the people they serve at heart." And then it falls to the facilitator to respond to this objection, eg "yes, PlayPumps illustrates this exact problem, and EA is interested in improving these standards so charities are actually accountable to the people they serve," etc. 

My sense is... (read more)

3
david_reinstein
2y
That makes sense in that context. Still, I think that generally bringing people into EA under the pretence that it is substantially lefty in these ways, and accommodating to this style of discourse, could possibly have negative consequences. If these people join and use this language in explaining EA to others, it might end up turning others off.

I agree with quinn. I'm not sure what the mechanism is by which we end up with lowered epistemic standards. If an intro fellow is the kind of person who weighs reparative obligations very heavily in their moral calculus, then deworming donations may very well satisfy this obligation for them. This is not an argument that motivates me very much, but it may still be a true argument. And making true arguments doesn't seem bad for epistemics? Especially at the point where you might be appealing to people who are already consequentialists, just consequentialists with a developed account of justice that attends to reparative obligations.

Thanks for the reply! I'm satisfied with your answer and appreciate the thought you've put into this area :) I do have a couple follow-ups if you have a chance to share further:

I expected to hear about the value of the connections made at EAG, but I'm not sure how to think about the counterfactual here. Surely some people choose to meet up at EAG but in the absence of the conference would have connected virtually, for example? 

I also wonder about the cause areas of the EA-aligned orgs you cited. Ie, I could imagine longtermist orgs that are more talen... (read more)

2
Eli_Nathan
2y
No problem! I probably won't be able to respond to your later points, just because the answers would be complicated and I'd have to go into a lot of detail re how I think about EAG. But to answer some of your other questions:

1. I don't have concrete data on the counterfactual likelihood of connections, but I expect that it's not that high (very strong confidence that it's <50% of connections). There's no obvious way for many of these people to connect virtually, other than attending a virtual EA conference, and I think there are also strong benefits to meeting in person (as well as the possibility of group discussions and meetups). My rough guess would also be that people in general are less interested in virtual conferences than in-person ones, meaning that there are a bunch of counterfactual connections here.
2. The org that said they’d gotten a minimum of $1.25 million worth of value from connections they’ve made at EAG(x)’s was a global health and development org. I don't know exactly who said that they would trade $5 million in donations for the contacts they made at EAGxBoston, but my guess is that this was someone working in a longtermist/x-risk field (someone on my team told me about this feedback; I didn't receive it directly myself).

The second point implies more of a bright-line than a scalar dynamic, which seems consistent with scope insensitivity over lower donation amounts. That is, we might expect scope insensitivity to equalize the perception of $1m and $5m, but once you hit $10m, you attract negative media coverage. If we restrict ourselves to donation sizes that allow us to fly under the radar of national media outlets, then the scope insensitivity argument may still bite.

EAG SF Was Too Boujee.

I have no idea what the finances for the event looked like, but I'll assume the best case that CEA at least broke even.

The conference seemed extravagant to me. We don't need so much security or staff walking around to collect our empty cups. How much money was spent to secure an endless flow of wine? There were piles of sweaters left at the end; attendees could opt in with their sizes ahead of time to calibrate the order.

Particularly in light of recent concerns about greater funding, it would behoove us to consider the harms of an opu... (read more)

Hi — I’m Eli from the EA Global team. Thanks for your thoughts on this — appreciate your concerns here. I’ll try to chip in with some context that may be helpful. To address your main underlying point, my take is that EA Globals have incredibly high returns on investment — EA orgs and members of the community report incredibly large amounts of value from our events. For example:

  • An attendee from an EA-aligned org said they would probably trade $5 million in donations for the contacts they made at EAGxBoston.
  • Another EA-aligned org reporting that they’ve gott
... (read more)
6
Annabella Wheatley
2y
I really agree. I think there are large benefits to things being “comfy”, eg having good food and snacks, nice areas to sit and socialise, etc. However, it makes me feel super icky attending fancy EAGs. (I also don’t know how standard this is for conferences.) Unlimited beverages has got to be unnecessary (and expensive).

Hi Ann! Congratulations on this excellent piece :)

I want to bring up a portion I disagreed with and then address another section that really struck me. The former is:

Of course, co-benefits only affect the importance of an issue and don’t affect tractability or neglectedness. Therefore, they may not affect marginal cost-effectiveness.

I think I disagree with this for two reasons:

  1. Improving the magnitude of impact while holding tractability and neglectedness constant would increase impact on the margin, ie, if we revise our impact estimates upwards at every po
... (read more)
2
Ann Garth
3y
Hi Rocket, thanks for sharing these thoughts (and I'm sorry it's taken me so long to get back to you)! To respond to your specific points: I certainly agree with this -- was only trying to communicate that increases in importance might not be enough to make climate change more cost-effective on the margin, especially if tractability and neglectedness are low. Certainly that should be evaluated on a case-by-case basis. This is true (and very well-phrased!). I think there's some additional ~ nuance ~ which is that the harms of climate change are scalar, whereas the risks of nuclear war or catastrophic AI seem to be more binary. I'll have to think more about how to talk about that distinction, but it was definitely part of what I was thinking about when I wrote this section of the post.

Why is "people decide to lock in vast nonhuman suffering" an example of failed continuation in the last diagram?

4
MichaelA
4y
Failed continuation is where humanity doesn't go extinct, but (in Ord's phrase) "the destruction of humanity’s longterm potential" still occurs in some other way (and thus there's still an existential catastrophe). And "destruction of humanity's longterm potential" in turn essentially means "preventing the possibility of humanity ever bringing into existence something close to the best possible future". (Thus, existential risks are not just about humanity.)

It's conceivable that vast nonhuman suffering could be a feature of even the best possible future, partly because both "vast" and "suffering" are vague terms. But I mean something like astronomical amounts of suffering among moral patients. (I hadn't really noticed that the phrase I used in the diagram didn't actually make that clear.) And it seems to me quite likely that a future containing that is not close to the best possible future.

Thus, it seems to me likely that locking in such a feature of the future is tantamount to preventing us ever achieving something close to the best future possible.

Does that address your question? (Which is a fair question, in part because it turns out my language wasn't especially precise.)

ETA: I'm also imagining that this scenario does not involve (premature) human extinction, which is another thing I hadn't made explicit.