All of Aaron Bergman's Comments + Replies

The importance of getting digital consciousness right

Not sure how I missed this, but great post; this seems super important and relatively neglected. In case others think it would be worth coining a term for this specifically, I proposed "p-risk" (after "p-zombies") in a tweet a few months back (might dig it up later).

More substantively, though, I think the greater potential concern is false negatives on consciousness, not false positives. The latter (attributing conscious experience to zombies) would be tragic, but not nearly as tragic as causing astronomical suffering in digital agents that we don't regard as moral patients because they don't act or behave like humans or other animals.

Half-baked ideas thread (EA / AI Safety)

In direct violation of the instruction to put ideas in distinct comments, here's a list of ideas most of which are so underbaked they're basically raw: 

Meta/infrastructure

  • Buy a hotel/condo/apartment building (maybe the Travelodge?) in Berkeley and turn it into an EA Hotel
  • Offer to buy EAs no-doctor-required blood tests like this one that test for common productivity-hampering issues (e.g., B12 deficiency, anemia, hypothyroidism)
  • Figure out how to put to good use some greater proportion of the approximately 1 billion recent college grads who want to work at an
... (read more)
Half-baked ideas thread (EA / AI Safety)

From a Twitter thread a few days ago (lightly edited/formatted), with plenty of criticism in the replies there

Probably batshit crazy but also maybe not-terrible megaproject idea: build a nuclear reactor solely/mainly to supply safety-friendly ML orgs with unlimited-ish free electricity to train models 

Looks like GPT-3 took something like $5M to train, and this recent 80k episode really drives home that energy cost is a big limiting factor for labs and a reason why only OpenAI/DeepMind are on the cutting edge

In 2017, the smallest active U.S. nuclear re

... (read more)
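For a rough sense of scale, here's a back-of-envelope sketch of the economics. Every input is my own assumption (a commonly cited ~1,300 MWh estimate for GPT-3's training energy, ~$70/MWh industrial electricity, a smallish 600 MW reactor at a 90% capacity factor), not a figure from the thread:

```python
# Back-of-envelope: how much does reactor-scale electricity matter for
# training runs? All figures are rough assumptions, not sourced claims.
GPT3_TRAINING_MWH = 1_300       # commonly cited estimate for GPT-3's training energy
PRICE_PER_MWH = 70              # USD, rough U.S. industrial electricity price
REACTOR_CAPACITY_MW = 600       # on the small side for a U.S. reactor
CAPACITY_FACTOR = 0.90          # nuclear plants run near-continuously

run_cost = GPT3_TRAINING_MWH * PRICE_PER_MWH
annual_output_mwh = REACTOR_CAPACITY_MW * CAPACITY_FACTOR * 24 * 365

print(f"Electricity for one GPT-3-scale run: ~${run_cost:,.0f}")
print(f"Annual reactor output: ~{annual_output_mwh:,.0f} MWh")
print(f"GPT-3-scale runs per reactor-year: ~{annual_output_mwh / GPT3_TRAINING_MWH:,.0f}")
```

On these assumptions, electricity is on the order of $100k per GPT-3-scale run, a small slice of the ~$5M figure above, so the case for the reactor would rest on future runs being orders of magnitude more energy-hungry.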
Half-baked ideas thread (EA / AI Safety)

This was a top-level LW post from a few days ago aptly titled "Half-baked alignment idea: training to generalize" (that didn't get a ton of attention):

Thanks to Peter Barnett and Justis Mills for feedback on a draft of this post. It was inspired by Eliezer's Lethalities post and Zvi's response.

Central idea: can we train AI to generalize out of distribution?

I'm thinking, for example, of an algorithm like the following:

  1. Train a GPT-like ML system to predict the next word given a string of text only using, say, grade school-level writ
... (read more)
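As a concrete, if toy, illustration of that setup, here's a self-contained sketch in which a bigram model stands in for the GPT-like system: train only on simple text, then check how much prediction degrades on harder, out-of-distribution text. The corpora, the difficulty split, and the model are all invented for illustration:

```python
# Toy illustration of the "train on simple text, test generalization on
# harder text" idea, with a bigram model standing in for a GPT-like system.
import math
from collections import Counter, defaultdict

simple_corpus = [
    "the cat sat on the mat",
    "the dog ran to the park",
    "the sun is big and hot",
]
harder_corpus = [  # out-of-distribution: vocabulary/style never seen in training
    "the feline reclined upon the rug",
    "canines sprinted across the meadow",
]

def train_bigram(sentences):
    counts = defaultdict(Counter)
    for s in sentences:
        words = ["<s>"] + s.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def perplexity(counts, sentences, vocab_size, alpha=1.0):
    # add-alpha smoothing so unseen bigrams get nonzero probability
    log_prob, n = 0.0, 0
    for s in sentences:
        words = ["<s>"] + s.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            total = sum(counts[a].values())
            p = (counts[a][b] + alpha) / (total + alpha * vocab_size)
            log_prob += math.log(p)
            n += 1
    return math.exp(-log_prob / n)

vocab = {w for s in simple_corpus + harder_corpus for w in s.split()} | {"<s>", "</s>"}
model = train_bigram(simple_corpus)
print("in-distribution perplexity: ", round(perplexity(model, simple_corpus, len(vocab)), 1))
print("out-of-distribution perplexity:", round(perplexity(model, harder_corpus, len(vocab)), 1))
```

In the actual proposal the interesting part is a training loop that rewards generalization itself; this only shows the kind of evaluation harness one might score it with.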
Stuff I buy and use: a listicle to boost your consumer surplus and productivity

You're welcome, and likewise!

 And just to clarify, there's a huge black box in my mind between "inflammation decreases" and "depressive symptoms decrease." I have no idea what the mechanisms are there!

Stuff I buy and use: a listicle to boost your consumer surplus and productivity

Thanks; updating slightly in the direction of "not effective," but not a ton, mostly because I have a pretty high prior that anything that causally reduces systemic inflammation is effective for depression. For EPA not to work, at least one of the following would have to be true, and each would be quite surprising to me (a toy sketch of this logic follows the list):

  1. Reducing abnormally high systemic inflammation in depressed people doesn't improve symptoms
  2. EPA supplementation among depressed people doesn't, on average, decrease levels of systemic inflammation
  3. EPA supplementation causes depressive symptoms in enough people to offset the benefits implied by (1) and (2) being false
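To make that structure explicit: "not effective" requires at least one of (1)-(3) to hold, so a union bound over the three events caps its probability. A toy sketch, with all three probabilities invented as placeholders:

```python
# Union-bound sketch: for EPA to be ineffective for depression, at least
# one of claims (1)-(3) above must be true. These probabilities are
# invented placeholders, not estimates from the comment.
p1 = 0.10  # P(reducing high systemic inflammation doesn't improve symptoms)
p2 = 0.10  # P(EPA doesn't reduce systemic inflammation on average)
p3 = 0.05  # P(EPA worsens symptoms enough to offset any benefit)

upper_bound = min(1.0, p1 + p2 + p3)  # P(at least one holds) <= p1 + p2 + p3
print(f"P(EPA ineffective) <= {upper_bound:.2f} on these assumptions")
```

So if each claim really is individually unlikely, "not effective" is bounded at roughly their sum, which is one way to justify only updating slightly.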
7 · Benjamin Stewart · 1mo
Understandable, and thanks for laying out your thinking. I think you're right that we differ due to our priors. I'm probably more sceptical of mechanistic grounding of claims in medicine in general. This is partly due to my experience that there's a poor correlation between how well medical interventions work in reality and how well they should work given a seemingly strong mechanistic case. It's probably also due to a general scepticism I have, which I haven't really justified. Thanks for communicating so clearly; it helped me understand where my impressions were coming from, and where we disagree!
Stuff I buy and use: a listicle to boost your consumer surplus and productivity

Thanks for pointing this out. I should (and plan to) look into this more by checking out the individual studies used in the Cochrane review. Worth noting that the review was of antioxidant supplementation (vitamins A, E, and C, plus selenium) in particular, rather than of a multivitamin per se.

I wouldn't be shocked if any physiological harm can be traced to vitamin A and E supplementation in excess of, say, 300% of the RDA. It is a little concerning that the one I recommended contains 170% and 130%, respectively.

Also, speaking for myself alone, I think I'd be will... (read more)

Stuff I buy and use: a listicle to boost your consumer surplus and productivity

Honestly, I don't have a great answer here. My overall impression/intuition is that it's probably bad to take arbitrarily high doses of these (unlike water-soluble vitamins), if only out of some sort of precautionary principle, and I recall seeing anecdotes from others who actively prefer taking only one or the other.

I don't think there's anything necessarily wrong with taking both (say, 1g per day of each), though.

1 · Emrik · 17d
So, based on my own understanding of the model here, wouldn't it make more sense to take ~1g/1g of each, considering diminishing returns for marginally more of each?

On the other hand, maybe EPA and DHA share an enzyme/receptor/pathway through the BBB (blood-brain barrier; or a shared bottleneck elsewhere) such that it's the ratio that determines how much of each actually gets through. In that case, we'd see inversely correlated absorption after a shared bottleneck is hit.

This study [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4772061/] says… I don't know to what extent supplemental DHA remains unesterified, but if it does, then its absorption is unlikely to compete much with EPA anyway. And this study [https://pubmed.ncbi.nlm.nih.gov/19442696/] says large amounts (for mice) of DHA and EPA readily pass through the BBB, hence no shared bottleneck there. But this study [https://sci-hub.se/https://doi.org/10.1093/ajcn/nqz097] very weakly hints at a possible shared bottleneck in the stomach, if I understand it vaguely right, but they gave 3g/d of EPA or DHA to two different groups, and 3g is higher than what we're considering, so the shared bottleneck might not apply at our doses.

This can all be further complicated if there's a shared bottleneck between omega-6 fatty acids and EPA/DHA, just like there is for ALA[1].

In conclusion, there might or might not be a shared bottleneck. Who knows. I'm still inclined to go for something that's either ~1:1 or EPA-heavy, given that EPA→DHA conversion seems high throughput [https://sci-hub.se/https://doi.org/10.1093/ajcn/nqz097].

1. ^ Bonus question: If you don't consume much less omega-6 than average, will you get enough DHA with just ALA?
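If it helps make Emrik's "ratio determines what gets through" idea concrete, here's a minimal toy model of the shared-bottleneck hypothesis. The capacity parameter is an invented placeholder, not an empirical estimate:

```python
# Toy model of a shared EPA/DHA absorption bottleneck: below some shared
# capacity, both get through in full; above it, uptake is scaled down so
# the dose ratio is preserved. The 2 g capacity is a made-up placeholder.
def absorbed(epa_g: float, dha_g: float, shared_capacity_g: float = 2.0):
    total = epa_g + dha_g
    if total <= shared_capacity_g:      # bottleneck not hit: no competition
        return epa_g, dha_g
    scale = shared_capacity_g / total   # bottleneck hit: ratio preserved
    return epa_g * scale, dha_g * scale

print(absorbed(1.0, 1.0))  # (1.0, 1.0): ~1g/1g stays under the bottleneck
print(absorbed(3.0, 3.0))  # (1.0, 1.0): 3g/3g doses get squeezed to capacity
print(absorbed(3.0, 1.0))  # (1.5, 0.5): EPA-heavy dosing preserves its ratio
```

On this toy model, ~1g/1g dosing never competes while 3g/d doses do, which matches Emrik's point that a bottleneck seen at 3g/d might not apply at the doses under discussion.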
Stuff I buy and use: a listicle to boost your consumer surplus and productivity

I think we may not disagree; I was focusing on their impact on mental health in particular, whereas most studies, including the Cochrane one, look only at physiological outcomes. From Examine:

And I combine this with this meta-analysis suggesting that EPA is more responsible for this antidepressant effect.

I haven't looked into claims around heart health or mortality, so am agnostic there for now

2 · Benjamin Stewart · 1mo
Fair, and I think the literature is really mixed, which you note elsewhere. For what it's worth, this recent Cochrane review [https://www.cochrane.org/CD004692/DEPRESSN_omega-3-fatty-acids-depression-adults] looked at omega-3 supplementation for depression and found a small-to-modest positive effect. However, the effect was too small to be clinically significant, and the certainty of the evidence was 'low to very low'. Sensitivity analysis according to whether the studies were EPA-only or predominantly EPA didn't change these conclusions, except that the effect size actually got smaller compared to the effect derived from all studies.

Yes - this fits within our Global Health and Wellbeing (GHW) portfolio. From the FAQ page:

Can I write about non-human animals?

Yes. Open Philanthropy is a major funder of work to improve farm animal welfare. If you want to write about a potential new cause area where the primary beneficiaries are non-human animals, please use the open prompt.

How much current animal suffering does longtermism let us ignore?

I'm not intending to, although it's possible I'm using the term "opportunity cost" incorrectly or in a different way than you. The opportunity cost of giving a dollar to animal welfare is indeed whatever that dollar could have bought in the longtermist space (or whatever else you think is the next best option). 

However, it seems to me that at least some parts of longtermist EA, some of the time, to some extent, disregard the animal-suffering opportunity cost almost entirely. Surely the same error is committed in the opposite direction by hardcore animal advocates, but the asymmetry comes from the fact that this latter group controls a much smaller share of the financial pie.

8 · Jack Malde · 2mo
I'm not sure how you come to this conclusion, or even what it would mean to "disregard the opportunity cost". Longtermist EAs generally know their money could go towards reducing animal suffering and do good. They know and generally acknowledge that there is an opportunity cost of giving to longtermist causes. They simply think their money could do the most good if given to longtermist causes.
How much current animal suffering does longtermism let us ignore?

Related to the funding point (note 4): 

It seems important to remember that even if high-status (for lack of a more neutrally valenced term) longtermist interventions like AI safety aren't currently "funding constrained," animal welfare at large most definitely is. As just one clear example, an ACE report from a few months ago estimated that Faunalytics has room for more than $1m in funding.

That means there remains a very high (in absolute terms) opportunity cost to longtermist spending, because each dollar spent is one not being donated to an anim... (read more)

3 · Jack Malde · 2mo
If you buy into the longtermist thesis why are you privileging the opportunity cost of giving to longtermist causes and not the opportunity cost of giving to animal welfare? Are you simply saying you think the marginal value of more money to animal welfare is greater than to longtermist causes?
How about we don't all get COVID in London?

You're right that I didn't make a full, airtight argument, and that severity of infection is indeed a crucial consideration. My extremely unqualified impression is that:

  • Long covid is real but no longer the main source of expected disvalue for the 3x-vax'd
  • A non-trivial number of 3x-vax'd people (20%?) who catch covid lose more than half their productivity and/or quality of life for 4-21 days, and this is where most of the expected disvalue comes from (see the rough sketch below)

This is what my brain has decided on after being exposed to a bunch of unstructured information, so the error bars are very large, and I should probably update toward your POV
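To put rough numbers on the second bullet (the 20% figure and the 4-21 day range come from the bullet above; the 50% loss and the midpoint duration are my own simplifications):

```python
# Expected-disvalue sketch for a triple-vaccinated covid case, using the
# rough numbers from the bullet above plus my own simplifications.
p_bad_case = 0.20              # share of cases with a substantial hit (from the bullet)
loss_fraction = 0.50           # "more than half" simplified to exactly half
duration_days = (4 + 21) / 2   # midpoint of the 4-21 day range

expected_days_lost = p_bad_case * loss_fraction * duration_days
print(f"~{expected_days_lost:.1f} productivity-days lost in expectation per infection")
```

That's on the order of one lost productivity-day in expectation per infection, which is at least the right unit for weighing against the costs of precautions.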

When to get off the train to crazy town?

Taking the Boltzmann brain example, isn't the issue that the premises that would lead to such a conclusion are incorrect, rather than the conclusion being "crazy" per se?

In many cases in philosophy, if we are honest with ourselves, we find that the reason we think the premises are incorrect is that we think the conclusion is crazy. We were perfectly happy to accept those premises until we learned what conclusions could be drawn from them.

aaronb50's Shortform

Effective Altruism Georgetown will be interviewing Rob Wiblin for our inaugural podcast episode this Friday! What should we ask him? 

The unthinkable urgency of suffering

You're welcome and thanks for the comment. I too want to preserve what is good, but I can't help but think that EAs tend to focus too much on preserving the good instead of reducing the bad, in large part because we tend to be relatively wealthy, privileged humans who rarely if ever undergo terrible suffering. 

The unthinkable urgency of suffering

Yes, I believe things would change a lot. Hopefully we can find some way to induce this kind of cognitive empathy without making people actually suffer for the firsthand experience.

The unthinkable urgency of suffering

Yes, this was a bit puzzling for me. Good to see it redeemed a bit. I could see the post being disliked for a few reasons:

  • An image of EA as focused on suffering might be bad for the movement
  • It's preaching to the choir (which it definitely is)

Anyway, thanks for the reassuring comment!

Should Effective Altruists Focus More on Movement Building?

Thanks for all those references. I don't know how I missed the 80,000 Hours page on the topic, but that's a pretty big strike against the idea that movement building is being ignored. Regarding your second point, I largely agree, but there are surely some MB interventions that don't require full-time generalists. For example, message testing and advertising can (I assume) be mostly outsourced with enough money.

Should Effective Altruists Focus More on Movement Building?

Thanks so much for the feedback - just edited with the improved formatting. Regarding your thoughts:

  • Point well taken that MB likely receives a higher proportion of hours. However, it still seems plausible that its share of hours is too low; there are a lot of people with full-time positions dedicated to direct work (though insofar as these people are earning a salary they would have to earn in some position anyway, not all of this time can be counted as being spent on an EA cause unless we discount their salary from the 'donation' side of things).
... (read more)
EA Forum Writing Workshop on Monday

Will the Zoom be recorded for those of us unable to join live? If so, would you be willing to post the link as a comment under this post?

4 · Aaron Gertler · 2y
That's a good idea! I'll check with the organizer and see if that can be arranged.
Reducing long-term risks from malevolent actors

Another type of intervention that could plausibly reduce the influence of malevolent actors is decreasing the intergenerational transfer of wealth and power. If competent malevolence both (i) increases one's capacity to gain wealth and/or power and (ii) is heritable, then we should expect malevolent families to amass increasing wealth and power. This could be one reason that the global shift away from hereditary monarchies is associated with global peace (my sense is that both of these trends are real, but I'm not positive).

For example, North Korea's Ki... (read more)
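The compounding dynamic in the first paragraph is easy to sanity-check in a toy simulation. Everything below (base rate, heritability, growth bonus, tax scheme) is an invented parameterization, not an empirical claim:

```python
# Toy simulation: if malevolence is partly heritable and boosts wealth
# accumulation, malevolent lineages end up with an outsized wealth share,
# and taxing inheritances (with equal redistribution) dampens the effect.
# All parameters are invented for illustration.
import random

def malevolent_wealth_share(generations=20, families=20_000, base_rate=0.10,
                            heritability=0.90, growth=1.10, mal_bonus=0.15,
                            inheritance_tax=0.0, seed=0):
    rng = random.Random(seed)
    fams = [(rng.random() < base_rate, 1.0) for _ in range(families)]
    for _ in range(generations):
        grown = [(m, w * (growth + (mal_bonus if m else 0.0))) for m, w in fams]
        # tax every estate and redistribute the proceeds equally
        rebate = sum(w for _, w in grown) * inheritance_tax / families
        fams = []
        for m, w in grown:
            child_m = m if rng.random() < heritability else rng.random() < base_rate
            fams.append((child_m, w * (1 - inheritance_tax) + rebate))
    return sum(w for m, w in fams if m) / sum(w for _, w in fams)

print(f"wealth share of malevolent families, no tax:  {malevolent_wealth_share():.2f}")
print(f"wealth share of malevolent families, 20% tax: "
      f"{malevolent_wealth_share(inheritance_tax=0.20):.2f}")
```

With these made-up numbers, the malevolent 10% of families end up with more than a 10% share of wealth absent redistribution, and the inheritance tax narrows that gap; the point is only the direction of the effect, not the magnitudes.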