All of Milan_Griffes's Comments + Replies

Who's hiring? (May-September 2022)

Decentralized / less gatekept, postings can be voted on, more ability to customize contact info / next steps. 

(Nothing against the 80k board, which is also a valuable service.)

Who's hiring? (May-September 2022)

Just want to say that I think threads like this are a beautiful addition to the Forum and a real step up from the 80k jobs board.

I'm really surprised to read this; I think the 80k jobs board is awesome!

In what ways do you think this format is an improvement?

Milan Griffes on EA blindspots

Some back-and-forth on this between Eliezer & me in this thread.

Milan Griffes on EA blindspots

Compare the number of steps required for an agent to initiate the launch of existing missiles to the number of steps required for an agent to build & use a missile-launching infrastructure de novo.

Jack Malde (5mo):
Not sure why the number of steps is important. If we're talking about a very powerful unaligned AI, it's going to wreak havoc in any case. From a longtermist point of view it doesn't matter whether it takes a day, a month, or a year to do so.
Milan Griffes on EA blindspots

Here's Ben Hoffman on burnout & building community institutions: Humans need places

Milan Griffes on EA blindspots

This is the Ben Hoffman essay I had in mind: Against responsibility

(I'm more confused about his EA is self-recommending.)

Milan Griffes on EA blindspots

This orientation resonates with me too fwiw. 

Milan Griffes on EA blindspots

Existing nuclear weapon infrastructure, especially ICBMs, could be manipulated by a powerful AI to further its goals (which may well be orthogonal to our goals).

Smart things are not dangerous because they have access to human-built legacy nukes.  Smart things are dangerous because they are smarter than you. 

I expect that the most efficient way to kill everyone is via the biotech->nanotech->tiny diamondoid bacteria hopping the jetstream and replicating using CHON and sunlight->everybody falling over dead 3 days after it gets smart.  I don't expect it would use nukes if they were there.

Smart AIs are not dangerous because somebody built guns for them, smart AIs are not dangerous because cars a...

Steven Byrnes (5mo):
I agree that the current nuclear weapon situation makes AI catastrophe more likely on the margin, and said as much here [https://www.lesswrong.com/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why#1_6_Why_are_AGI_accidents_such_a_big_deal_] (the paragraph "You might reply: The thing that went wrong in this scenario is not the out-of-control AGI, it's the fact that humanity is too vulnerable! And my response is: Why can't it be both? ...").

That said, I do think the nuclear situation is a rather small effect (on AI risk specifically), in that there are many different paths for an intelligent motivated agent to cause chaos and destruction. Even if triggering nuclear war is the lowest-hanging fruit for a hypothetical future AGI aspiring to destroy humanity (it might or might not be, I dunno), I think other fruits are hanging only slightly higher, like causing extended blackouts, arranging for the release of bio-engineered plagues, triggering non-nuclear great power war (if someday nuclear weapons are eliminated), mass spearphishing / hacking, mass targeted disinformation, etc., even leaving aside more exotic things like nanobots. Solving all these problems would be that much harder (still worth trying!), and anyway we need to solve AI alignment one way or the other, IMO. :)
Jack Malde (5mo):
Sure, but I'm not sure this particular argument means working on nuclear safety is as important as working on AI. We could get rid of all nuclear weapons and a powerful AI could just remake them, or make far worse weapons that we can't even conceive of now. Unless we destroy absolutely everything, I'm sure a powerful unaligned AI will be able to wreak havoc, and the best way to prevent that seems to me to be ensuring AI is aligned in the first place!
EdoArad (5mo):
Gotcha, thanks!
The Future Fund’s Project Ideas Competition

Researching valence for AI alignment

Artificial Intelligence, Values and Reflective Processes

In psychology, valence refers to the attractiveness, neutrality, or aversiveness of subjective experience. Improving our understanding of valence and its principal components could have large implications for how we approach AI alignment. For example, determining the extent to which valence is an intrinsic property of reality could provide computer-legible targets to align AI towards. This could be investigated experimentally: the relationship between experiences and their neural correlates & subjective reports could be mapped out across a large sample of subjects and cultural contexts.
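
A minimal sketch of what the proposed experimental mapping could look like, assuming each trial pairs a summary neural feature with a subjective valence rating. The variable names and synthetic data below are purely illustrative assumptions, not an established protocol:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical dataset: one row per trial, pooled across subjects and cultures.
n_trials = 500
neural_feature = rng.normal(size=n_trials)  # e.g. a summary statistic of recorded activity
# Synthetic self-reports, loosely coupled to the neural feature plus noise.
reported_valence = 0.6 * neural_feature + rng.normal(scale=0.8, size=n_trials)

# How well does the candidate neural correlate track the subjective report?
r, p = stats.pearsonr(neural_feature, reported_valence)
print(f"correlation between neural feature and reported valence: r={r:.2f} (p={p:.1e})")
```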

Greg_Colbourn (5mo):
I've been wondering whether AGI independently discovering valence realism could be a "get-out clause" for alignment. Maybe this could even happen in a convergent manner with natural abstraction [https://www.alignmentforum.org/posts/Nwgdq6kHke5LY692J/alignment-by-default#Unsupervised__Natural_Abstractions]?
The Future Fund’s Project Ideas Competition

Nuclear arms reduction to lower AI risk

Artificial Intelligence and Great Power Relations

In addition to being an existential risk in their own right, the continued existence of large numbers of launch-ready nuclear weapons also bears on risks from transformative AI. Existing launch-ready nuclear weapon systems could be manipulated or leveraged by a powerful AI to further its goals if it decided to behave adversarially towards humans. We think the dynamics of this scenario, and the policy responses to it, are under-researched and would benefit from further investigation.

aogara (5mo):
Strongly agree with this. There are only a handful of weapons that threaten catastrophe to Earth's population of 8 billion. When we think about how AI could cause an existential catastrophe, our first impulse shouldn't be to think of "new weapons we can't even imagine yet". We should secure ourselves against the known credible existential threats first. Wrote up some thoughts about doing this as a career path here: https://forum.effectivealtruism.org/posts/7ZZpWPq5iqkLMmt25/aidan-o-gara-s-shortform?commentId=rnM3FAHtBpymBsdT7
Greg_Colbourn (5mo):
On the flip side, you could make part of your 'pivotal act' [https://intelligence.org/late-2021-miri-conversations/#:~:text=and%20Richard%20discuss%20%E2%80%9C-,pivotal%20acts,-%E2%80%9D%20%E2%80%94%20in%20particular%2C%20actions] be the neutralisation of all nuclear weapons [https://www.newscientist.com/article/dn3734-neutrino-beam-could-neutralise-nuclear-bombs/#:~:text=A%20super%2Dpowered%20neutrino%20generator,neutrinos%20straight%20through%20the%20Earth.].
The Future Fund’s Project Ideas Competition

Researching the relationship between subjective well-being and political stability

Great Power Relations, Values and Reflective Processes

Early research has found a strong association between a society's political stability and the reported subjective well-being of its population. Political instability appears to be a major existential risk factor. Better understanding this relationship, perhaps by investigating natural experiments and running controlled experiments, could inform our views of appropriate policy-making and intervention points.
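
As a sketch, the kind of cross-country analysis this could start from, assuming country-level well-being scores and a stability index are available (the data below is invented purely for illustration):

```python
from scipy import stats

# Hypothetical country-level data, invented purely for illustration.
well_being = [7.8, 7.3, 6.9, 6.1, 5.5, 4.9, 4.2]  # mean reported subjective well-being (0-10)
stability = [0.9, 0.8, 0.7, 0.5, 0.4, 0.2, 0.1]   # political stability index (0-1)

# Estimate the cross-country association; real work would need controls,
# natural experiments, and far more than seven data points.
result = stats.linregress(stability, well_being)
print(f"slope={result.slope:.2f}, r={result.rvalue:.2f}, p={result.pvalue:.3f}")
```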

The Future Fund’s Project Ideas Competition

High-quality human performance is much more engaging than autogenerated audio, fwiw.

alexrjl (5mo):
Hence the original pitch!
AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy

Thanks for doing this!

For all three – how would you like to see EA participate in the psychedelic renaissance? What do you think a good marriage of the two communities would look like?

Hi Milan! It would be great to see increased discussion of the most attractive target projects related to psychedelics, as well as perhaps donation campaigns to reach critical mass for specific purposes. It's really remarkable how much can be done for how little at the moment. If there is interest, I might be able to help by drafting a blog post with some of the candidates I consider very high-leverage and worthwhile.

I don't think we are yet collectively wise enough to engage in memetic and/or tech projects that undermine evolutionary equilibria, fwiw.

Consciousness research as a cause? [asking for advice]

QRI = the Qualia Research Institute

https://qualiaresearchinstitute.org

New Top EA Causes for 2021?

"All 3,400 hours of Rationality: From AI to Zombies

Speedcore EDM R:A2Z will be the background soundtrack at the Schelling Point Temple of EA Burning Man. 

24/7 baby.

New Top EA Causes for 2021?

Big +1 

An 80k podcast dubstep house party actually sounds like a good time.... BURNING MAN OF THE NERDS!!!!

Robbie Wib-wib-wib-wibibiblin in da HAUS!!!!!!!!

Saying "80k tracks the # of calls and # of career plan changes, but doesn't track the long-run impacts of their advisees" is different from saying "80k focus[es] mainly on # of calls"

Thank you for this feedback. 

From my perspective, I'm writing both for my own sake and for others.

Jack Malde (1y):
Even if your intentions are good, surely it should be clear at this point that your approach is proving completely ineffective?

Yes, I want people to think about this for themselves. (I don't think that's esoteric.)

I don't have any advice to offer, but as a datapoint for you: I applaud your goal and am even sympathetic to many of your points, but even I found this post actively annoying (unlike your previous ones in this series). It feels like you're writing a series of posts for your own benefit without actually engaging with your audience or interlocutors.  I think this is fine for a personal blog, but does not fit on this forum. 

What about my style stands out as esoteric?

(From my perspective, I'm trying to be as clear & straightforward as possible in the main body of each post. I am also using poetic quotes at the top of some of the posts.)

In this one, it's that there is no main body, just a gesture off-screen. Only a small minority of readers will be familiar enough with the funding apparatus to complete your "exercise to the reader..." Maybe you're writing for that small minority, but it's fair for the rest to get annoyed.

In past ones (from memory), it's again this sense of pushing work onto the reader: a sense of "go work it out".

"Where do you get the impression that they focus mainly on # of calls?"

I don't have this impression. From the original post:

80,000 Hours tracking the number of advising calls they make and the number of career plan changes they catalyze, rather than the long-run impacts their advisees are having in the world.


It would be interesting to see a cohort analysis of 80k advisees by year, looking at what each advisee from each cohort has accomplished out in the world in the following years.

Maybe that already exists? I haven't seen it, if so.
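
As a sketch, the analysis could be as simple as grouping advisee records by cohort year. The column names below are hypothetical illustrations, not 80k's actual schema:

```python
import pandas as pd

# Hypothetical advisee-level records; these column names are illustrative,
# not 80k's actual schema.
advisees = pd.DataFrame({
    "cohort_year": [2018, 2018, 2019, 2019, 2020],
    "plan_changed": [True, False, True, True, False],
    "impact_rating": [3, 1, 4, 2, 0],  # some later assessment of real-world impact
})

# Cohort view: how does each year's intake look some years on?
cohort_summary = advisees.groupby("cohort_year").agg(
    n_advisees=("plan_changed", "size"),
    plan_change_rate=("plan_changed", "mean"),
    mean_impact=("impact_rating", "mean"),
)
print(cohort_summary)
```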

atlas (1y):
In the sentence you quoted, you literally state that 80k tracks the # of calls and # of career plan changes, but doesn't track the long-run impacts of their advisees.

"Opening with a strong claim,  making your readers scroll through a lot of introductory text, and ending abruptly with "but I don't feel like justifying my point in any way, so come up with your own arguments" is not a very good look on this forum. "

I wasn't intending the text included in the post to be introductory...


"[I have read the entirety of The Inner Ring, but not the vast series of apparent prerequisite posts to this one. I would be very surprised if reading them caused me to disagree with the points in this comment, though.]"

If you don't want to read the existing work that undergirds this post, why should I expect further writing to change your mind about the topic?

I have read all except one post you linked to. I don't understand how your post relates to the two posts about children and would appreciate a comment. I agree with your argument that "EA jobs provide scarce non-monetary goods" and that it is hard to get hired by EA organisations. However, it is unclear to me that any of these posts provide a damaging critique of EA. I would be surprised if anyone managed to create a movement without any of these dynamics. However, I would also be excited to see work tackling these putative problems, such as the non-monetary value of different jobs.

Where are all the comments, indeed...


"I advise you to withdraw this post, cut out half the narrative crap, add some evidence and a model, make a recommendation, then repost it."

I think this is basically fair, though from my perspective the narrative crap is doing important work.

I have limited capacity these days so I'm writing this argument as a serial, posting it as I can find the time. 

In the meanwhile, this sequence from a few years ago (a) makes a similar argument following the form you suggest.

"... on the margin, it sounds like we have more cost-effective forms of outreach."

Could you say more about what you have in mind? 

(Asking because I personally don't see any compelling alternative to a substantial fraction of EA folks raising children, especially when I consider a > 20-year time horizon.)

Jack Malde (1y):
By the way, Toby Ord weighs in on this at 24:33 in his Global Reconnect interview [https://www.youtube.com/watch?v=9NfUDBEWbjQ&t=1085s&ab_channel=CentreforEffectiveAltruism] . He basically agrees with Michael that having children and raising them as EAs is unlikely to be as cost-effective as spreading EA to existing adults. He also seems to feel somewhat uncomfortable about the idea of raising children as EAs.
MichaelStJules (1y):
I'm personally not sure, but this is what I hear from others in this thread and elsewhere. I'd be thinking the EA Community fund, university groups, running EA fellowships, GWWC, TLYCS, EA orgs to take volunteers/interns. Maybe we are close to saturation with the people who would be sympathetic to EA, and we just need to make more people at this point, but I don't think this is the case, since there's still room for more local groups.

I've been the primary organizer for the EA club at my university for a couple years, and I think a few of the members would not have been into EA at all or nearly as much without me (no one else would have run it if I didn't when I did, after the previous presidents left the city), but maybe they would have found their way into EA eventually anyway, and there's of course a risk of value drift. This is less work than raising a child (maybe 5-10 hours/week EDIT: or is that similar to raising a child or more? Once they're in school, it might take less work?), has no financial cost, and I made close friends doing it.

I think starting a local group where there isn't one (or running an otherwise fairly inactive one) can get you at least one new fairly dedicated EA per year, but I'm not sure how many dedicated EA person-years that actually buys you. How likely is the child of an EA to be an EA in the long run? And does it lead to value drift for the parents?
What grants has Carl Shulman's discretionary fund made?

Thanks for this update – these seem like worthwhile things to invest in!

Do you have a sense of how you will structure reporting on future grantmaking from this fund?

CarlShulman (1y):
Not particularly.

There's actually a lot of underutilized real estate in the Bay Area, especially in East Bay, Marin, South Bay, and the Peninsula. 

Much of it is locked up in big old houses that haven't turned over in a long time though.

Why do EAs have children?

"Reproduction is a credible commitment to the future" is a potent meme.

Ramiro (1y):
It reminds me of this weird sonnet (On fate & future) I drafted for some friends working with Generation Pledge [https://www.generationpledge.org/] (I'll have to share it; sorry for any lousy rhyme or offense I may have caused to this beautiful language, but I'm not a native speaker):

Unhealing stains, sons to be slain
As it's written: jihad and submission
We let Samsara ourselves drain
While Lord Shiva stated a mission.

Mystics, and yet, we don’t believe
For no told miracles anticipate
What brought us luck, skill and fate
The true great wonder we might live:

In a century – in History, just a moment –
The length of happiness has grown six-fold
And more than doubled the expected life

Now, let it be your faith and my omen
As their fears and promises grow old
No more be bound to ancestors’ strife.
AMA: Holden Karnofsky @ EA Global: Reconnect

Does this post still basically reflect your feelings about public discourse?

Milan_Griffes (1y):
I expanded a bit on this question here [https://forum.effectivealtruism.org/posts/s2DrG9y6JN9WeqEYt/ben-hoffman-and-holden-karnofsky] .

Those weren't corrections... 

The statements I make in the original post are largely about what an org is focusing on, not what it is formally tracking.

atlas (1y):
I also downvoted for the same reason. I've looked at 80k's reports pretty closely (bc I was basing our local EA group's metrics on them) and it seemed pretty obvious to me that the counterfactual impact their advisees have is in fact the main thing they try to track & that they use for decisionmaking. I haven't looked into the other orgs as deeply, but your statement about 80k makes me disinclined to believe the rest of the list. Where do you get the impression that they focus mainly on # of calls?

I think there's a lot to admire about the Shakers... I'm just pointing out that as a social movement they are dying out, probably in part due to their views about sex & child-rearing.

Catholicism, Islam, and Mormonism seem to be much more durable in the long run (at least so far).

Thanks, should be fixed now

I bet cost often gets used as an excuse here.

Hmmm... something about making the two commensurable feels weird to me... (not sure what it is about it yet).

There's an important difference in kind here – raising children is a qualitatively different form of "consumption" than other kinds of consumption.

Stefan_Schubert (1y):
Of course - I'm not suggesting otherwise. My point is just to say that you can cut other forms of spending as well, just as you can cut spending on raising a child.

Could you give some examples of the basic facts I stated that appear incorrect?

Kit (1y):
People from 80k, Founders Pledge and GWWC have already replied with corrections.
Opportunity for EA orgs: $5k/year in ETH (tech setup required)

Wow this is cool: https://blog.fuguefoundation.org/effective-altruism-quest/

FugueFoundation (1y):
Thanks, I cross-posted it on this forum as well several months ago, though the subject was not particularly well received. When ETH gas fees drop a bit in the coming months (due to various network upgrades), we may set up some bounties to essentially pay people (in stablecoins) who successfully complete the quest (i.e., proving they read/understood the intro to EA article): https://forum.effectivealtruism.org/posts/sDtfchRXqKYACJqdm/effective-altruism-quest
Why do EAs have children?

I'm planning to have children because I feel excited about the aesthetic of parenthood, it seems wonderful to be able to intimately participate in bringing more life into the world, and many people I respect endorse becoming a parent (1, 2, 3, but the list goes on and on...)

I'm basically wondering whether most people who affiliate with EA share your preference set about this.

That is a very worthwhile question, but invoking Shakerism is likely to obfuscate the process of answering it.
