All of Daniel_Eth's Comments + Replies

I'm pretty confident that people who prioritize their health or enjoyment of food over animal welfare can moral handshake with animal suffering vegans by tabooing poultry at the expense of beef.


Generally disagree, because the meat eaters don't get anything out of this agreement. "We'll both agree to eat beef but not poultry" doesn't benefit the meat eater. The one major possible exception imho is people in relationships – I could imagine a couple where one person is vegan and the other is a meat eater, where they decide both doing this is a Pareto improvement.

2
quinn
4mo
While I think the fuzzies from cooperating with your vegan friends should be considered rewarding, I know what you mean – it's not a satisfying moral handshake if it relies on a foundation of friendship!

I think it is worth at least a few hours of every person's time to help people during a war and humanitarian crisis. 

 

I don't think this is true, and I don't see an a priori reason to expect cause prioritization research to result in that conclusion. I also find it a little weird how often people make this sort of generalized argument for focusing on this particular conflict, when such a generalized statement should apply equally well to many more conflicts that are much more neglected and lower salience but where people rarely make this sort of argument (it feels like some sort of selective invocation of a generalizable principle).

1
bluebird27
4mo
That makes sense, I now think I made this statement too general. In the case of the Israel-Hamas war, how would you decide if it's worth a few hours to help? 

My personal view is that being an EA implies spending some significant portion of your efforts being (or aspiring to be) particularly effective in your altruism, but it doesn't by any means demand you spend all your efforts doing so. I'd seriously worry about the movement if there was some expectation that EAs devote themselves completely to EA projects and neglect things like self-care and personal connections (even if there was an exception for self-care & connections insofar as they help one be more effective in their altruism).

It sounds like you de... (read more)

IIRC it took me about a minute or two. But I already had high context and knew how I wanted to vote, so after getting oriented I didn't have to spend time learning more or thinking through tradeoffs.

I am curious to know how many Americans were consulted about the decision to spend about $10,000 per tax-payer on upgrading nuclear weapons... surely this is a decision that American voters should have been deeply involved in, given that it impacts both their taxes and their chance of being obliterated in a nuclear apocalypse. 

I think there's a debate to be had about when it's best for political decisions to be decided by what the public directly wants, vs when it's better for the public to elect representatives that make decisions based on a combination... (read more)

7
Will Aldred
4mo
On top of this, I imagine most involved view not fighting a nuclear war as preferable to fighting and winning. (In other words, a nuclear war is not only negative on net, but negative for everyone.)[1]

I previously did some work (with/under Michael Aird) on the effects of nuclear weapons advances on nuclear risk. There’s no expert consensus I’m aware of: for many advances there are a bunch of considerations going in both directions. One example of an advance that I’m somewhat confident would decrease risk is more accurate nuclear weapons. The main reason: nukes being more accurate means that fewer nukes, and/or nukes with lower explosive yields, are needed to hit the intended target. The effect of this is fewer direct casualties; also—and more importantly for x-risk—less soot generated, hence less of a nuclear winter effect.

(Tagging the OP, @Denis, in case my comment or the post I link to is of interest.)

1. ^ This raises the obvious question, “Why fight at all?” As best I’m aware, the answer to that lies with things like false information (e.g., false alarm triggering a second strike that’s actually a first strike), and also with some artefacts of game theory (e.g., brinkmanship-gone-wrong; bargaining breakdown due to misevaluating how the opponent sees things; etc.) as well as the reality that actors don’t always behave rationally.
3
Denis
5mo
Thanks Daniel,

This is all good perspective. Mostly I don't disagree with what you wrote, just a few comments:

In terms of decisions, I'm not necessarily saying that the public should decide, but that the public at least should be aware and involved.

Your comment about alternative uses for the money is correct - my original point was a bit simplistic!

My original post didn't talk enough about deterrence, but in a response to another comment I mentioned the key point I missed: the US will still have 900 submarine-based missiles as their deterrent. Much as I personally would love to be nuclear-weapon-free, I am not suggesting that the US could safely get rid of these, and I believe they provide an adequate deterrent.

Your insight that some of the upgrades may increase safety is a good one - I hadn't considered that.

Maybe I'm just idealistic, but I believe we need to see more efforts at more reduction of nuclear arsenals, and that this might be a time to try. I totally agree it won't be easy!

Overall, thanks for this. It is always appreciated when someone takes the time and effort to critique a post in some depth. Cheers!

I don't have any strong views on whether this user should have been given a temporary ban vs a warning, but (unless the ban was for a comment which is now deleted or a private message, which are each possible, and feel free to correct me if so), from reading their public comments, I think it's inaccurate (or at least misleading) to describe them as "promoting violence". Specifically, they do not seem to have been advocating that anyone actually use violence, which is what I think the most natural interpretation of "promoting violence" would be. Instead, ... (read more)

Worth noting that in humans (and unlike in most other primates) status isn't determined solely by dominance (e.g., control via coercion), but instead is also significantly influenced via prestige (e.g., voluntary deference due to admiration). While both dominance and prestige play a large role in determining status among humans, if anything prestige probably plays a larger role.

 

(Note – I'm not an expert in anthropology, and anyone who is can chime in, but this is my understanding given my amount of knowledge in the area.)

1
trevor1
5mo
Agreed, I only used the phrase "dominance games" because it seemed helpful for understandability and the word count. But it was inaccurate enough to be worth the effort to find a better combination of words.

Note to Israelis who may be reading this: I did not upvote/downvote this post and I do not intend to vote on such posts going forward. I think you should do the same.

 

You're free to vote (or refrain from voting) how you want, but the suggestion to others feels illiberal to me in a way that I think is problematic. Would you also suggest that any Palestinians reading this post refrain from voting on it? (Or, going a step further, would you suggest Kenyan EAs refrain from voting on posts about GiveDirectly?) Personally, I think both Israeli EAs and Pales... (read more)

1
Ofer
5mo
I think the general question about voting norms w.r.t. conflicts of interest--and the more specific questions that are relevant here--are important and very hard, and I don’t think I currently have good/well-thought-through answers. My current, tentative perspective/intuition/feelings on this is something like:

In the Guide to norms on the Forum (that is linked to from the about page) there is a section called Voting norm. It says:

But then the guide lists things that users should not do, followed by the sentence "Other than that, you can vote using your preferred criteria." That list of things that users should not do does not seem to cover things like [casting votes that promote international legitimacy for the actions of my government in a deadly conflict that I have extreme emotions about].

Take for example this post titled "I'm a Former Israeli Officer. AMA". It seems to me reasonable to describe what the author did in that AMA as an attempt to promote 'pro-Israel propaganda'. So far the author never wrote anything on this forum (from that account) outside that AMA. That post currently has 64 karma points. Should Israelis feel welcome to strong-upvote such posts?

Finally, another relevant consideration: If we ask people to (not) vote in a particular way, but then we do not enforce that request, we can end up in a situation where only some users--ones that are more scrupulous than others--adhere to the request.

Another group that naturally could be in a coalition with those 2 – parents who just want clean air for their children to breathe from a pollution perspective, unrelated to covid. (In principle, I think many ordinary adults should also want clean air for themselves to breathe due to the health benefits, but in practice I expect a much stronger reaction from parents who want to protect their children's lungs.)

My problem with the post wasn't that it used subpar prose or "could be written better", it's that it uses rhetorical techniques that make actual exchange of ideas and truth-seeking harder. This isn't about "argument style points", it's about cultivating norms in the community that make it easier for us to converge on truth, even on hard topics.

The reason I didn't personally engage with the object level is I didn't feel like I had anything particularly valuable to say on the topic. I didn't avoid saying my object-level views (if he had written a similar post with a style I didn't take issue with, I wouldn't have responded at all), and I don't want other people in the community to avoid engaging with the ideas either.

3
alexherwix
6mo
Hey Daniel, as I also stated in another reply to Nick, I didn’t really mean to diminish the point you raised but to highlight that this is really more of a „meta point“ that’s only tangential to the matter of the issue outlined. My critical reaction was not meant to be against you or the point you raised but the more general community practice / trend of focusing on those points at the expense of engaging the subject matter itself, in particular, when the topic is against mainstream thinking. This I think is somewhat demonstrated by the fact that your comment is by far the most upvoted on an issue that would have far reaching implications if accepted as having some merit. Hope this makes it clearer. Don’t mean to criticize the object level of your argument, it’s just coincidental that I picked out your comment to illustrate a problematic development that I see. P.S.: There is also some irony in me posting a meta critique of a meta critique to argue for more object level engagement but that’s life I guess.

I feel like this post is doing something I really don't like, which I'd categorize as something like "instead of trying to persuade with arguments, using rhetorical tricks to define terms in such a way that the other side is stuck defending a loaded concept and has an unjustified uphill battle."

For instance:

let us be clear: hiding your beliefs, in ways that predictably leads people to believe false things, is lying. This is the case regardless of your intentions, and regardless of how it feels.

I mean, no, that's just not how the term is usually used. It's ... (read more)

-3
alexherwix
6mo
Isn’t your point a little bit pedantic here in the sense that you seem to be perfectly able to understand the key point the post was trying to make, find that point somewhat objectionable or controversial, and thus point to some issues regarding „framing“ rather than really engage deeply with the key points? Of course, every post could be better written, more thoughtful, etc. but let’s be honest, we are here to make progress on important issues and not to win „argument style points.“

In particular, I find it disturbing that this technique of criticizing style of argument seems to be used quite often to discredit or not engage with „niche“ viewpoints that criticize prevailing „mainstream“ opinions in the EA community. Happened to me as well, when I was suggesting we should look more into whether there are maybe alternatives to purely for-profit/closed-source-driven business models for AI ventures. Some people were bending over backwards to argue some concerns that were only tangentially related to the proposal I made (e.g., government can't be trusted and is incompetent so anything involving regulation could never ever work, etc.). Another case was a post on engagement with "post growth" concepts. There I witnessed something like a wholesale character assassination of the post growth community for whatever reasons. Not saying this happened here but I am simply trying to show a pattern of dismissal of niche viewpoints for spurious, tangential reasons without really engaging with them.

Altogether, wouldn’t it be more productive to have more open minded discussions and practice more of what we preach to the normies out there ourselves (e.g., steel-manning instead of straw-manning)? Critiquing style is fine and has its place but maybe let’s do substance first and style second?

That's fair. I also don't think simply putting a post on the forum is in itself enough to constitute a group being an EA group.

1
MysteryMeat
7mo
It's not, I just see a lot of association, especially in negative news about them, and they keep talking about longtermism

I don't think that's enough to consider an org an EA org. Specifically, if that was all it took for an org to be considered an EA org, I'd worry about how it could be abused by anyone who wanted to get an EA stamp of approval (which might have been what happened here – note that post is the founders' only post on the forum).

3
Jeff Kaufman
7mo
Maybe I'm being too nitpicky, but I think "EA org" is usually used in a stronger sense than "EA group"? I interpret the latter as more like "a group of EAs", at which point I think we're arguing whether pronatalist.org folks count as EAs?

[Just commenting on the part you copied]

Feels way too overconfident. Would the cultures diverge due to communication constraints? Seems likely, though also I could imagine pathways by which it wouldn't happen significantly, such as if a singleton was already reached.

Would technological development diverge significantly, conditional on the above? Not necessarily, imho. If we don't have a self-sufficient colony on Mars before we reach "technological maturity" (e.g., with APM and ASI), then presumably no (tech would hardly progress further at all, then).

Would... (read more)

1
flandry19
7mo
Maybe there is a simpler way to state the idea:

  1. Would the two (planetary) cultures diverge? (Yes, for a variety of easy reasons.)
  2. Would this divergence become more significant over time? (Yes, as at least some of any differences will inherently be amplified by multiple factors on multiple levels of multiple types of process for multiple reasons over multiple hundreds to thousands of years, and differences in any one planetary cultural/functional aspect tend to create and become entangled with differences in multiple other cultural/functional aspects.)
  3. Would the degree of divergence, over time, eventually become significant – i.e., in the sense that it results in some sort of first-strike game-theory dynamic? (Yes, insofar as cultural development differences cannot not also be fully entangled with technological developmental differences.)

So then the question becomes: "is it even possible to maybe somehow constrain any or all of these three process factors to at least the minimum degree necessary so as to adequately prevent that factor, and thus the overall sequence, from occurring?" In regards to this last question, after a lot of varied simplifications, it becomes eventually and finally equivalent to asking: "can any type of inter-planetary linear causative process (which is itself constrained by speed-of-light latency limits) ever fully constrain (to at least the minimum degree necessary) all types of non-linear local (i.e., intra-planetary) causative process?" And the answer to this last question is simply "no", for basic principled reasons.
1
Remmelt
7mo
I can see how the “for sure” makes it look overconfident. Suggest reading the linked-to post. That addresses most of your questions. As to your idea of having some artificial super-intelligent singleton lead to some kind of alignment between, or technological maturity of, both planetary cultures, if that’s what you meant, please see here: https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

I was also surprised by how highly the EMH post was received, but for a completely different reason – the fact that markets aren't expecting AGI in the next few decades seems unbelievably obvious, even before we look at interest rates. If markets were expecting AGI, AI stocks would presumably be much further "to the moon" than they are now (at least compared to non-AI stocks), and market analysts would presumably (at least occasionally) cite the possibility of AGI as the reason why. But we weren't seeing any of that, and we already knew from just general observati... (read more)

I think it's important to verify theories that seem obvious by thinking about precise predictions the theories make. The AI and EMH post attempts to analyze precise predictions made by the theory that "the market doesn't expect TAI soon", and for that reason I think the post makes a valuable contribution.

That said, it is still unclear to me whether interest rates will actually rise as investors realize the potential for TAI. If news of TAI causes investors to become more optimistic about investment, potentially because of the promise of higher lifespans, o... (read more)
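For readers who want the formal version of the interest-rate claim being debated here, a rough sketch under the standard Ramsey framework (the symbols below are illustrative and not taken from either comment):

r ≈ δ + η · g

where r is the real interest rate, δ is pure time preference, η is the elasticity of marginal utility of consumption, and g is expected consumption growth. If markets priced in TAI pushing g far above historical levels, r would be expected to rise sharply; the comment above questions whether that channel would dominate other effects in practice.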

Did Eric Drexler not describe ideas like this in Engines of Creation? Either way, I would guess that Drexler has thought of similar ideas before (sans the phrase "diamondoid bacteria") and has also likely communicated these ideas to Eliezer (albeit perhaps in an informal context). Though it's also possible Eliezer came up with it independently, as it seems like a relatively natural idea to consider once you already assume diamondoid mechanosynthesis creating atomically-precise nanobots.

I don't think this is a fair representation of what happened. The only thing that Eliezer appears to have "made up" is the name "diamondoid bacteria" – the idea of diamondoid mechanosynthesis doesn't come from him, nor does the idea of applying diamondoid mechanosynthesis to building nanorobots, nor that bacteria provide a proof-of-concept for the possibility of nanorobots, nor that artificial nanorobots may share some resemblance to bacteria. Eliezer also doesn't claim to have come up with any of these ideas. You can debate the merits of any of these idea... (read more)

4
Stuart Buck
7mo
Fair point, and I rephrased to be more clear on what I meant to say--that the scenario here is mostly science fiction (it's not as if GPT5 is turned on, diamondoid bacteria appear out of nowhere, and we all drop dead). 
4
Muireall
7mo
I do think "diamondoid bacteria, that replicate with solar power and atmospheric CHON" from List of Lethalities is original to Eliezer. He's previously cited Nanomedicine in this context, but the parts published online so far don't describe self-replicating systems. Edit: This is wrong—see Lumpyproletariat below.

Granted, in principle you could also have a situation where they're less cautious than management but more cautious than policymakers and it winds up being net positive, though I think that situation is pretty unlikely. Agree the consideration you raised is worth paying attention to.

I think there are a few reasons why this is very unlikely to work in practice, at least in society today (maybe it would work if most people were longtermists of some sort):

  • Focusing on such far-off issues would be seen as "silly," and would thereby hurt the political careers of those who focused on them
  • Political inertia is generally strong, meaning that, by default, most things in politics don't happen, even if there's slightly more support than opposition. Here, you'd be shifting the policy proposal to one that wouldn't affect the lives of constituents, a
... (read more)
1
FCCC
7mo
Interesting points. 100 years is unnecessarily long, it just simplified some of my arguments (every politician being dead, for instance). If it were, say, 50 years, the arguments still roughly hold. Then it becomes something that people do for their children, and not something for “the unborn children of my unborn children” which doesn't seem real to people (even though it is). I think this probably solves the silliness issue, and the constituency issue. But I also think it might seem silly because no one has done it before. In December, putting a tree in your house and covering it with lights doesn't seem silly because it's something that everyone does. The first successful instance of this will be much harder than every other attempt. Politicians who only advocated for these policies would seem silly, because current issues also matter. So I'm not suggesting that, just that it plays a part in their overall policy portfolio. And normally when policies are passed, several go through at once. If no one else cares about what happens in 50 years time, they have a chance of slipping by. So my question is, why not try it on something uncontroversial that has a short-term sticking point? What do you gain from not seeing if this works?

A union for AI workers such as data scientists, hardware and software engineers could organise labor to counterbalance the influence of shareholders or political masters.

 

It's not obvious to me that AI workers would want a more cautious approach than AI shareholders, AI bosses, and so on. Whether or not this would be the case seems to me to be the main crux behind whether this would be net positive or net harmful.

1
dEAsign
7mo
I had explicitly considered this in drafting, and whether to state that crux. If so, it could be an empirical question of whether there is greater support from the workers or management, or receptiveness to change. I did not because I now think the question is not whether AI workers are more cautious than AI shareholders, but whether AI firms where unionised AI workers negotiate with AI shareholders would be more cautious. To answer that question, I think so.

Edit: to summarise, the question is not whether unions (in isolation) would be more cautious, but whether a system of management (and policymakers) bargaining with a union would be more cautious – and yes, it probably would
5
Larks
7mo
Even if they were slightly more cautious than management, if they were less cautious than policymakers it could still be net negative due to unions' lobbying abilities.

Do people actually think that Google+OpenAI+Anthropic (for sake of argument) are the US? Do they think the US government/military can/will appropriate those staff/artefacts/resources at some point?

 

I'm pretty sure what most (educated) people think is they are part of the US (in the sense that they are "US entities", among other things), that they will pay taxes in the US, will hire more people in the US than China (at least relative to if they were Chinese entities), will create other economic and technological spillover effects in greater amount in t... (read more)

In my mind there are 2 main differences:

  1. Economic degrowth is undesirable, vs pausing AI is at least arguably desirable – climate change is very unlikely to lead to a literal existential catastrophe, "business as usual" tech improvements and policy changes (i.e., without overthrowing capitalism) will likely lead to a clean energy transition as is, economic degrowth would probably kill many more people than it would save, etc. Meanwhile, AI presents large existential risk in my mind, and I think a pause would probably lower this risk by a non-negligible amou
... (read more)
1
mikbp
7mo
The issue is not only climate change, here. We are in dangerous territory for most of the planetary boundaries. One of the points is that EAs do not seem to engage with the large, close-to-existential risks in the minds of degrowthers and the like. It is true that they have not fleshed out to what extent their fears are existential, but this is because the risks are large enough to worry them. See the post ""Is this risk actually existential?" may be less important than we think". I like your second point. But still, even if it is less politically feasible, as you say, if the risk is large enough EA should be in favour of degrowth. My point is that very little effort has been made to address this "if".

So I notice Fox ranks pretty low on that list, but if you click through to the link, they rank very high among Republicans (second only to the Weather Channel). Fox definitely uses rhetoric like that. After Fox (among Republicans) are Newsmax and OAN, which similarly both use rhetoric like that. (And FWIW, I also wouldn't be super surprised to see somewhat similar rhetoric from WSJ or Forbes, though probably said less bluntly.)

I'd also note that the left-leaning media uses somewhat similar rhetoric for conservative issues that are supported by large groups (e.g., Trumpism in general, climate denialism, etc.), so it's not just a one-directional phenomenon.

7
James Herbert
7mo
Yes, I noticed that. Certain news organisations, which are trusted by an important subsection of the US population, often characterise progressive movements as uninformed mobs. That is clear. But if you define 'reputable' as 'those organisations most trusted by the general public', which seems like a reasonable definition, then, based on the YouGov analysis, Fox et al. is not reputable. But then maybe YouGov's method is flawed? That's plausible. But we've fallen into a bit of a digression here. As I see it, there are four cruxes:

  1. Does a focus on the inside game make us vulnerable to the criticism that we're a part of a conspiracy? For me, yes.
  2. Does this have the potential to undermine our efforts? For me, yes.
  3. If we reallocate (to some degree) towards the outside game in an effort to hedge against this risk, are we likely to be labelled an uninformed mob, and thus undermine our efforts? For me, no, not anytime soon (although, as you state, organisations such as Fox will do this before organisations such as PBS, and Fox is trusted by an important subsection of the US population).
  4. Is it unquestionably OK to try to guide society without broader societal participation? For me, no.

I think our biggest disagreement is with 3. I think it's possible to undermine our efforts by acting in such a way that organisations such as Fox characterise us as an uninformed mob. However, I think we're a long, long way from that happening. You seem to think we're much closer, is that correct? Could you explain why? I don't know where you stand on 4.

P.S. I'm enjoying this discussion, thanks for taking the time!

I don't recall seeing many reputable publications label large-scale progressive movements (e.g., BLM, Extinction Rebellion, or #MeToo) as "uninformed mobs"

 

So progressive causes will generally be portrayed positively by progressive-leaning media, but conservative-leaning media, meanwhile, has definitely portrayed all those movements as ~mobs (especially for BLM and Extinction Rebellion), and predecessor movements, such as for Civil Rights, were likewise often portrayed as mobs by detractors. Now, maybe you don't personally find conservative media to b... (read more)

1
James Herbert
7mo
For sure progressive publications will be more positive, and I don't think conservative media ≠ reputable.  When I say "reputable publications" I am referring to the organisations at the top of this list of the most trusted news outlets in the US. My impression is that very few of these regularly characterise the aforementioned movements as "uninformed mobs". 
2
David Mathers
7mo
I suspect the ideology of Politico and most EAs are not that different (i.e. technocratic liberal centrism). 
5
Shakeel Hashim
7mo
Yeah, the phrase "woke mob" (and similar) is extremely common in conservative media!

the piece has an underlying narrative of a covert group exercising undue influence over the government

My honest perspective is if you're a lone individual affecting policy, detractors will call you a wannabe-tyrant, if you're a small group, they'll call you a conspiracy, and if you're a large group, they'll call you an uninformed mob. Regardless, your political opponents will attempt to paint your efforts as illegitimate, and while certain lines of criticism may be more effective than others, I wouldn't expect scrutiny to simply focus on the substance eit... (read more)

6
James Herbert
7mo
Maybe I'm in a bubble, but I don't recall seeing many reputable publications label large-scale progressive movements (e.g., BLM, Extinction Rebellion, or #MeToo) as "uninformed mobs". This article from the Daily Mail is about as close as it gets, but I think I'd rather have the Daily Mail writing about a wild What We Ourselves party than Politico insinuating a conspiracy.

Ultimately, I don't think any of us know the optimal split in a social change portfolio between the outside game and the inside game, so perhaps we should adapt as the criticism comes in. If we get a few articles insinuating conspiracy, maybe we should reallocate towards the outside game, and vice versa.

And again, I know I sound like a broken record, but there's also the issue of how appropriate it is for us to try to guide society without broader participation.

either because they see the specific interventions as a risk to their work, or because they feel policy is being influenced in a major way by people who are misguided

or because they feel it's a threat to their identity or self-image (I expect these to be even larger pain points than the two you identified)

"how well resourced scientific research institutes are and thus able to get organs that they need for research."
Hmm, are they allowed to buy organs, though? Otherwise, the fact that they're well resourced might not matter much for their access to organs.

My guess is this is mostly just a product of success, and insofar as the political system increasingly takes AI X-risk seriously, we should expect to see stuff like this from time to time. If the tables were flipped and Sunak was instead pooh-poohing AI X-risk and saying things like "the safest path forward for AI is accelerating progress as fast as we can – slowing down would be Luddism" then I wouldn't be surprised to see articles saying "How Silicon Valley accelerationists are shaping Rishi Sunak’s AI plans". Doesn't mean we should ignore the negative p... (read more)

1
James Herbert
7mo
Hmm, I agree that with influence comes increased scrutiny, and the trade-off is worth it in many cases, but I think there are various angles this scrutiny might come from, and I think this is a particularly bad one.  Why? Maybe I'm being overly sensitive but, to me, the piece has an underlying narrative of a covert group exercising undue influence over the government. If we had more of an outside game, I would expect the scrutiny to instead focus on either the substance of the issue or on the outside game actors. Either would probably be an improvement.  Furthermore, there's still the very important issue of how appropriate it is for us to try to guide society without broader societal participation.
4
Sean_o_h
7mo
+1; except that I would say we should expect to see more, and more high-profile. AI xrisk is now moving from "weird idea that some academics and oddballs buy into" to "topic which is influencing and motivating significant policy interventions", including on things that will meaningfully matter to people/groups/companies if put into action (e.g. licensing, potential restriction of open-sourcing, external oversight bodies, compute monitoring etc). The former, for a lot of people (e.g. folks in AI/CS who didn't 'buy' xrisk) was a minor annoyance. The latter is something that will concern them - either because they see the specific interventions as a risk to their work, or because they feel policy is being influenced in a major way by people who are misguided. I would think it's reasonable to anticipate more of this.

Speaking personally, I think there is a possibility of money becoming obsolete, but I also think there's a possibility of money mattering more, as (for instance) AI might allow for an easier ability to turn money into valuable labor. In my mind, it's hard to know how this all shakes out on net.

I think there are reasons for expecting the value of spending to be approximately logarithmic with total spending for many domains, and spending on research seems to fit this general pattern pretty well, so I suspect that it's prudent to generally plan to spread spen... (read more)
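A rough sketch of the logarithmic-value intuition (the symbols below are illustrative, not taken from the comment): if the value of spending S in a given period or domain is

V(S) = k · ln(S), so the marginal value is V′(S) = k / S,

then each additional dollar is worth less wherever spending is already concentrated, and by the concavity of the logarithm, ln(x) + ln(B − x) is maximized at x = B/2 – the formal sense in which spreading a fixed budget B across periods or domains tends to beat concentrating it.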

Annoy away – it's a good question! Of course, standard caveats to my answer apply, but there's a few caveats in particular that I want to flag:

  • It's possible that by 2028 there will be one (or more) further longtermist billionaires who really open up the spigot, significantly decreasing the value of marginal longtermist money at that time
  • It's possible that by 2028, AI would have gotten "weird" in ways that affect the value of money at that time, even if we haven't reached AGI (e.g., certain tech stocks might have skyrocketed by then, or it might be possible
... (read more)
3
porby
8mo
Thanks for breaking down details! That's very helpful. (And thanks to Lauro too!)

So I'm imagining, for instance, AGIs with some shards that care about human ~autonomy, but also other (stronger) shards that care about (say) paperclips (also this was just meant as an example). I was also thinking that this might be what "a small population in a 'zoo'" would look like – the Milky Way is small compared to the reachable universe! (Though before writing out my response, I almost wrote it as "our solar system" instead of "the Milky Way," so I was imagining a relatively expansive set within this category; I'm not sure if distorted "pet" versions of humans would qualify or not.)

2
Greg_Colbourn
8mo
Why wouldn't the stronger shards just overpower the weaker shards?

FWIW, I think specific changes here are unlikely to be cruxy for the decisions we make.

[Edited to add: I think if we could know with certainty that AGI was coming in 202X for a specific X, then that would be decision-relevant for certain decisions we'd face. But a shift of a few years for the 10% mark seems less decision relevant]

2
Greg_Colbourn
8mo
I think it's super decision-relevant if the shift leads you to 10%(+) in 2023 or 2024. Basically I think we can no longer rely on having enough time for alignment research to bear fruit, so we should be shifting the bulk of resources toward directly buying more time (i.e. pushing for a global moratorium on AGI).

Another reason that the higher funding bar is likely increasing delays – borderline decisions are higher stakes, as we're deciding between higher EV grants. It seems to me like this is leading to more deliberation per grant, for instance.

Presumably this will differ a fair bit for different members of the LTFF, but speaking personally, my p(doom) is around 30%,[1] and my median timelines are ~15 years (though with high uncertainty). I haven't thought as much about 10% timelines, but it would be some single-digit number of years.

  1. ^

    Though a large chunk of the remainder includes outcomes that are much "better" than today but which are also very suboptimal – e.g., due to "good-enough" alignment + ~shard theory + etc, AI turns most of the reachable universe into paperclips but leaves humans

... (read more)
1
Greg_Colbourn
8mo
Please keep this in mind in your grantmaking.
2
Greg_Colbourn
8mo
Interesting that you give significant weight to non-extinction existential catastrophes (such as the AI leaving us the Milky Way). By what mechanism would that happen? Naively, all or (especially) nothing seem much more likely. It doesn't seem like we'd have much bargaining power with not perfectly-aligned ASI. If it's something analogous to us preserving other species, then I'm not optimistic that we'd get anything close to a flourishing civilisation confined to one galaxy. A small population in a "zoo"; or grossly distorted "pet" versions of humans; or merely being kept, overwhelmingly inactive, in digital storage, seem more likely.

Personally, I'd like to see more work being done to make it easier for people to get into AI alignment without becoming involved in EA or the rationality community. I think there are lots of researchers, particularly in academia, who would potentially work on alignment but who for one reason or another either get rubbed the wrong way by EA/rationality or just don't vibe with it. And I think we're missing out on a lot of these people's contributions.

To be clear, I personally think EA and rationality are great, and I hope EA/rationality continue to be on-ram... (read more)

Hmm, I think most of these grants were made when EA had much more money

I think that's true, but I also notice that I tend to vote lower on bio-related grants than do others on the fund, so I suspect there's still somewhat of a strategic difference of opinion between me and the fund average on that point.

2
Linch
8mo
Yeah I tend to have higher uncertainty/a flatter prior about the EV of different things compared to many folks in a similar position; it's also possible I haven't sufficiently calibrated to the new funding environment.

To add to what Linch said, anecdotally, it seems like there are more disagreements when the path to impact of the grant is less direct (as opposed to, say, AI technical research), such as with certain types of governance work, outreach, or forecasting.

In my personal opinion, the LTFF has historically funded too many bio-related grants and hasn't sufficiently triaged in favor of AI-related work.

2
calebp
8mo
Hmm, I think most of these grants were made when EA had much more money (pre-FTX crash), which made funding bio work much more reasonable than funding bio work rn, by my lights. I think on the current margin, we probably should fund stellar bio work. Also, I want to note that talking negatively about specific applications might be seen as "punching down" or make applying to the LTFF higher risk than an applicant could have reasonably thought so fund managers may be unwilling to give concrete answers here.

[personal observations, could be off]

I want to add that the number tends to be higher for grants that are closer to the funding threshold or where the grant is a "bigger deal" to get right (eg larger, more potential for both upside and downside) than for those that are more obvious yes/no or where getting the decision wrong seems lower cost.

I think the idea of having AI safety conferences makes sense, but I think it would be a pretty bad idea for these conferences to be subsidized by industry. Insofar as we want to work with industry on AI safety related stuff, I think there's a lot of other stuff ahead of conferences that both: a) industry would be more excited about subsidizing, and b) I'd worry less about the COI leading to bad effects. (For instance, industry subsidies for mechanistic interpretability research.)

IDK 160% annualized sounds a bit implausible. Surely in that world someone would be acting differently (e.g. recurring donors would roll some budget forward or take out a loan)?

Presumably the first step towards someone acting differently would be the LTFF/EAIF (perhaps somewhat desperately) alerting potential donors about the situation, which is exactly what's happening now, with this post and a few others that have recently been posted.

I would be curious to hear from someone on the recipient side who would genuinely prefer $10k in hand to $14k in three mo

... (read more)

[Speaking in my personal capacity, not on behalf of the LTFF] I am also strongly in favor of there being an AI safety specific fund, but this is mostly unrelated to recent negative press for longtermism. My reasons for support are (primarily): a) people who aren't EAs (and might not even know about longtermism) are starting to care a lot more about AI safety, and many of them might donate to such a fund; and b) EAs (who may or may not be longtermists) may prioritize AI safety over other longtermist causes (eg biosafety), so an AI safety specific fund may fit better with their preferences.

3
quinn
8mo
it's true that the correlation between framings of the problem socially overlapping with longtermism and longtermism could be made spurious! there's a lot of bells and whistles on longtermism that don't need to be there, especially for the 99% of what needs to be done in which fingerprints never come up. 

Hmm, I was thinking not that the benefits of the first one would be higher, but that people will more likely underestimate the benefits before they go to the first one.

5
Nathan Young
8mo
Seems fixable in the same way, right?

Thinking this through for a minute, it seems like the obvious answer would be: let people choose either (say) $500 ticket price or $1,000, and also have a note saying "If your annual income is above XYZ, we would like to ask you to choose $1,000, though this will be done on an honor-code basis. If your income is below XYZ, feel free to choose either option" (or something like that)

9
Eli_Nathan
8mo
Yep — this is basically my preferred option right now for what we should do (transparent options/honor-code basis).

on the other end of the spectrum we may ask some people to pay for their ticket price in its entirety

 

I'm wondering how this is going to work logistically. Will CEA ask everyone to report their income, with anyone who isn't comfortable reporting it assumed to be rich enough that they have to pay the entire cost? That outcome feels like it would be invasive. Or is CEA instead going to just ask those who are publicly known to be well off to pay the entire cost? This would presumably only raise a small amount of money, and I'd worry that it would make the people it applied to feel like they were somewhat arbitrarily being nickel-and-dimed and rub them the wrong way.

Thinking this through for a minute, it seems like the obvious answer would be: let people choose either (say) $500 ticket price or $1,000, and also have a note saying "If your annual income is above XYZ, we would like to ask you to choose $1,000, though this will be done on an honor-code basis. If your income is below XYZ, feel free to choose either option" (or something like that)

Renting out nearby restaurants seems like plausibly a good idea, though, a) that might also be quite expensive, so I'm not sure we'd actually save on costs, and b) the logistical overhead on figuring that out could be large.

5
Rebecca
8mo
I took ‘reserve’ to mean ‘book reservations at’, which is usually free, though may require a deposit for bookings above a certain size?

We’re still working out the details, but we also expect to raise default ticket prices. That is, instead of people paying $200 for a ticket that costs us $1500, they’ll be paying maybe $500 for a ticket that costs us $1000.

I’m concerned this change will contribute to making EA more insular by raising the costs of becoming engaged with the community. $200 instead of free means people only go if they’re serious, but $500 feels like a real chunk of money, and may be particularly hard to justify for people who aren't already highly engaged.

One possible remedy here would be to also give discounted tickets to first-timers.

6
Nathan Young
8mo
Feels like there could be some way to offer subsidy here. But it does seem good for the costs of things to be properly subsidised. If the benefit of one's first EAG is higher then maybe there could be a first EAG subsidy.

I share your concern, but I think it would be more than offset by the competition with EAGx. My understanding from the dashboard, and from personal experience, is that EAGx has roughly the same value as EAG for first-timers (and many/most non first-timers) while being ~3x more cost-efficient. (If you went to both, would you rather pay $2k to go to an EAG or $600 to go to an EAGx as a first timer?)

I think this is good also because EAGx events are mostly organized by different community members and national groups' staff, so different groups could try different stra... (read more)

For example, with $2k, I expect I could hire a pub in central London for an evening (or maybe a whole day), with perhaps around 100 people attending. So that's $20 per person, or 1% of the cost of EAG. Would they get as much benefit from attending my event as attending EAG? No, but I'd bet they'd get more than 1% of the benefit. 

 

Worth noting these aren't necessarily mutually exclusive. It's possible both running EAGs and running these smaller events are above current funding bars.

Finally, I think that this surely tells you something about the participation of women in the field.

It presumably tells you something about the participation of women in the field, but it's not clear exactly what. For instance, my honest reaction to this list is that several of the people on it have a habit of churning out lots of papers of mediocre quality – it could be that this trait is more common among men in the field than among women in the field.

9
FJehn
8mo
This is just another data point that the existential risk field (like most EA-adjacent communities) has a problem when it comes to gender representation. It fits really well with other evidence we have. See, for example, Gideon's comment under this post here: https://forum.effectivealtruism.org/posts/QA9qefK7CbzBfRczY/the-25-researchers-who-have-published-the-largest-number-of?commentId=vt36xGasCctMecwgi

While on the other hand there seems to be no evidence for your "men just publish more, but worse papers" hypothesis.

In principle, the zero point is supposed to signify equivalent to burning the money, and negative signifies net-negative EV (neglecting financial cost of the grant). In practice, speaking personally, if I weakly think a grant is a bit net negative, but it's not particularly worrying nor something I feel confident about, I usually give it a score that's well below the funding threshold, but still positive (so that if other grantmakers are more confidently in favor of the grant, they can more likely outvote me here). If I were to confidently believe that a grant was of zero net value, I would give it a vote of zero.

6
Linch
8mo
I personally give a negative value and (when I have low certainty) flag that I'm willing to change/delete my votes if other people feel strongly, so as to not unduly tank the results. I think LTFF briefly experimented with weighted voting in the past but we've moved against it (I forgot why). 