All of D0TheMath's Comments + Replies

D0TheMath
2
0
0
80% disagree

Giving meaningful advance notice of a post that is critical of an EA person or organization should be

I want to lower frictions to criticism as much as possible, because I think criticism is very good. 

The main argument against that I've seen is that an org won't be able to meaningfully respond given the pace at which things move on the forum. This sounds like a UI issue. No need to create a harmful community norm.

(e.g. blocking oil depots seems to have comparable effects to throwing soup), although the analysis here is still to be finalised.

Interested to hear more, but I would not expect blocking oil depots to be effective either. Why would it be? It may be related, but it's not so compelling to the average observer. Compare with the example I used, of sit-ins, which are eminently compelling. If you compare ineffective strategies with ineffective strategies, you will pick up noise and low-order effects.

Specifically, I think there are some random factors around luck, p

... (read more)
6
JamesÖz 🔸
I mean there are probably a bunch of protests that you don't think make sense that had positive impacts (see some here), but specifically I would point to Extinction Rebellion blocking roads about climate or Just Stop Oil doing something similar. I may be being obtuse, but are you implying that Extinction Rebellion was a fluke? If so, I don't agree with that! My view is that the founders had a pretty good design and plan, based on historical context and research, and with enough attempts, they managed to start something at the right time.

I think this post is a bit too humble. The social movements that worked had reasons they worked. The structure of the problem, the allies they were likely to find, and the enemies they were likely to have resulted in the particular strategies they chose working. Similarly for the social movements which failed. These are reasons you can & should learn from, and your ability to look at those reasons is the largest order effect here.

Most movements don't; they do what you describe: choose their favorite movement and cargo-cult their way to failure.

The mos... (read more)

3
David Mathers🔸
"Throwing soup at van gogh paintings have none of these attributes, so it is counter-productive." What's the evidence it was counterproductive? 

The social movements that worked had reasons they worked. The structure of the problem, the allies they were likely to find, and the enemies they were likely to have resulted in the particular strategies they chose working. Similarly for the social movements which failed. These are reasons you can & should learn from, and your ability to look at those reasons is the largest order effect here.

I take the point about being too humble but I'm not sure I fully agree with this bit above! Specifically, I think there are some random factors around luck, person... (read more)

I will note that my comment made no reference to who is “more altruistic”. I don’t know what that term means personally, and I’d rather not get into a semantics argument.

If you give the definition you have in mind, then we can argue over whether it's smart to advocate that someone ought to be more altruistic in various situations, and whether it gets at intuitive notions of credit assignment.

I will also note that given the situation, it's not clear to me Anna's proper counterfactual here isn't making $1M and getting nice marketable skills, since she and Beli... (read more)

I think the right stance here is a question of "should EA be praising such people or get annoyed they're not giving up more, if it wants to keep a sufficient filter for who it calls true believers", and the answer there is obviously that both groups are great & true believers, and it seems dumb to get annoyed at either.

The 10% number was notably chosen for these practical reasons (there is nothing magic about that number), and to back-justify that decision with bad moral philosophy about “discharge of moral duty” is absurd.

I'm not going to defend my whole view here, but I want to give a thought experiment as to why I don't think that "shadow donations"—the delta between what you could earn if you were income-maximizing, and what you're actually earning in your direct work job—are a great measure for the purposes of practical philosophy (though I agree they're both a relevant consideration and a genuine sacrifice).

Imagine two twins, Anna and Belinda. Both have just graduated with identical grades, skills, degrees, etc. Anna goes directly from college to work on AI safety at Safet... (read more)

5
Jason
I didn't read Cullen's comment as about 10%, and I think almost all of us would agree that this isn't a magic number. Most would probably agree that it is too demanding for some and not demanding enough for others. I also don't see anything in Cullen's response about whether we should throw shade at people for not being generous enough or label them as not "true believers." Rather, Cullen commented on "donation expectations" grounded in "a practical moral philosophy." They wrote about measuring an "obligation to donate."  You may think that's "bad moral philosophy," but there's no evidence of it being a post hoc rationalization of a 10% or other community giving norm here.

It's relatively common (I don't know about rates) for such people to take pay cuts rather than directly donate that percentage. I know some who could be making millions a year who are actually making hundreds. It makes sense they don't feel the need to donate anything additional on top of that!

It's not clear to me whether you're talking about people who (a) do a voluntary salary sacrifice while working at an EA org, or (b) people who could have earned much more in industry but moved to a nonprofit so now earn much less than their hypothetical maximum earning potential.

In case (a), yes, their salary sacrifice should count towards their real donations.

But I think a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justif... (read more)

There's already been much critique of your argument here, but I will just say that by the "level of influence" metric, Daniela shoots it out of the park compared to Donald Trump. I think it is entirely uncontroversial, and perhaps an understatement, to claim that the world as a whole and EA in particular have a right to know & discuss pretty much every fact about the personal, professional, social, and philosophical lives of the group of people who, by their own admission, are literally creating God. And are likely to be elevated to a permanent place of power ... (read more)

When you start talking about Silicon Valley in particular, you start getting confounders like AI, which has a high chance of killing everyone. But if we condition on that going well, or assume the relevant people won't be working on that, then yes, that does seem like a useful activity, though note that Silicon Valley activities are not very neglected, and you can certainly do better than them by pushing EA money (not necessarily people[1]) into the research areas which are more prone to market failures or are otherwise too "weird" for others to believe in.

O... (read more)

This seems pretty unlikely to me, tbh; people are just less productive in the developing world than in the developed world, and it's much easier to do stuff--including do good--when you have functioning institutions, are surrounded by competent people, and have connections & support structures, etc.

That's not to say sending people to the developed world is bad. Note that you can get lots of the benefits of living in a developed country by simply having the right to live in a developed country, or having your support structure or legal system or credentials based in ... (read more)

1
WillieG
If I can pull this thread...you previously wrote, "Maybe I have more faith in the market here than you do, but I do think that technical & scientific & economic advancement do in fact have a tendency to not only make everywhere better, but permanently so." In your opinion, is this an argument in favor of prioritizing pushing both EA money and people into communities like SV that are high-impact in terms of technological advancement?

I think it seems pretty evil & infantilizing to force people to stay in their home country because you think they'll do more good there. The most you should do is argue they'll do more good in their home country than in a Western country, then leave it up to them to decide.

I will furthermore claim that if you find yourself disagreeing, you should live in the lowest-quality-of-living country you can find, since clearly that is the best place to work in your own view.

Maybe I have more faith in the market here than you do, but I do think that technical &... (read more)

1
WillieG
From my rationality-married-with-emotion-and-values human brain, I agree with you. Evil indeed.  That said, I can see a dystopian future where Hyper-Rationalist Bot makes all decisions, and decides that "the greatest good for the greatest number" is best served by keeping human capital in the developing world, using the EA logic that capital in the developing world creates more utility than the same capital in the developed world. (In fact, HRB thinks we should send capable people from the developed world to the developing world to accelerate utility growth even more.)

This is not a discussion about anyone forcing anyone to do anything (no one has suggested that), but the original question was about the degree to which we should potentially fund and support the best workers in our orgs to emigrate. This is a hugely important question, because from experience in Uganda, with enough time and resources I could probably help almost any highly qualified and capable person to emigrate, but is that really the best thing for me to do?

As things stand every country in the world has huge restrictions on emigration, which does often "force" pe... (read more)

This is not the central threat, but if you did want a mechanism, I recommend looking into the Krebs cycle.

1
Caruso
Interesting suggestion. Thank you!

I do think this is correct to an extent, but also that much moral progress has been made by reflecting on our moral inconsistencies and smoothing them out. I at least value fairness, which is a complicated concept, but am also actively repulsed by the idea that those closer to me should weigh more in society's moral calculations. Other values I have, like family, convenience, selfish hedonism, friendship, etc., are at odds with this fairness value in many circumstances.

But I think it's still useful to connect the drowning child argument with the parts of me ... (read more)

3
David T
I think there's plenty of place for argument in moral reflection, but part of that argument includes accepting that things aren't necessarily "obvious" or "irrefutable" because they're intuitively appealing. Personally I think the drowning child experiment is pretty useful as thought experiments go, but human morality in practice is so complicated that even Peter Singer doesn't act consistently with it, and I don't think it's because he doesn't care.

Even if most aren't receptive to the argument, the argument may still be correct, in which case it's still valuable to argue for and write about.

I agree with you about the bad argumentation tactics of Situational Awareness, but not about the object level. That is, I think Leopold's arguments are both bad and false. I'd be interested in talking more about why they're false, and I'm also curious about why you think they're true.

4
terraform
I think some were false. For example, I don't get the stuff about mini-drones undermining nuclear deterrence, as size will constrain your batteries enough that you won't be able to do much of anything useful. Maybe I'm missing something (modulo nanotech).  I think it's very plausible scaling holds up, it's plausible AGI becomes a natsec matter, it's plausible it will affect nuclear deterrence (via other means), for example. What do you disagree with?

Otherwise I think that you are in part spending 80k's reputation in endorsing these organizations

Agree on this. For a long time I've had a very low opinion of 80k's epistemics[1] (both the podcast and the website), and having orgs like OpenAI and Meta on there was a big contributing factor[2].


  1. In particular that they try to both present as an authoritative source on strategic matters concerning job selection, while not doing the necessary homework to actually claim such status & using articles (and parts of articles) that empirically nobody reads &

... (read more)

The second two points don’t seem obviously correct to me.

First, the US already has a significant amount of food security, so it's unclear whether cultivated meat would actually add much.

Second, if cultivated meat destroys the animal agriculture industry, this could very easily lead to a net loss of jobs in the economy.

rationalist community kind of leans right wing on average

Seems false. It leans right compared to the extreme left wing, but right compared to the general population? No. It's too libertarian for that. I bet rightists would also say it leans left, and centrists would say it's too extreme. Overall, I think it's just classically libertarian.

There's much thought in finance about this. Some general books are:

  1. Options, Futures, and Other Derivatives

  2. Principles of Corporate Finance

And more particularly, The Black Swan: The Impact of the Highly Improbable, along with other stuff by Taleb (this is kind of his whole thing).

The same standards as applied to anything else: a decent track record of such experiments succeeding, and/or a well-supported argument based on (in this case) sound economics.

So far the track record is heavily against. Indeed, many of the worst calamities in history took the form of "revolution".

Absent that track record, you need one hell of an argument to explain why your plan is better, which at the minimum likely requires basing it on sound economics (which, if you want particular pointers, mostly means the Chicago school, but sufficiently good complexity economics would also be fine).

3
Friso
I don't feel like you are taking my question head-on. I'm asking you to envision something that could convince you about systems other than capitalism. Just saying successful experiments and well-based arguments feels like you are evading my question a little. Especially because it is not clear how such an experiment could succeed (as in, what result are you looking for?), or what kind of arguments would be convincing. What if a more just world would require that we end the state of global inequality, and that simply requires lowering the standard of living in much of the Global North? Or if justice requires that we end the extraction of much of the planet's resources? I assume an experiment that tells you some people might need to become much worse off for justice to occur will not convince you. I also don't understand why that would mostly mean the Chicago school. To change your mind on capitalism, you need a specific school to change their mind first? And not some school, but one that was 1) explicitly set up to defend laissez-faire capitalism, 2) largely relies on simplifying assumptions we simply know to be false (rational choice, perfect markets, etc.), 3) is incredibly controversial even for many mainstream economists (e.g. even Paul Krugman called it "the product of a Dark Age of macroeconomics in which hard-won knowledge has been forgotten") and for environmentalists especially?

It makes me sad that I automatically double timing estimates from EA orgs, treat that as the absolute minimum time something could take, and am often still disappointed.

I definitely strongly agree with this. I do think it's slowly, ever so slowly, getting better though.

More broadly I think Anthropic, like many, hasn’t come to final views on these topics and is working on developing views, probably with more information and talent than most alternatives by virtue of being a well-funded company.

It would be remiss not to also mention the large conflict of interest that analysts at Anthropic have when developing these views.

I do dislike this feature of EA, but I don't think the solution is to transition away from a one-grant-at-a-time model. Probably better would be to have exit coaches who help EAs find a new career outside EA, if they built up a bunch of skills because funding sources (or other generally EA-endorsed sources) told them they would be given money if they used such skills for the benefit of the universe.

What talents do you think aren't applicable outside the EAsphere?

(Edit: I do also note that I believe 80k should be taken a lot less seriously than they present themselves, and than most EAs take them. Their incorrect claims of EA being talent-constrained are one of many reasons I distrust them.)

Arepo
12
4
0
1

I'm not sure what the solution is - more experimentation seems generally like a good idea, but EA grantmakers seem quite conservative in the way they operate, at least once they've locked in a modus operandi.

For what it's worth, my instinct is to try a model with more 'grantmakers' who take a more active, product-managery/ownery role, where they make  fewer grants, but the grants are more like contracts of employment, such that the grantmakers take some responsibility for the ultimate output (and can terminate a contract like a normal employer if the '... (read more)

Recommendation: A collection of paradoxes dealing with Utilitarianism. This seems to me to be what you wrote, and would have had me come to the post with more of a "ooo! Fun philosophy discussion" rather than "well, that's a very strong claim… oh look at that, all so-called inconsistencies and irrationalities either deal with weird infinite ethics stuff or are things I can't understand. Time to be annoyed about how the headline is poorly argued for." The latter experience is not useful or fun; the former is nice depending on the day & company.

-1
Michael St Jules 🔸
Thanks for the feedback! I think your general point can still stand, but I do want to point out that the results here don't depend on actual infinities (infinite universe, infinitely long lives, infinite value), which is the domain of infinite ethics. We only need infinitely many possible outcomes and unbounded but finite value. My impression is that this is a less exotic/controversial domain (although I think an infinite universe shouldn't be controversial, and I'd guess our universe is infinite with probability >80%). Furthermore, impossibility results in infinite ethics are problematic for everyone with impartial intuitions, but the results here seem more problematic for utilitarianism in particular. You can keep Impartiality and Pareto and/or Separability in deterministic + unbounded but finite cases here, but when extending to uncertain cases, you wouldn't end up with utilitarianism, or you'd undermine utilitarianism in doing so. You can't extend both Impartiality and Pareto to infinite cases (allowing arbitrary bijections or swapping infinitely many people in Impartiality), and this is a problem for everyone sympathetic to both principles, not just utilitarians.
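A minimal illustration of "infinitely many possible outcomes with unbounded but finite value" (my own example, not taken from the comment above): consider a St. Petersburg-style gamble in which outcome $n$ has value $2^n$ and probability $2^{-n}$ for $n = 1, 2, 3, \dots$. Every individual outcome is finite, yet

\[ \mathbb{E}[V] \;=\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty, \]

so the relevant results need no actual infinities (no infinite universe or infinitely long lives), only unboundedness across infinitely many finite outcomes.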

My understanding of history says that usually letting militaries have such power, or initiating violent overthrow via any other means to launch an internal rebellion, leads to bad results. Examples include the French, Russian, and English revolutions. Counterexamples possibly include the American Revolution, though notably I struggle to point to anything concrete that would have been different about the world had America had a peaceful break-off like Canada later did.

Do you know of counter-examples, maybe relating to poor developing nations which after the rebellion became rich developed nations?

2
Alimi
Thanks @D0TheMath for sharing your understanding of the history of revolutions and military coups. You mentioned "revolution", which is categorically different from "military coups". Since revolutions usually had the support of both the middle class and the lower class, including some portions of the elites, they definitely produced systemic changes and improved governance. Remember, most of the revolutions you mentioned succeeded because there was little or no vested interest to thwart their outcome, as we see with the subversive meddling of international syndicates in African contexts. Burkina Faso under Sankara was on its way to development, but he was coldly assassinated because it would have been a bad precedent, and an insult to pro-democrats in the West, for a military regime to offer a better alternative for development. In the history of coups in Africa, 60% of them have been orchestrated by foreign powers (France in the Central African Republic, Chad, Togo, DRC, and Gabon, while the US seems to be discreet and diplomatic in its support for military takeovers; see the recent coup in Niger Republic). The point about this failure is that vested interests would start threatening to sanction such regimes, thus not allowing them to achieve their mission: offering a viable alternative. Conclusion: military takeovers are not intrinsically bad, as evidenced by the support they received in some cases where what is now called a palace coup was orchestrated to remove a democratically elected president with the support and approval of Western governments. What do you say of the ousting of Muhammad Morsi of Egypt? Overall, the last time I checked, I saw no real democracy anywhere in the world. Let's take the United States of America for instance. How many political parties exist there? How many of them determine the federal policies? How many democracies embrace multiparty systems? Majorly, we are having weak coalitions hiding behind a single party system draining d

I think Habryka has mentioned that Lightcone could withstand a defamation suit, so there’s not a high chance of financially ruining him. I am tentatively in agreement otherwise though.

True! But for the record I definitely don't have remotely enough personal wealth to cover such a suit. So if libel suits are permissible then you may only hear about credible accusations from people on teams who are willing to back the financial cost, the number of which in my estimation is currently close to 1.

Added: I don't mean to be more pessimistic than is accurate. I am genuinely uncertain to what extent people will have my back if a lawsuit comes up (Manifold has it at 13%), and my uncertainty range does include "actually quite a lot of people are w... (read more)

This seems false. Dramatic advances in life extension technology have been happening ever since the invention of modern medicine, so it's strange to say the field is so speculative it shouldn't even be considered.

2
Pablo
To be clear, I was only trying to describe what I believe is going on here, without necessarily endorsing the relative neglect of this cause area. And it does seem many EA folk consider radical life extension a “speculative” way of improving global health, whether or not they are justified in this belief. 

I agree with your conclusion but disagree with your reasoning. I think it's perfectly fine, and should be encouraged, to make advances in conceptual clarification which confuse people. Clarifying concepts can often result in people being confused about stuff they weren't previously, and this often indicates progress.

  1. My response would be a worse version of Marius’s response. So just read what he said here for my thoughts on hits-based approaches for research.

  2. I disagree, and wish you’d actually explain your position here instead of being vague & menacing. As I’ve said in my previous comment

I will add to my note on (2): In most news articles in which I see Connor or Conjecture mentioned, I feel glad he talked to the relevant reporter, and think he/Conjecture made that article better. It is quite an achievement in my book to have sane conversations with repor

... (read more)

(cross-posted to LessWrong)

I agree with Conjecture's reply that this reads more like a hit piece than an even-handed evaluation.

I don't think your recommendations follow from your observations, and such strong claims surely don't follow from the actual evidence you provide. I feel like your criticisms can be summarized as the following:

  1. Conjecture was publishing unfinished research directions for a while.

  2. Conjecture does not publicly share details of their current CoEm research direction, and that research direction seems hard.

  3. Conjecture told the gov

... (read more)
8
Omega
Regarding your specific concerns about our recommendations:  1) We address this point in our response to Marius (5th paragraph)   2) As we note in the relevant section: “We think there is a reasonable risk that Connor and Conjecture’s outreach to policymakers and media is alarmist and may decrease the credibility of x-risk.” This kind of relationship-building is unilateralist when it can decrease goodwill amongst policymakers. 3) To be clear, we do not expect Conjecture to have the same level of “organizational responsibility” or “organizational competence” (we aren’t sure what you mean by those phrases and don’t use them ourselves) as OpenAI or Anthropic. Our recommendation was for Conjecture to have a robust corporate governance structure. For example, they could change their corporate charter to implement a "springing governance" structure such that voting equity (but not political equity) shift to an independent board once they cross a certain valuation threshold. As we note in another reply, Conjecture’s infohazard policy has no legal force, and therefore is not as strong as either OpenAI or Anthropic’s corporate governance models. As we’ve noted already, we have concerns about both OpenAI and Anthropic despite having these models in place: Conjecture doesn’t even have those, which makes us more concerned.  [Note: we edited point 3) for clarity on June 13 2023]

My impression is that immigration policy is unusually difficult to affect, given how much of a hot-button issue it is in the US (ironic, given your forum handle). So while the scale may be large, I'm skeptical of the tractability.

On OpenPhil's behavior, yeah, if they're making it much easier for AI labs to hire talent abroad, then they're making a mistake, but the path from all-cause increases in high-skill immigration to AI capabilities increases has enough noise that the effects here may be diffuse enough to ignore. There's also the case that AI safety be... (read more)

It seems altruistically very bad to invest in companies because you expect them to profit if they perform an action with a significant chance of ending the world. I am uncertain why this is on the EA forum.

2
Dawn Drescher
I imagine that at any point in time either big tech or AI safety orgs/funders are cash-constrained. Or maybe that at any point in time we’ll have an estimate which party will be more cash-constrained during the crunch time. When the estimate shows that safety efforts will be more cash-constrained, then it stands to reason that we should mission-hedge by investing (in some smart fashion) in big tech stock. If the estimate shows that big tech will be more cash-constrained (e.g., because the AI safety bottlenecks are elsewhere entirely), then it stands to reason that we should perhaps even divest from big tech stock, even at a loss. But if we’re in a situation where it doesn’t seem sensible to divest, then investing is probably also not so bad at the current margin. I’m leaning towards thinking that investing is not so bad at the current margin, but I was surprised by the magnitude of the effect of divesting according to Paul Christiano’s analysis, so I could easily be wrong about that.
4
Eevee🔹
I believe this section of this post by Zvi addresses your concern about this: The post was published on 1 March 2023, but it seems like the entire tech sector has been more cash constrained recently, what with the industry-wide layoffs. I think this question is worth discussing, but I downvoted your comment for suggesting that my post does not belong on the EA Forum.
2
Erin
I believe this paper gives a clear picture of why it would be advantageous for a charity to invest in the very companies they are trying to stop. In short, there is not really any evidence that investing in a publicly traded stock materially affects the underlying company in any way. However, investing in the stock of a company you are opposed to provides a hedge against that company's success.
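A minimal sketch of the hedging logic described above, with made-up numbers (the 5x/0.5x payoff multipliers and the stake sizes are illustrative assumptions, not figures from the paper):

```python
# Toy mission-hedging illustration: a funder opposed to a company's product still
# holds some of its stock so that more money is available in exactly the scenario
# where the company succeeds and the funder's counter-efforts are most needed.

def resources(initial_cash: float, stake: float, company_succeeds: bool) -> float:
    """Funder's resources under each scenario.

    stake: fraction of initial_cash invested in the company's stock.
    Assumed (hypothetical) payoffs: the stock 5x's if the company succeeds,
    and halves if it does not.
    """
    invested = initial_cash * stake
    multiplier = 5.0 if company_succeeds else 0.5
    return (initial_cash - invested) + invested * multiplier

for stake in (0.0, 0.2):
    print(
        f"stake={stake:.0%}: "
        f"if company fails={resources(100, stake, False):.0f}, "
        f"if company succeeds={resources(100, stake, True):.0f}"
    )
# e.g. with a 20% stake: 90 if the company fails, 180 if it succeeds,
# versus 100 in both worlds with no stake.
```

Under these assumed numbers, the hedged portfolio gives up a little in the "company fails" world in exchange for substantially more resources in the world where the funder's counter-efforts matter most.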
1
Jeroen Willems🔸
Yeah I've asked the same question (why invest in AI companies when we think they're harmful?) twice before but didn't get any good answers.
-2
Ward A
Right? I might be misunderstanding something about investing, but my presumption was that if you invest in a company, you help it do more of what it does. Please do correct me if I'm wrong.

Public sentiment is already mostly against AI when public sentiment has an opinion. Though it's not a major political issue (yet), so people may not be thinking about it. If it turns into a major political issue (there are ways of regulating AI without turning it into a major political issue, and you probably want to do so), then it will probably become 50/50 due to what politics does to everything.

Ah, ok. Why don't you just respond with markets then!

You can argue that the theorems are wrong, or that the explicit assumptions of the theorems don't hold, which many people have done, but like, there are still coherence theorems, and IMO completeness seems quite reasonable to me and the argument here seems very weak (and I would urge the author to create an actual concrete situation that doesn't seem very dumb in which a highly intelligent, powerful, and economically useful system has non-complete preferences).

If you want to see an example of this, I suggest John's post here.

Working on it.

Spoiler (don't read if you want to work on a fun puzzle or test your alignment mettle).

7
Habryka [Deactivated]
Oh, nice, I do remember really liking that post. It's a great example, though I think if you bring in time and trade-in-time back into this model you do actually get things that are more VNM-shaped again. But overall I am like "OK, I think that post actually characterizes how coherence arguments apply to agents without completeness quite well", and am also like "yeah, and the coherence arguments still apply quite strongly, because they aren't as fickle or as narrow as the OP makes them out to be".  But overall, yeah, I think this post would be a bunch stronger if it used the markets example from John's post. I like it quite a bit, and I remember using it as an intuition pump in some situations that I somewhat embarrassingly failed to connect to this argument.
6
Elliott Thornley (EJT)
I cite John in the post!

This effectively reads as “I think EA is good at being a company, so my company is going to be a company”. Nobody gives you $1B for being a company. People generally give you money for doing economically valuable things. What economically valuable thing do you imagine doing?

1
DAOMaximalist
For one let me separate EffectiveCauses (my organization) from the DAO (which is an idea from my organization). The DAO is/will be a separate entity not under my control and  members are supposed to work together to collectively raise the $1 B using various methods which I listed in the above post. 80% of this is meant to give out as grants for EA projects. 20% covers other things including 4% for EffectiveCauses and 8% for admin costs for the DAO. I'm not actually asking anyone to hand over one billion to me!   Regarding the [economic] importance, for the DAO, the importance is giving out grants to impactful projects selected democratically by DAO members under the guidance of experts. For my EffectiveCauses organization, the value is as described in the post which is to think up impactful ideas (like this DAO idea) and source for funds to execute and incubate them. Also to serve as a base for setting up  EA communities in countries in Africa (most of which there's little or no EA activity)

I'm not assuming it's a scam, and it seems unlikely it'd damage the reputation of EA. Seems like a person who got super enthusiastic about a particular governance idea they had, and had a few too many conversations about how to pitch well.

I would recommend, when making a startup, that you have a clear idea of what your startup would actually do, which takes into account your own & your company's strengths & weaknesses & comparative advantage. Many want to make money; those who succeed usually have some understanding of how (even if they later end up radically pivoting to something else).

1
DAOMaximalist
Thank you for the recommendation. I always try to improve and I find this input valuable. I did try my best to explain what the whole thing is about and tried to be as clear as possible but maybe I didn't communicate it clearly enough. Nonetheless, everything I'm doing, both in the DAO and for my EffectiveCauses all center around what I believe are my/our greatest strengths which is community building and managing online communities for which I have many years experience. But I will sure try to make my points clearer in the future. Thank you very much. Would also love to hear any other recommendations you might have.

I know for one that computer system security and consensus mechanisms for crypto rely on proofs and theorems to guide them. It is common, when you want a highly secure computer system, to provably verify its security, and consensus mechanisms rely heavily on mechanism design. Similarly for counter-intelligence: cryptography is invaluable in this area.

I agree with this, except when you tell me I was eliding the question (and, of course, when you tell me I was misattributing blame). I was giving a summary of my position, not an analysis which I think would be deep enough to convince all skeptics.

2
Davidmanheim
You say you agree, but I was asking questions about what you were claiming and who you were blaming.

Mass Gell-Mann amnesia effect because, say, I may look at others talking about my work or work I know closely, and say "wow! That's wrong", but look at others talking about work I don't know closely and say "wow! That implies DOOM!" (like dreadfully wrong corruptions of the orthogonality thesis), and so decide to work on work that seems relevant to that DOOM?

3
Arepo
Yeah, basically that. Even if those same people ultimately find much more convincing (or at least less obviously flawed) arguments, I still worry about the selection effects Nuno mentioned in his thread.

Do you disagree, assuming my writeup provides little information or context to you?

3
Arepo
I don't feel qualified to say. My impression of Anthropic's epistemics is weakly negative (see here), but I haven't read any of their research, and my prior is relatively high AI scepticism. Not because I feel like I understand anything about the field, but because every time I do engage with some small part of the dialogue, it seems totally unconvincing (see same comment), so I have the faint suspicion many of the people worrying about AI safety (sometimes including me) are subject to some mass Gell-Mann amnesia effect.

Basically, there are simple arguments around 'they are an AGI capabilities organization, so obviously they're bad', and more complicated arguments around 'but they say they want to do alignment work', and then even more complicated arguments on those arguments going 'well, actually it doesn't seem like their alignment work is all that good actually, and their capabilities work is pushing capabilities, and still makes it difficult for AGI companies to coordinate to not build AGI, so in fact the simple arguments were correct'. Getting more into depth would require a writeup of my current picture of alignment, which I am writing, but which is difficult to convey via a quick comment.

3
Arepo
I upvoted and did not disagreevote this, for the record. I'll be interested to see your writeup :)

I could list my current theories about how these problems are interrelated, but I fear such a listing would anchor me to the wrong one, and too many claims in a statement produce more discussion around minor sub-claims than major points (an example of a shallow criticism of EA discussion norms).

The decisions which caused the FTX catastrophe, the fact that EA is counterfactually responsible for the three primary AGI labs, Anthropic being entirely run by EAs yet still doing net negative work, and the funding of mostly capabilities-oriented ML work with vague alignment justifications (and potentially similar dynamics in biotech, which are more speculative for me right now), with the creation of GPT and[1] RLHF as particular examples of this.


  1. I recently found out that GPT was not in fact developed for alignment work. I had gotten confused with some

... (read more)
8
Robi Rahman🔸
EAs are counterfactually responsible for DeepMind?

Strong disagree for misattributing blame and eliding the question.

To the extent that "EA is counterfactually responsible for the three primary AGI labs," you would need to claim that the ex-ante expected value of specific decisions was negative, and that those decisions were because of EA, not that it went poorly ex-post. Perhaps you can make those arguments, but you aren't. 

Ditto for "The decisions which caused the FTX catastrophe" - Whose decisions, where does the blame go, and to what extent are they about EA? SBF's decision to misappropriate funds, or fraudulently misrepresent what he did? CEA not knowing about it? OpenPhil not investigating? Goldman Sachs doing a bad job with due diligence?

3
Arepo
Off topic, but can you clarify why you think Anthropic does net negative work?
1
D0TheMath
I could list my current theories about how these problems are interrelated, but I fear such a listing would anchor me to the wrong one, and too many claims in a statement produce more discussion around minor sub-claims than major points (an example of a shallow criticism of EA discussion norms).

EAs should read more deep critiques of EA, especially external ones

  • For instance this blog and this forthcoming book

The blog post and book linked do not seem likely to me to discuss "deep" critiques of EA. In particular, I don't think the problems with the most harmful parts of EA are caused by racism, sexism, or insufficient wokeism.

In general, I don't think many EAs, especially very new EAs with little context or knowledge about the community, are capable of distinguishing "deep" from "shallow" criticisms, and I also expect them to be overly optimistic a... (read more)

3
Chris Leong
Agreed. It takes quite a bit of context to recognise the difference between deep critiques and shallow ones, whilst everyone will see their critique as a deep critique.
7
Robi Rahman🔸
What do you think are the most harmful parts of EA?

Eh, I don’t think this is a priors game. Quintin has lots of information, I have lots of information, so if we were both acting optimally according to differing priors, our opinions likely would have converged.

In general I'm skeptical of explanations of disagreement which reduce things to differing priors. It's just not physically or predictively correct, and it feels nice because now you no longer have an epistemological duty to go and see why relevant people have differing opinions.

3
Lukas_Gloor
That would be a valid reply if I had said it's all about priors. All I said was that I think priors make up a significant implicit source of the disagreement – as suggested by some people thinking 5% risk of doom seems "high" and me thinking/reacting with "you wouldn't be saying that if you had anything close to my priors." Or maybe what I mean is stronger than "priors." "Differences in underlying worldviews" seems like the better description. Specifically, the worldview I identify more with, which I think many EAs don't share, is something like "The Yudkowskian worldview where the world is insane, most institutions are incompetent, Inadequate Equilibria is a big deal, etc." And that probably affects things like whether we anchor way below 50% or above 50% on what the risks should be that the culmination of accelerating technological progress will go well or not. That's misdescribing the scope of my point and drawing inappropriate inferences. Last time I made an object-level argument about AI misalignment risk was  just 3h before your comment. (Not sure it's particularly intelligible, but the point is, I'm trying! :) )  So, evidently, I agree that a lot of the discussion should be held at a deeper level than the one of priors/general worldviews. I'm a fan of Shard theory and some of the considerations behind it have already updated me towards a lower chance of doom than I had before starting to incorporate it more into my thinking. (Which I'm still in the process of doing.)