All of D0TheMath's Comments + Replies

There's much thought in finance about this. Some general books are:

  1. Options, Futures, and Other Derivatives

  2. Principles of Corporate Finance

And more particularly, The Black Swan: The Impact of the Highly Improbable, along with other stuff by Taleb (this is kind of his whole thing).

The same standards I'd apply to anything else: a decent track record of such experiments succeeding, and/or a well-supported argument based on (in this case) sound economics.

So far the track-record is heavily against. Indeed, many of the worst calamities in history took the form of "revolution".

Absent that track record, you need one hell of an argument to explain why your plan is better, which at a minimum likely requires basing it on sound economics (which, if you want particular pointers, mostly means the Chicago school, though sufficiently good complexity economics would also be fine).

3
Friso
23d
I don't feel like you are taking my question head-on. I'm asking you to envision something that could convince you about systems other than capitalism. Just saying successful experiments and well-based arguments feels like you are evading my question a little, especially because it is not clear how such an experiment could succeed (as in, what result are you looking for?) or what kind of arguments would be convincing. What if a more just world would require that we end the state of global inequality, and that simply requires lowering the standard of living in much of the Global North? Or if justice requires that we end the extraction of much of the planet's resources? I assume an experiment that tells you some people might need to become much worse off for justice to occur will not convince you.

I also don't understand why that would mostly mean Chicago school. To change your mind on capitalism, you need a specific school to change their mind first? And not just any school, but one that was 1) explicitly set up to defend laissez-faire capitalism, 2) largely relies on simplifying assumptions we simply know to be false (rational choice, perfect markets, etc.), and 3) is incredibly controversial even for many mainstream economists (e.g. even Paul Krugman called it "the product of a Dark Age of macroeconomics in which hard-won knowledge has been forgotten") and for environmentalists especially?

It makes me sad that I automatically double timing estimates from EA orgs, treat that as the absolute minimum time something could take, and am often still disappointed.

I definitely strongly agree with this. I do think it's slowly, ever so slowly, getting better though.

More broadly I think Anthropic, like many, hasn’t come to final views on these topics and is working on developing views, probably with more information and talent than most alternatives by virtue of being a well-funded company.

It would be remiss to not also mention the large conflict of interest analysts at Anthropic have when developing these views.

I do dislike this feature of EA, but I don't think the solution is to transition away from a one-grant-at-a-time model. Probably better would be to have exit coaches who help EAs find a new career outside EA, if they built up a bunch of skills because funding sources, or other generally EA-endorsed sources, told them they would be funded if they used those skills for the benefit of the universe.

What talents do you think aren't applicable outside the EAsphere?

(Edit: I do also note that I believe 80k should be taken a lot less seriously than they present themselves, and than most EAs take them. Their incorrect claims of EA being talent-constrained are one of many reasons I distrust them.)

I'm not sure what the solution is - more experimentation seems generally like a good idea, but EA funders seem quite conservative in the way they operate, at least once they've locked in a modus operandi.

For what it's worth, my instinct is to try a model with more 'grantmakers' who take a more active, product-managery/ownery role, where they make fewer grants, but the grants are more like contracts of employment, such that the grantmakers take some responsibility for the ultimate output (and can terminate a contract like a normal employer if the '... (read more)

Recommendation: A collection of paradoxes dealing with Utilitarianism. This seems to me to be what you wrote, and would have had me come to the post with more of a “ooo! Fun philosophy discussion” rather than “well, that’s a very strong claim… oh look at that, all so-called inconsistencies and irrationalities either deal with weird infinite ethics stuff or are things I can’t understand. Time to be annoyed about how the headline is poorly argued for.” The latter experience is not useful or fun; the former is nice, depending on the day & company.

-1
MichaelStJules
7mo
Thanks for the feedback! I think your general point can still stand, but I do want to point out that the results here don't depend on actual infinities (infinite universe, infinitely long lives, infinite value), which is the domain of infinite ethics. We only need infinitely many possible outcomes and unbounded but finite value. My impression is that this is a less exotic/controversial domain (although I think an infinite universe shouldn't be controversial, and I'd guess our universe is infinite with probability >80%). Furthermore, impossibility results in infinite ethics are problematic for everyone with impartial intuitions, but the results here seem more problematic for utilitarianism in particular. You can keep Impartiality and Pareto and/or Separability in deterministic + unbounded but finite cases here, but when extending to uncertain cases, you wouldn't end up with utilitarianism, or you'd undermine utilitarianism in doing so. You can't extend both Impartiality and Pareto to infinite cases (allowing arbitrary bijections or swapping infinitely many people in Impartiality), and this is a problem for everyone sympathetic to both principles, not just utilitarians.

My understanding of history says that letting militaries have such power, or initiating violent overthrow via any other means to launch an internal rebellion, usually leads to bad results. Examples include the French, Russian, and English revolutions. Counterexamples possibly include the American Revolution, though notably I struggle to point to anything concrete that would have been different about the world had America had a peaceful break-off like the one Canada later had.

Do you know of counter-examples, maybe relating to poor developing nations which after the rebellion became rich developed nations?

2
Alimi
7mo
Thanks @DOTheMath for sharing your understanding of the history of revolutions and military coups. You mentioned "revolution", which is categorically different from "military coups". Since revolutions usually had the support of both the middle class and the lower class, including some portions of the elites, they definitely produced systemic changes and improved governance. Remember, most of the revolutions you mentioned succeeded because there were no or few vested interests to thwart their outcomes, as we see with the subversive meddling of international syndicates in African contexts. Burkina Faso under Sankara was on its way to development, but he was coldly assassinated because it would have been a bad precedent and an insult for pro-democrats in the West to have a military regime offer a better alternative for development. In the history of coups in Africa, 60% of them have been orchestrated by foreign powers (France in the Central African Republic, Chad, Togo, DRC, and Gabon, while the US seems to be discreet and diplomatic in its support for military takeovers; see the recent coup in Niger Republic). The point is that this failure is due to the fact that vested interests would start threatening to sanction such regimes, thus not allowing them to achieve their mission: offer a viable alternative. Conclusion: military takeovers are not intrinsically bad, as evidenced by the support they received in some cases where what is now called a palace coup is orchestrated to remove a democratically elected president with the support and approval of Western governments. What do you say of the ousting of Muhammad Morsi of Egypt? Overall, the last time I checked, I saw no real democracy anywhere in the world. Let's take the United States of America for instance. How many political parties exist there? How many of them determine the federal policies? How many democracies embrace a multiparty system? Majorly, we are having weak coalitions hiding behind single party system draining d

I think Habryka has mentioned that Lightcone could withstand a defamation suit, so there’s not a high chance of financially ruining him. I am tentatively in agreement otherwise though.

True! But for the record I definitely don't have remotely enough personal wealth to cover such a suit. So if libel suits are permissible then you may only hear about credible accusations from people on teams who are willing to back the financial cost, the number of which in my estimation is currently close to 1.

Added: I don't mean to be more pessimistic than is accurate. I am genuinely uncertain to what extent people will have my back if a lawsuit comes up (Manifold has it at 13%), and my uncertainty range does include "actually quite a lot of people are w... (read more)

This seems false. Dramatic increases in life extension technology have been happening ever since the invention of modern medicine, so it's strange to say the field is speculative enough not to even consider.

2
Pablo
9mo
To be clear, I was only trying to describe what I believe is going on here, without necessarily endorsing the relative neglect of this cause area. And it does seem many EA folk consider radical life extension a “speculative” way of improving global health, whether or not they are justified in this belief. 

I agree with your conclusion but disagree about your reasoning. I think it’s perfectly fine, and should be encouraged, to make advances in conceptual clarification which confuse people. Clarifying concepts can often result in people being confused about stuff they weren’t previously, and this often indicates progress.

  1. My response would be a worse version of Marius’s response. So just read what he said here for my thoughts on hits-based approaches for research.

  2. I disagree, and wish you’d actually explain your position here instead of being vague & menacing. As I’ve said in my previous comment

I will add to my note on (2): In most news articles in which I see Connor or Conjecture mentioned, I feel glad he talked to the relevant reporter, and think he/Conjecture made that article better. It is quite an achievement in my book to have sane conversations with repor

... (read more)

(cross-posted to LessWrong)

I agree with Conjecture's reply that this reads more like a hitpiece than an even-handed evaluation.

I don't think your recommendations follow from your observations, and such strong claims surely don't follow from the actual evidence you provide. I feel like your criticisms can be summarized as the following:

  1. Conjecture was publishing unfinished research directions for a while.

  2. Conjecture does not publicly share details of their current CoEm research direction, and that research direction seems hard.

  3. Conjecture told the gov

... (read more)
8
Omega
10mo
Regarding your specific concerns about our recommendations:  1) We address this point in our response to Marius (5th paragraph)   2) As we note in the relevant section: “We think there is a reasonable risk that Connor and Conjecture’s outreach to policymakers and media is alarmist and may decrease the credibility of x-risk.” This kind of relationship-building is unilateralist when it can decrease goodwill amongst policymakers. 3) To be clear, we do not expect Conjecture to have the same level of “organizational responsibility” or “organizational competence” (we aren’t sure what you mean by those phrases and don’t use them ourselves) as OpenAI or Anthropic. Our recommendation was for Conjecture to have a robust corporate governance structure. For example, they could change their corporate charter to implement a "springing governance" structure such that voting equity (but not political equity) shift to an independent board once they cross a certain valuation threshold. As we note in another reply, Conjecture’s infohazard policy has no legal force, and therefore is not as strong as either OpenAI or Anthropic’s corporate governance models. As we’ve noted already, we have concerns about both OpenAI and Anthropic despite having these models in place: Conjecture doesn’t even have those, which makes us more concerned.  [Note: we edited point 3) for clarity on June 13 2023]

My impression is that immigration policy is unusually difficult to effect given how much of a hot-button issue it is in the US (ironic, given your forum handle). So while the scale may be large, I’m skeptical of the tractability.

On OpenPhil’s behavior, yeah, if they’re making it much easier for AI labs to hire talent abroad, then they’re making a mistake, but the path from all-cause increases in high-skill immigration to AI capabilities increases has enough noise that the effects here may be diffuse enough to ignore. There’s also the case that AI safety be... (read more)

It seems altruistically very bad to invest in companies because you expect them to profit if they perform an action with a significant chance of ending the world. I am uncertain why this is on the EA forum.

2
Dawn Drescher
1y
I imagine that at any point in time either big tech or AI safety orgs/funders are cash-constrained. Or maybe that at any point in time we’ll have an estimate which party will be more cash-constrained during the crunch time. When the estimate shows that safety efforts will be more cash-constrained, then it stands to reason that we should mission-hedge by investing (in some smart fashion) in big tech stock. If the estimate shows that big tech will be more cash-constrained (e.g., because the AI safety bottlenecks are elsewhere entirely), then it stands to reason that we should perhaps even divest from big tech stock, even at a loss. But if we’re in a situation where it doesn’t seem sensible to divest, then investing is probably also not so bad at the current margin. I’m leaning towards thinking that investing is not so bad at the current margin, but I was surprised by the magnitude of the effect of divesting according to Paul Christiano’s analysis, so I could easily be wrong about that.
4
BrownHairedEevee
1y
I believe this section of this post by Zvi addresses your concern about this. The post was published on 1 March 2023, but it seems like the entire tech sector has been more cash-constrained recently, what with the industry-wide layoffs. I think this question is worth discussing, but I downvoted your comment for suggesting that my post does not belong on the EA Forum.
2
Erin
1y
I believe this paper gives a clear picture of why it would be advantageous for a charity to invest in the very companies it is trying to stop. In short, there is not really any evidence that investing in a publicly traded stock materially affects the underlying company in any way. However, investing in the stock of a company you are opposed to provides a hedge against that company's success.
1
Jeroen Willems
1y
Yeah I've asked the same question (why invest in AI companies when we think they're harmful?) twice before but didn't get any good answers.
-2
Ward A
1y
Right? I might be misunderstanding something about investing, but my presumption was that if you invest in a company, you help it do more of what it does. Please do correct me if I'm wrong.

Public sentiment is already mostly against AI when public sentiment has an opinion. It's not a major political issue (yet), though, so people may not be thinking about it. If it turns into a major political issue (there are ways of regulating AI without turning it into a major political issue, and you probably want to do so), then it will probably become 50/50 due to what politics does to everything.

Ah, ok. Why don't you just respond with markets then!

You can argue that the theorems are wrong, or that the explicit assumptions of the theorems don't hold, which many people have done, but like, there are still coherence theorems, and IMO completeness seems quite reasonable to me and the argument here seems very weak (and I would urge the author to create an actual concrete situation that doesn't seem very dumb in which a highly intelligent, powerful, and economically useful system has non-complete preferences).

If you want to see an example of this, I suggest John's post here.

Working on it.

Spoiler (don't read if you want to work on a fun puzzle or test your alignment mettle).

6
Habryka
1y
Oh, nice, I do remember really liking that post. It's a great example, though I think if you bring time and trade-in-time back into this model you do actually get things that are more VNM-shaped again. But overall I am like "OK, I think that post actually characterizes how coherence arguments apply to agents without completeness quite well", and am also like "yeah, and the coherence arguments still apply quite strongly, because they aren't as fickle or as narrow as the OP makes them out to be". But overall, yeah, I think this post would be a bunch stronger if it used the markets example from John's post. I like it quite a bit, and I remember using it as an intuition pump in some situations that I somewhat embarrassingly failed to connect to this argument.
6
EJT
1y
I cite John in the post!

This effectively reads as “I think EA is good at being a company, so my company is going to be a company”. Nobody gives you $1B for being a company. People generally give you money for doing economically valuable things. What economically valuable thing do you imagine doing?

1
DAOMaximalist
1y
For one, let me separate EffectiveCauses (my organization) from the DAO (which is an idea from my organization). The DAO is/will be a separate entity not under my control, and members are supposed to work together to collectively raise the $1B using various methods which I listed in the above post. 80% of this is meant to be given out as grants for EA projects. 20% covers other things, including 4% for EffectiveCauses and 8% for admin costs for the DAO. I'm not actually asking anyone to hand over one billion to me! Regarding the [economic] importance: for the DAO, the importance is giving out grants to impactful projects selected democratically by DAO members under the guidance of experts. For my EffectiveCauses organization, the value is as described in the post, which is to think up impactful ideas (like this DAO idea) and source funds to execute and incubate them, and also to serve as a base for setting up EA communities in countries in Africa (in most of which there's little or no EA activity).

I’m not assuming it’s a scam, and it seems unlikely it’d damage the reputation of EA. Seems like a person who got super enthusiastic about a particular governance idea they had, and had a few too many conversations about how to pitch well.

I would recommend, when making a startup, that you have a clear idea of what your startup would actually do, one which takes into account your own & your company’s strengths & weaknesses & comparative advantage. Many want to make money; those who succeed usually have some understanding of how (even if they later end up radically pivoting to something else).

1
DAOMaximalist
1y
Thank you for the recommendation. I always try to improve and I find this input valuable. I did try my best to explain what the whole thing is about and tried to be as clear as possible, but maybe I didn't communicate it clearly enough. Nonetheless, everything I'm doing, both in the DAO and for my EffectiveCauses, centers around what I believe are my/our greatest strengths, which are community building and managing online communities, for which I have many years of experience. But I will sure try to make my points clearer in the future. Thank you very much. Would also love to hear any other recommendations you might have.

I know for one that computer system security and consensus mechanisms for crypto rely on proofs and theorems to guide them. It is common, when you want a highly secure computer system, to provably verify its security, and consensus mechanisms rely heavily on mechanism design. Similarly for counter-intelligence: cryptography is invaluable in this area.

I agree with this, except when you tell me I was eliding the question (and, of course, when you tell me I was misattributing blame). I was giving a summary of my position, not an analysis which I think would be deep enough to convince all skeptics.

2
Davidmanheim
1y
You say you agree, but I was asking questions about what you were claiming and who you were blaming.

Mass Gell-Mann amnesia effect because, say, I may look at others talking about my work or work I know closely, and say "wow! That's wrong", but look at others talking about work I don't know closely and say "wow! That implies DOOM!" (like dreadfully wrong corruptions of the orthogonality thesis), and so decide to work on work that seems relevant to that DOOM?

3
Arepo
1y
Yeah, basically that. Even if those same people ultimately find much more convincing (or at least less obviously flawed) arguments, I still worry about the selection effects Nuno mentioned in his thread.

Do you disagree, assuming my writeup provides little information or context to you?

3
Arepo
1y
I don't feel qualified to say. My impression of Anthropic's epistemics is weakly negative (see here), but I haven't read any of their research, and my prior is relatively high AI scepticism. Not because I feel like I understand anything about the field, but because every time I do engage with some small part of the dialogue, it seems totally unconvincing (see same comment), so I have the faint suspicion many of the people worrying about AI safety (sometimes including me) are subject to some mass-Gell-Mann amnesia effect.

Basically, there are simple arguments around 'they are an AGI capabilities organization, so obviously they're bad', and more complicated arguments around 'but they say they want to do alignment work', and then even more complicated arguments on those arguments going 'well, actually it doesn't seem like their alignment work is all that good actually, and their capabilities work is pushing capabilities, and still makes it difficult for AGI companies to coordinate to not build AGI, so in fact the simple arguments were correct'. Getting more into depth would require a writeup of my current picture of alignment, which I am writing, but which is difficult to convey via a quick comment.

3
Arepo
1y
I upvoted and did not disagreevote this, for the record. I'll be interested to see your writeup :)

I could list my current theories about how these problems are interrelated, but I fear such a listing would anchor me to the wrong one, and too many claims in a statement produces more discussion around minor sub-claims than major points (an example of a shallow criticism of EA discussion norms).

The decisions which caused the FTX catastrophe, the fact that EA is counterfactually responsible for the three primary AGI labs, Anthropic being entirely run by EAs yet still doing net negative work, and the funding of mostly capabilities oriented ML work with vague alignment justifications (and potentially similar dynamics in biotech which are more speculative for me right now), with the creation of GPT and[1] RLHF as particular examples of this.


  1. I recently found out that GPT was not in fact developed for alignment work. I had gotten confused with some

... (read more)
8
Robi Rahman
1y
EAs are counterfactually responsible for DeepMind?

Strong disagree for misattributing blame and eliding the question.

To the extent that "EA is counterfactually responsible for the three primary AGI labs," you would need to claim that the ex-ante expected value of specific decisions was negative, and that those decisions were because of EA, not that it went poorly ex-post. Perhaps you can make those arguments, but you aren't. 

Ditto for "The decisions which caused the FTX catastrophe" - Whose decisions, where does the blame go, and to what extent are they about EA? SBF's decision to misappropriate funds, or fraudulently misrepresent what he did? CEA not knowing about it? OpenPhil not investigating? Goldman Sachs doing a bad job with due diligence?

3
Arepo
1y
Off topic, but can you clarify why you think Anthropic does net negative work?
1
D0TheMath
1y
I could list my current theories about how these problems are interrelated, but I fear such a listing would anchor me to the wrong one, and too many claims in a statement produces more discussion around minor sub-claims than major points (an example of a shallow criticism of EA discussion norms).

EAs should read more deep critiques of EA, especially external ones

  • For instance this blog and this forthcoming book

The blog post and book linked do not seem likely to me to discuss "deep" critiques of EA. In particular, I don't think the problem with the most harmful parts of EA are caused by racism or sexism or insufficient wokeism.

In general, I don't think many EAs, especially very new EAs with little context or knowledge about the community, are capable of distinguishing "deep" from "shallow" criticisms; I also expect them to be overly optimistic a... (read more)

3
Chris Leong
1y
Agreed. It takes quite a bit of context to recognise the difference between deep critiques and shallow ones, whilst everyone will see their critique as a deep critique.
7
Robi Rahman
1y
What do you think are the most harmful parts of EA?

Eh, I don’t think this is a priors game. Quintin has lots of information, I have lots of information, so if we were both acting optimally according to differing priors, our opinions likely would have converged.

In general I’m skeptical of arguments of disagreement which reduce things to differing priors. It’s just not physically or predictively correct, and it feels nice because now you no longer have an epistemological duty to go and see why relevant people have differing opinions.

3
Lukas_Gloor
1y
That would be a valid reply if I had said it's all about priors. All I said was that I think priors make up a significant implicit source of the disagreement – as suggested by some people thinking 5% risk of doom seems "high" and me thinking/reacting with "you wouldn't be saying that if you had anything close to my priors." Or maybe what I mean is stronger than "priors." "Differences in underlying worldviews" seems like the better description. Specifically, the worldview I identify more with, which I think many EAs don't share, is something like "The Yudkowskian worldview where the world is insane, most institutions are incompetent, Inadequate Equilibria is a big deal, etc." And that probably affects things like whether we anchor way below 50% or above 50% on what the risks should be that the culmination of accelerating technological progress will go well or not. That's misdescribing the scope of my point and drawing inappropriate inferences. Last time I made an object-level argument about AI misalignment risk was  just 3h before your comment. (Not sure it's particularly intelligible, but the point is, I'm trying! :) )  So, evidently, I agree that a lot of the discussion should be held at a deeper level than the one of priors/general worldviews. I'm a fan of Shard theory and some of the considerations behind it have already updated me towards a lower chance of doom than I had before starting to incorporate it more into my thinking. (Which I'm still in the process of doing.)

Yeah, he’s working on it, but it’s not his no. 1 priority. He developed shard theory.

Totally agree with everything in here!

I also like the framing: Status-focused thinking was likely very highly selected for in the ancestral environment, and so when your brain comes up with status-focused justifications for various plans, you should be pretty skeptical about whether it is actually focusing on status as an instrumental goal toward your intrinsic goals, or as an intrinsic goal in itself. Similar to how you would be skeptical of your brain for coming up with justifications in favor of why it’s actually a really good idea to hire that really sexy girl/guy interviewing for a position who, analyzed objectively, is a doofus.

Scared as in, like, 10-15% in the next 50 years assuming we don't all die.

I think the current arms-length community interaction is good, but mostly because I'm scared EAs are going to do something crazy which destroys the movement, and that Lesswrongers will then be necessary to start another spinoff movement which fills the altruistic gap. If Lesswrong is too close to EA, then EA may take down Lesswrong with it.

Lesswrongers seem far less liable to play with metaphorical fire than EAs, given less funding, better epistemics, less overall agency, and fewer participants.

1
D0TheMath
1y
Scared as in, like, 10-15% in the next 50 years assuming we don't all die.

I disagree-voted.

I think pure open dialogue is often good for communities. You will find evidence for this if you look at most any social movement, the FTX fiasco, and immoral mazes.

Most long pieces of independent research that I see are made by open-phil, and I see far more EAs deferring to open-phil's opinion on a variety of subjects than Lesswrongers. Examples that come to mind from you would be helpful.

It was originally EAs who used such explicit expected value calculations during Givewell periods, and I don't think I've ever seen an EV calculation don... (read more)

I strong downvoted this because I don't like online discussions that devolve into labeling things as cringe or based. I usually replace such words with low/high status, and EA already has enough of that noise.

3
ChanaMessinger
1y
I think I like this norm where people say what they voted and why (not always, but on the margin). Not 100% it would be better for more people to do, but I like what you did here.

I like this, and think it's healthy. I recommend talking to Quintin Pope for a smart person who has thought a lot about alignment, and came to the informed, inside-view conclusion that we have a 5% chance of doom (or just reading his posts or comments). He has updated me downwards on doom a lot.

Hopefully it gets you in a position where you're able to update more on evidence that I think is evidence, by getting you into a state where you have a better picture of what the best arguments against doom would be.

2
Arepo
1y
What outcome does he specifically predict 5% probability of?

Is 5% low? 5% still strikes me as a "preventing this outcome should plausibly be civilization's #1 priority" level of risk.

3
NunoSempere
1y
Thanks!

I find myself disliking this comment, and I think it’s mostly because it sounds like you 1) agree with many of the blunders Rob points out, yet 2) don’t seem to have learned anything from your mistake here? I don’t think many do or should blame you, and I’m personally concerned about repeated similar blunders on your part costing EA a lot of outside reputation and internal trust.

Like, do you think that the issue was that you were responding in heat, and if so, will you make a future policy of not responding in heat in future similar situations?

I feel li... (read more)

7
RobBensinger
1y
FWIW, I don't really want Shakeel to rush into making public promises about his future behavior right now, or big public statements about long-term changes to his policies and heuristics, unless he finds that useful for some reason. I appreciated hearing his thoughts, and would rather leave him space to chew on things and figure out what makes sense for himself. If he or CEA make the wrong updates by my lights, then I expect that to be visible in future CEA/Shakeel actions, and I can just wait and criticize those when they happen.

It would not surprise me if most HR departments are set up as the result of lots of political pressures from various special interests within orgs, and that they are mostly useless at their “support” role.

With more confidence, I’d guess a smart person could think of a far better way to do support that looks nothing like an HR department.

I think MATS would be far better served by ignoring the HR frame, and just trying to rederive all the properties of what an org which does support well would look like. The above post looks like a good start, but it’d be a ... (read more)

2
Ryan Kidd
1y
I'm not advocating a stock HR department with my comment. I used "HR" as a shorthand for "community health agent who is focused on support over evaluation." This is why I didn't refer to HR departments in my post. Corporate HR seems flawed in obvious ways, though I think it's probably usually better than nothing, at least for tail risks.

Seems like that is just a bad argument, and can be solved by saying “well that’s obviously wrong for obvious, commonsense reasons”, and if they really want to, they can make a spreadsheet, fill it in with the selection pressures they think they’re causing, and see for themselves that indeed it’s wrong.

The argument I’m making is that most of the examples you gave I thought “that’s a dumb argument”. And if people are consistently making transparently dumb selection arguments, this seems different from people making subtly dumb selection arguments, like econo... (read more)

I don’t buy any of the arguments you said at the top of the post, except for toxoplasma of rage (with lowish probability) and evaporative cooling. But both of these (to me) seem like a description of an aspect of a social dynamic, not the aspect. And currently not very decision relevant.

Like, obviously they’re false. But are they useful? I think so!

I’d be interested in different, more interesting or decision relevant or less obvious mistakes you often see.

1
Nathan_Barnard
1y
I suppose I'm thinking of the example I gave, where someone I know who was doing selections for an important EA program didn't include questions about altruism because they thought that adverse selection effects were sufficiently bad.

I feel like you may be preaching to the choir here, but agree with the sentiment (modulo thinking people should do more of whatever is the best thing on the margin).

Never mind, I see it's a crosspost.

Overall, I think the Progress studies community seems decently aligned with what EAs care about, and could become more so in the coming years. The event had decent epistemics and was less intimidating than an EA conference. I think many people who feel that EA is too intense, cares too much about longtermism, or uses too much jargon could find progress studies a suitable alternative. If the movement known as EA dissolved (God forbid), I think progress studies could absorb many of the folks.

I'm curious about how you think this will develop. It seems li... (read more)

8
Nick Corvino
1y
You know that's what I thought as well, but I've found the community to be more open to caution than I initially thought. Derek Thompson in particular (the main organizer for the event) harped on safety quite a bit. And if more EAs got involved (assuming they don't get amnesia) I assume they can carry over some of these concerns and shift the culture. 

Hm. I think I mostly don’t think people are good at doing that kind of reasoning. Generally when I see it in the wild, it seems very naive.

I’d like to know if you, factoring optics into your EV calcs, see any optics mistakes EA is currently making which haven’t already blown up, and which (say) Rob Bensinger probably can’t see, given he’s not directly factoring optics into his EV calcs.

I think optics concerns are corrosive in the same way that PR concerns are. I quite like Rob Bensinger's perspective on this, as well as Anna's "PR" is corrosive, reputation is not.

I'd like to know what you think of these strategies. Notably, I think they defend against SBF, but not against Wytham Abbey type stuff, and conditional on Wytham Abbey being an object-level smart purchase, I think that's a good thing.

6
freedomandutility
1y
I like both perspectives you linked, but I think Rob is presenting a false binary between being virtuous / following deontological rules and optimising for optics. I think optics should be factored into EV calculations, but we should definitely not be optimising for optics. I think my ideal approach to EA's reputation would be: encourage EAs to follow the law, to reject physical violence, to be virtuous and rule-following, and then to also factor in optics as one factor in EV calculations for decision making.

I wouldn’t advocate for engineering species to be sapient (in the sense of having valenced experiences), but for those that already are, it seems sad they don’t have higher ceilings for their mental capabilities. Like having many people condemned to never develop past toddlerhood.

edit: also, this is a long-term goal. Not something I think makes sense to make happen now.

I wish people would stop optimizing their titles for what they think would be engaging to click on. I usually downvote such posts once I realize what was done.

I ended up upvoting this one because I think it makes an important point.

I interpreted “ eliminate natural ecosystems” as more like eliminating global poverty in the human analogy. Seems bad to do a mass killing of all animals, and better to just make their lives very good, and give them the ability to mentally develop past mental ages of 3-7.

3
Will Bradshaw
1y
Well, that sentence turned sharply midway through. I'm not sure about the last part. If I wanted to create lots more intelligent beings, genetically engineering a bunch of different species to be sapient seems like a rather labour-intensive route. I agree that a lot turns on your interpretation of the word "eliminate" in the original comment.

If done immediately, this seems like it’d severely curtail humanity’s potential. But at some point in the future, this seems like a good idea.

You should make Manifold markets predicting what you’ll think of these questions in a year or 5 years.
