This seems false. Dramatic increases in life extension technology have been happening ever since the invention of modern medicine, so it's strange to say the field is speculative enough not to even consider.
I agree with your conclusion but disagree with your reasoning. I think it's perfectly fine, and should be encouraged, to make advances in conceptual clarification that confuse people. Clarifying concepts can often leave people confused about things they weren't confused about previously, and this often indicates progress.
My response would be a worse version of Marius’s response. So just read what he said here for my thoughts on hits-based approaches for research.
I disagree, and wish you’d actually explain your position here instead of being vague & menacing. As I’ve said in my previous comment
...I will add to my note on (2): In most news articles in which I see Connor or Conjecture mentioned, I feel glad he talked to the relevant reporter, and think he/Conjecture made that article better. It is quite an achievement in my book to have sane conversations with reporters...
(cross-posted to LessWrong)
I agree with Conjecture's reply that this reads more like a hitpiece than an even-handed evaluation.
I don't think your recommendations follow from your observations, and such strong claims surely don't follow from the actual evidence you provide. I feel like your criticisms can be summarized as the following:
Conjecture was publishing unfinished research directions for a while.
Conjecture does not publicly share details of their current CoEm research direction, and that research direction seems hard.
Conjecture told the gov
My impression is that immigration policy is unusually difficult to influence given how much of a hot-button issue it is in the US (ironic, given your forum handle). So while the scale may be large, I'm skeptical of the tractability.
On OpenPhil's behavior: yeah, if they're making it much easier for AI labs to hire talent abroad, then they're making a mistake, but the path from all-cause increases in high-skill immigration to AI capabilities increases has enough noise that the effects here may be diffuse enough to ignore. There's also the case that AI safety be...
It seems altruistically very bad to invest in companies because you expect them to profit if they perform an action with a significant chance of ending the world. I am uncertain why this is on the EA forum.
Public sentiment is already mostly against AI when public sentiment has an opinion. Though it's not a major political issue (yet), so people may not be thinking about it. If it turns into a major political issue (there are ways of regulating AI without turning it into a major political issue, and you probably want to do so), then it will probably become 50/50 due to what politics does to everything.
You can argue that the theorems are wrong, or that the explicit assumptions of the theorems don't hold, which many people have done, but, like, there are still coherence theorems, and IMO completeness seems quite reasonable; the argument here seems very weak (and I would urge the author to describe an actual concrete situation that doesn't seem very dumb in which a highly intelligent, powerful, and economically useful system has incomplete preferences).
If you want to see an example of this, I suggest John's post here.
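For concreteness (my gloss, not anyone else's wording): by "completeness" I mean the standard axiom that for any two options the agent weakly prefers at least one to the other, i.e.

$$\forall A, B:\quad A \succeq B \;\;\text{or}\;\; B \succeq A,$$

and the claim under dispute is that a capable agent could lack this property.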
This effectively reads as “I think EA is good at being a company, so my company is going to be a company”. Nobody gives you $1B for being a company. People generally give you money for doing economically valuable things. What economically valuable thing do you imagine doing?
I'm not assuming it's a scam, and it seems unlikely it'd damage the reputation of EA. It seems like a person who got super enthusiastic about a particular governance idea they had, and had a few too many conversations about how to pitch it well.
I would recommend, when making a startup, that you have a clear idea of what your startup would actually do, one which takes into account your own & your company's strengths, weaknesses, & comparative advantage. Many want to make money; those who succeed usually have some understanding of how (even if they later end up radically pivoting to something else).
I know, for one, that computer system security and consensus mechanisms for crypto rely on proofs and theorems to guide them. It is common, when you want a highly secure computer system, to provably verify its security, and consensus mechanisms lean heavily on mechanism design. Similarly for counter-intelligence: cryptography is invaluable in this area.
I agree with this, except when you tell me I was eliding the question (and, of course, when you tell me I was misattributing blame). I was giving a summary of my position, not an analysis which I think would be deep enough to convince all skeptics.
Mass Gell-Mann amnesia effect because, say, I may look at others talking about my work or work I know closely, and say "wow! That's wrong", but look at others talking about work I don't know closely and say "wow! That implies DOOM!" (like dreadfully wrong corruptions of the orthogonality thesis), and so decide to work on work that seems relevant to that DOOM?
Basically, there are simple arguments around 'they are an AGI capabilities organization, so obviously they're bad', and more complicated arguments around 'but they say they want to do alignment work', and then even more complicated arguments on those arguments going 'well, actually it doesn't seem like their alignment work is all that good actually, and their capabilities work is pushing capabilities, and still makes it difficult for AGI companies to coordinate to not build AGI, so in fact the simple arguments were correct'. Getting more into depth would require a writeup of my current picture of alignment, which I am writing, but which is difficult to convey via a quick comment.
I could list my current theories about how these problems are interrelated, but I fear such a listing would anchor me to the wrong one, and too many claims in a statement produce more discussion around minor sub-claims than around major points (an example of a shallow criticism of EA discussion norms).
The decisions which caused the FTX catastrophe, the fact that EA is counterfactually responsible for the three primary AGI labs, Anthropic being entirely run by EAs yet still doing net-negative work, and the funding of mostly capabilities-oriented ML work with vague alignment justifications (and potentially similar dynamics in biotech, which are more speculative for me right now), with the creation of GPT and[1] RLHF as particular examples of this.
I recently found out that GPT was not in fact developed for alignment work. I had gotten confused with some
Strong disagree for misattributing blame and eliding the question.
For the claim that "EA is counterfactually responsible for the three primary AGI labs" to support your point, you would need to show that the ex-ante expected value of the specific decisions was negative, and that those decisions were made because of EA, not just that things went poorly ex-post. Perhaps you can make those arguments, but you aren't.
Ditto for "The decisions which caused the FTX catastrophe" - Whose decisions, where does the blame go, and to what extent are they about EA? SBF's decision to misappropriate funds, or fraudulently misrepresent what he did? CEA not knowing about it? OpenPhil not investigating? Goldman Sachs doing a bad job with due diligence?
EAs should read more deep critiques of EA, especially external ones
- For instance this blog and this forthcoming book
The blog post and book linked do not seem likely to me to contain "deep" critiques of EA. In particular, I don't think the problems with the most harmful parts of EA are caused by racism or sexism or insufficient wokeism.
In general, I don't think many EAs, especially very new EAs with little context or knowledge about the community, are capable of distinguishing "deep" from "shallow" criticisms; I also expect them to be overly optimistic a...
Eh, I don’t think this is a priors game. Quintin has lots of information, I have lots of information, so if we were both acting optimally according to differing priors, our opinions likely would have converged.
In general I’m skeptical of arguments of disagreement which reduce things to differing priors. It’s just not physically or predictively correct, and it feels nice because now you no longer have an epistemological duty to go and see why relevant people have differing opinions.
Totally agree with everything in here!
I also like the framing: status-focused thinking was likely very highly selected for in the ancestral environment, so when your brain comes up with status-focused justifications for various plans, you should be pretty skeptical about whether it is actually pursuing status as an instrumental goal toward your intrinsic goals, or as an intrinsic goal in itself. Similar to how you would be skeptical of your brain for coming up with justifications for why it's actually a really good idea to hire that really sexy girl/guy interviewing for a position who, analyzed objectively, is a doofus.
I think the current arms-length community interaction is good, but mostly because I'm scared EAs are going to do something crazy which destroys the movement, and that Lesswrongers will then be necessary to start another spinoff movement which fills the altruistic gap. If Lesswrong is too close to EA, then EA may take down Lesswrong with it.
Lesswrongers seem far less liable to play with metaphorical fire than EAs, given less funding, better epistemics, less overall agency, and fewer participants.
I disagree-voted.
I think pure open dialogue is often good for communities. You will find evidence for this if you look at almost any social movement, the FTX fiasco, and immoral mazes.
Most long pieces of independent research that I see are produced by OpenPhil, and I see far more EAs deferring to OpenPhil's opinion on a variety of subjects than Lesswrongers. Examples that come to mind from you would be helpful.
It was originally EAs who used such explicit expected value calculations during the GiveWell period, and I don't think I've ever seen an EV calculation don...
I strong downvoted this because I don't like online discussions that devolve into labeling things as cringe or based. I usually replace such words with low/high status, and EA already has enough of that noise.
I like this, and think it's healthy. I recommend talking to Quintin Pope, a smart person who has thought a lot about alignment and came to the informed, inside-view conclusion that we have a 5% chance of doom (or just reading his posts or comments). He has updated me downwards on doom a lot.
Hopefully it gets you into a position where you're able to update more on evidence that I think is evidence, by giving you a better picture of what the best arguments against doom would be.
Is 5% low? 5% still strikes me as a "preventing this outcome should plausibly be civilization's #1 priority" level of risk.
I find myself disliking this comment, and I think it's mostly because it sounds like you 1) agree with many of the blunders Rob points out, yet 2) don't seem to have learned anything from your mistake here? I don't think many do or should blame you, and I'm personally concerned that repeated similar blunders on your part will cost EA a lot of outside reputation and internal trust.
Like, do you think that the issue was that you were responding in heat, and if so, will you make a future policy of not responding in heat in future similar situations?
I feel li...
It would not surprise me if most HR departments are set up as the result of lots of political pressures from various special interests within orgs, and that they are mostly useless at their “support” role.
With more confidence, I’d guess a smart person could think of a far better way to do support that looks nothing like an HR department.
I think MATS would be far better served by ignoring the HR frame, and just trying to rederive all the properties of what an org which does support well would look like. The above post looks like a good start, but it’d be a ...
Seems like that is just a bad argument, and can be answered by saying "well, that's obviously wrong for obvious, commonsense reasons". And if they really want to, they can make a spreadsheet, fill it in with the selection pressures they think they're causing, and see for themselves that it is indeed wrong.
The argument I'm making is that for most of the examples you gave, I thought "that's a dumb argument". And if people are consistently making transparently dumb selection arguments, that seems different from people making subtly dumb selection arguments, like econo...
I don’t buy any of the arguments you said at the top of the post, except for toxoplasma of rage (with lowish probability) and evaporative cooling. But both of these (to me) seem like a description of an aspect of a social dynamic, not the aspect. And currently not very decision relevant.
Like, obviously they’re false. But are they useful? I think so!
I'd be interested in different mistakes you often see that are more interesting, more decision-relevant, or less obvious.
I feel like you may be preaching to the choir here, but agree with the sentiment (modulo thinking people should do more of whatever is the best thing on the margin).
Nevermind, I see it's a crosspost.
Overall, I think the progress studies community seems decently aligned with what EAs care about, and could become more so in the coming years. The event had decent epistemics and was less intimidating than an EA conference. I think many people who feel that EA is too intense, cares too much about longtermism, or uses too much jargon could find progress studies a suitable alternative. If the movement known as EA dissolved (God forbid), I think progress studies could absorb many of the folks.
I'm curious about how you think this will develop. It seems li...
Hm. I think I mostly don’t think people are good at doing that kind of reasoning. Generally when I see it in the wild, it seems very naive.
I'd like to know whether you, factoring optics into your EV calcs, see any optics mistakes EA is currently making which haven't already blown up, and that (say) Rob Bensinger probably can't see, given he's not directly factoring optics into his EV calcs.
I think optics concerns are corrosive in the same way that PR concerns are. I quite like Rob Bensinger's perspective on this, as well as Anna's "PR" is corrosive, reputation is not.
I'd like to know what you think of these strategies. Notably, I think they defend against SBF, but not against Wytham Abbey type stuff, and conditional on Wytham Abbey being an object-level smart purchase, I think that's a good thing.
I wouldn’t advocate for engineering species to be sapient (in the sense of having valenced experiences), but for those that already are, it seems sad they don’t have higher ceilings for their mental capabilities. Like having many people condemned to never develop past toddlerhood.
edit: also, this is a long-term goal. Not something I think makes sense to make happen now.
I wish people would stop optimizing their titles for what they think would be engaging to click on. I usually downvote such posts once I realize what was done.
I ended up upvoting this one because I think it makes an important point.
I interpreted "eliminate natural ecosystems" as more like eliminating global poverty in the human analogy. Seems bad to do a mass killing of all animals, and better to just make their lives very good, and give them the ability to mentally develop past mental ages of 3-7.
If done immediately, this seems like it’d severely curtail humanity’s potential. But at some point in the future, this seems like a good idea.
You should make Manifold markets predicting what you'll think of these questions in a year or 5 years.
Didn't see the second part there.
If you would not trade $10 billion for 3 weeks, that could be because:
- I'm more optimistic about empirical research / think the time iterating at the end when we have the systems is significantly more important than the time now when we can only try to reason about them.
- you think money will be much less useful than I expect it to be
I wouldn't trade $10 billion, but I think empirical research is good. It just seems like we can already afford a bunch of the stuff we want, and I expect we will continue to get lots of mon...
the amount of expected serial time a successful (let's say $10 billion) AI startup is likely to counterfactually burn. In the post I claimed that this seems unlikely to be more than a few weeks. Would you agree with this?
No, see my comment above. It's the difference between a super duper AGI and only a super-human AGI, which could be years or months (but very, very critical months!). Plus whatever you add to the hype, plus worlds where you somehow make $10 billion from this are also worlds where you've had an inordinate impact, which makes me more ...
I think it has a large chance of accelerating timelines by a small amount, and a small chance of accelerating timelines by a large amount. You can definitely increase capabilities, even if they're not doing research directly into increasing the size of our language models. Figuring out how you milk language models for all the capabilities they have, the limits of such milking, and making highly capable APIs easy for language models to use are all things which shorten timelines. You go from needing a super duper AGI to take over the world to a barely super-...
Also, from what I've heard, you cannot in fact use ungodly amounts of money to move talent. Generally, if top researchers were swayable that way, they'd be working in industry. Mostly, they just like working on their research, and don't care much about how much they're paid.
In general, it is a bad idea to trade an increased probability that the world ends for money if your goal is to decrease the probability that the world ends. People are usually bad at this kind of consequentialism, and this definitely sets off my 'galaxy-brain take' detector.
And to the "but we'll do it safer than the others" or "we'll use our increased capabilities for alignment!" responses, I refer you to Nate's excellent post rebutting that line of thought.
I interpreted this post as the author saying that they thought general AI capabilities would be barely advanced by this kind of thing, if they were advanced by it at all. The author doesn't seem to suggest building an AGI startup, but rather some kind of AI application startup.
I'm curious if you think your reasoning applies to anything with a small chance of accelerating timelines by a small amount, or if you instead disagree with the object-level claim that such a startup would only have a small chance of accelerating timelines by a small amount.
Most suggestions I see for alternative community norms to the ones we currently have seem to throw out many of the upsides of the community norms they're trying to replace.
When trying to replace community norms, we should try to preserve the upsides of having the previous community norms.
I think Habryka has mentioned that Lightcone could withstand a defamation suit, so there’s not a high chance of financially ruining him. I am tentatively in agreement otherwise though.
True! But for the record, I definitely don't have remotely enough personal wealth to cover such a suit. So if libel suits are permissible, then you may only hear about credible accusations from people on teams willing to bear the financial cost, the number of which is, in my estimation, currently close to 1.
Added: I don't mean to be more pessimistic than is accurate. I am genuinely uncertain to what extent people will have my back if a lawsuit comes up (Manifold has it at 13%), and my uncertainty range does include "actually quite a lot of people are w...