e.g. blocking oil depots seems to have comparable effects to throwing soup) although the analysis here is still to be finalised.
Interested to hear more, but I would not expect blocking oil depots to be effective either. Why would it be? It may be related, but it's not so compelling to the average observer. Compare with the example I used, of sit-ins, which are eminently compelling. If you compare ineffective strategies with ineffective strategies, you will pick up noise and low-order effects.
...Specifically, I think there are some random factors around luck, p
I think this post is a bit too humble. The social movements that worked had reasons they worked. The structure of the problem, the allies they were likely to find, and the enemies they were likely to face meant that the particular strategies they chose worked. Similarly for the social movements which failed. These are reasons you can & should learn from, and your ability to look at those reasons is the largest-order effect here.
Most movements don’t; they do what you describe: choose their favorite movement and cargo-cult their way to failure.
The mos...
The social movements that worked had reasons they worked. The structure of the problem, the allies they were likely to find, and the enemies they were likely to face meant that the particular strategies they chose worked. Similarly for the social movements which failed. These are reasons you can & should learn from, and your ability to look at those reasons is the largest-order effect here.
I take the point about being too humble, but I'm not sure I fully agree with this bit above! Specifically, I think there are some random factors around luck, person...
I will note that my comment made no reference to who is “more altruistic”. I don’t know what that term means personally, and I’d rather not get into a semantics argument.
If you give the definition you have in mind, then we can argue over whether it's smart to advocate that someone ought to be more altruistic in various situations, and whether it gets at intuitive notions of credit assignment.
I will also note that given the situation, it's not clear to me Anna’s proper counterfactual here isn’t making $1M and getting nice marketable skills, since she and Beli...
I think the right stance here is a question of “should EA be praising such people, or get annoyed they’re not giving up more, if it wants to keep a sufficient filter for who it calls true believers”, and the answer there is obviously that both groups are great & true believers, and it seems dumb to get annoyed at either.
The 10% number was notably chosen for these practical reasons (there is nothing magic about that number), and to back-justify that decision with bad moral philosophy about “discharge of moral duty” is absurd.
I'm not going to defend my whole view here, but I want to give a thought experiment as to why I don't think that "shadow donations"—the delta between what you could earn if you were income-maximizing, and what you're actually earning in your direct work job—are a great measure for the purposes of practical philosophy (though I agree they're both a relevant consideration and a genuine sacrifice).
Imagine two twins, Anna and Belinda. Both have just graduated with identical grades, skills, degrees, etc. Anna goes directly from college to work on AI safety at Safet...
It's not clear to me whether you're talking about people who (a) do a voluntary salary sacrifice while working at an EA org, or (b) people who could have earned much more in industry but moved to a nonprofit so now earn much less than their hypothetical maximum earning potential.
In case (a), yes, their salary sacrifice should count towards their real donations.
But I think a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justif...
There's already been much critique of your argument here, but I will just say that by the "level of influence" metric, Daniela knocks it out of the park compared to Donald Trump. I think it is entirely uncontroversial, and perhaps an understatement, to claim that the world as a whole and EA in particular has a right to know & discuss pretty much every fact about the personal, professional, social, and philosophical lives of the group of people who, by their own admission, are literally creating God. And are likely to be elevated to a permanent place of power ...
When you start talking about Silicon Valley in particular, you start getting confounders like AI, which has a high chance of killing everyone. But if we condition on that going well, or assume the relevant people won't be working on that, then yes, that does seem like a useful activity, though note that Silicon Valley activities are not very neglected, and you can certainly do better than them by pushing EA money (not necessarily people[1]) into the research areas which are more prone to market failures or are otherwise too "weird" for others to believe in.
O...
This seems pretty unlikely to me, tbh. People are just less productive in the developing world than in the developed world, and it's much easier to do stuff, including doing good, when you have functioning institutions and are surrounded by competent people, connections & support structures, etc.
That's not to say sending people to the developed world is bad. Note that you can get lots of the benefits of living in a developed country by simply having the right to live in a developed country, or having your support structure or legal system or credentials based in ...
I think it seems pretty evil & infantilizing to force people to stay in their home country because you think they’ll do more good there. The most you should do is argue they’ll do more good in their home country than in a Western country, then leave it up to them to decide.
I will furthermore claim that if you find yourself disagreeing, you should live in the lowest quality of living country you can find, since clearly that is the best place to work in your own view.
Maybe I have more faith in the market here than you do, but I do think that technical &...
This is not a discussion about anyone forcing anyone to do anything (no one has suggested that), but the original question was about the degree to which we should potentially fund and support the best workers in our orgs to emigrate. This is a hugely important question, because from experience in Uganda, with enough time and resources I could probably help almost any highly qualified and capable person to emigrate, but is that really the best thing for me to do?
As things stand, every country in the world has huge restrictions on emigration, which does often "force" pe...
I do think this is correct to an extent, but also that much moral progress has been made by reflecting on our moral inconsistencies and smoothing them out. I at least value fairness, which is a complicated concept but is also actively repulsed by the idea that those closer to me should weigh more in society's moral calculations. Other values I have, like family, convenience, selfish hedonism, friendship, etc., are at odds with this fairness value in many circumstances.
But I think it's still useful to connect the drowning child argument with the parts of me ...
Otherwise I think that you are in part spending 80k's reputation in endorsing these organizations
Agree on this. For a long time I've had a very low opinion of 80k's epistemics[1] (both the podcast and the website), and having orgs like OpenAI and Meta on there was a big contributing factor[2].
In particular, that they try to present as an authoritative source on strategic matters concerning job selection while not doing the necessary homework to actually claim such status, & using articles (and parts of articles) that empirically nobody reads &
The second two points don’t seem obviously correct to me.
First, the US already has a significant amount of food security, so it's unclear whether cultivated meats would actually add much.
Second, if cultivated meats destroy the animal agriculture industry, this could very easily lead to a net loss of jobs in the economy.
rationalist community kind of leans right wing on average
Seems false. It leans right compared to the extreme left wing, but right compared to the general population? No. It's too libertarian for that. I bet rightists would also say it leans left, and centrists would say it's too extreme. Overall, I think it's just classically libertarian.
There's much thought in finance about this. Some general books are:
And more particularly, The Black Swan: The Impact of the Highly Improbable, along with other stuff by Taleb (this is kind of his whole thing).
The same standards as applied to anything else: a decent track record of such experiments succeeding, and/or a well-supported argument based on (in this case) sound economics.
So far the track record is heavily against. Indeed, many of the worst calamities in history took the form of "revolution".
Absent that track record, you need one hell of an argument to explain why your plan is better, which at the minimum likely requires basing it on sound economics (which, if you want particular pointers, mostly means the Chicago school, but sufficiently good complexity economics would also be fine).
It makes me sad that I automatically double timing estimates from EA orgs, treat that as the absolute minimum time something could take, and am often still disappointed.
I definitely strongly agree with this. I do think it's slowly, ever so slowly, getting better though.
More broadly I think Anthropic, like many, hasn’t come to final views on these topics and is working on developing views, probably with more information and talent than most alternatives by virtue of being a well-funded company.
It would be remiss not to also mention the large conflict of interest that analysts at Anthropic have when developing these views.
I do dislike this feature of EA, but I don't think the solution is to transition away from a one-grant-at-a-time model. Probably better would be to have exit coaches to help EAs find a new career outside EA, if they've built up a bunch of skills because funding sources or other generally EA-endorsed sources told them they would be given money if they used such skills for the benefit of the universe.
What talents do you think aren't applicable outside the EAsphere?
(Edit: I do also note that I believe 80k should be taken a lot less seriously than they present themselves, and than most EAs take them. Their incorrect claims of EA being talent-constrained are one of many reasons I distrust them.)
I'm not sure what the solution is - more experimentation seems generally like a good idea, but EA fundmakers seem quite conservative in the way they operate, at least once they've locked in a modus operandi.
For what it's worth, my instinct is to try a model with more 'grantmakers' who take a more active, product-managery/ownery role, where they make fewer grants, but the grants are more like contracts of employment, such that the grantmakers take some responsibility for the ultimate output (and can terminate a contract like a normal employer if the '...
Recommendation: A collection of paradoxes dealing with Utilitarianism. This seems to me to be what you wrote, and would have had me come to the post with more of a “ooo! Fun philosophy discussion” rather than “well, that's a very strong claim… oh look at that, all so-called inconsistencies and irrationalities either deal with weird infinite ethics stuff or are things I can’t understand. Time to be annoyed about how the headline is poorly argued for.” The latter experience is not useful or fun; the former is nice depending on the day & company.
My understanding of history says that usually letting militaries have such power, or initiating violent overthrow via any other means to launch an internal rebellion, leads to bad results. Examples include the French, Russian, and English revolutions. Counterexamples possibly include the American Revolution, though notably I struggle to point to anything concrete that would have been different about the world had America had a peaceful break-off like Canada later did.
Do you know of counter-examples, maybe relating to poor developing nations which after the rebellion became rich developed nations?
True! But for the record, I definitely don't have remotely enough personal wealth to cover such a suit. So if libel suits are permissible, then you may only hear about credible accusations from people with teams willing to back the financial cost, the number of which, in my estimation, is currently close to 1.
Added: I don't mean to be more pessimistic than is accurate. I am genuinely uncertain to what extent people will have my back if a lawsuit comes up (Manifold has it at 13%), and my uncertainty range does include "actually quite a lot of people are w...
This seems false. Dramatic increases in life-extension technology have been happening ever since the invention of modern medicine, so it's strange to say the field is speculative enough not to even consider.
I agree with your conclusion but disagree with your reasoning. I think it's perfectly fine, and should be encouraged, to make advances in conceptual clarification which confuse people. Clarifying concepts can often result in people being confused about stuff they weren't previously, and this often indicates progress.
My response would be a worse version of Marius’s response. So just read what he said here for my thoughts on hits-based approaches for research.
I disagree, and wish you’d actually explain your position here instead of being vague & menacing. As I’ve said in my previous comment
...I will add to my note on (2): In most news articles in which I see Connor or Conjecture mentioned, I feel glad he talked to the relevant reporter, and think he/Conjecture made that article better. It is quite an achievement in my book to have sane conversations with repor
(cross-posted to LessWrong)
I agree with Conjecture's reply that this reads more like a hit piece than an even-handed evaluation.
I don't think your recommendations follow from your observations, and such strong claims surely don't follow from the actual evidence you provide. I feel like your criticisms can be summarized as the following:
Conjecture was publishing unfinished research directions for a while.
Conjecture does not publicly share details of their current CoEm research direction, and that research direction seems hard.
Conjecture told the gov
My impression is that immigration policy is unusually difficult to affect given how much of a hot-button issue it is in the US (ironic, given your forum handle). So while the scale may be large, I’m skeptical of the tractability.
On OpenPhil’s behavior, yeah, if they’re making it much easier for AI labs to hire talent abroad, then they’re making a mistake, but the path from all-cause increases in high-skill immigration to AI capabilities increases has enough noise that the effects here may be diffuse enough to ignore. There’s also the case that AI safety be...
Public sentiment is already mostly against AI, when public sentiment has an opinion. Though it's not a major political issue (yet), so people may not be thinking about it. If it turns into a major political issue (there are ways of regulating AI without turning it into a major political issue, and you probably want to do so), then it will probably become 50/50 due to what politics does to everything.
You can argue that the theorems are wrong, or that the explicit assumptions of the theorems don't hold, which many people have done, but like, there are still coherence theorems, and completeness seems quite reasonable to me, and the argument here seems very weak (and I would urge the author to create an actual concrete situation, that doesn't seem very dumb, in which a highly intelligent, powerful, and economically useful system has non-complete preferences).
If you want to see an example of this, I suggest John's post here.
Working on it.
Spoiler (don't read if you want to work on a fun puzzle or test your alignment mettle).
I would recommend, when making a startup, that you have a clear idea of what your startup would actually do, which takes into account your own & your company’s strengths & weaknesses & comparative advantage. Many want to make money; those who succeed usually have some understanding of how (even if later they end up radically pivoting to something else).
I know for one that computer system security and consensus mechanisms for crypto rely on proofs and theorems to guide them. It is common, when you want a highly secure computer system, to provably verify its security, and consensus mechanisms rely heavily on mechanism design. Similarly for counter-intelligence: cryptography is invaluable in this area.
Mass Gell-Mann amnesia effect because, say, I may look at others talking about my work or work I know closely, and say "wow! That's wrong", but look at others talking about work I don't know closely and say "wow! That implies DOOM!" (like dreadfully wrong corruptions of the orthogonality thesis), and so decide to work on work that seems relevant to that DOOM?
Basically, there are simple arguments around 'they are an AGI capabilities organization, so obviously they're bad', and more complicated arguments around 'but they say they want to do alignment work', and then even more complicated arguments on those arguments going 'well, actually it doesn't seem like their alignment work is all that good actually, and their capabilities work is pushing capabilities, and still makes it difficult for AGI companies to coordinate to not build AGI, so in fact the simple arguments were correct'. Getting more into depth would require a writeup of my current picture of alignment, which I am writing, but which is difficult to convey via a quick comment.
The decisions which caused the FTX catastrophe, the fact that EA is counterfactually responsible for the three primary AGI labs, Anthropic being entirely run by EAs yet still doing net-negative work, and the funding of mostly capabilities-oriented ML work with vague alignment justifications (and potentially similar dynamics in biotech, which are more speculative for me right now), with the creation of GPT and[1] RLHF as particular examples of this.
I recently found out that GPT was not in fact developed for alignment work. I had gotten confused with some
Strong disagree for misattributing blame and eliding the question.
To the extent that "EA is counterfactually responsible for the three primary AGI labs," you would need to claim that the ex-ante expected value of specific decisions was negative, and that those decisions were because of EA, not that it went poorly ex-post. Perhaps you can make those arguments, but you aren't.
Ditto for "The decisions which caused the FTX catastrophe" - Whose decisions, where does the blame go, and to what extent are they about EA? SBF's decision to misappropriate funds, or fraudulently misrepresent what he did? CEA not knowing about it? OpenPhil not investigating? Goldman Sachs doing a bad job with due diligence?
EAs should read more deep critiques of EA, especially external ones
- For instance this blog and this forthcoming book
The blog post and book linked do not seem likely to me to discuss "deep" critiques of EA. In particular, I don't think the problems with the most harmful parts of EA are caused by racism or sexism or insufficient wokeism.
In general, I don't think many EAs, especially very new EAs with little context or knowledge about the community, are capable of distinguishing "deep" from "shallow" criticisms. I also expect them to be overly optimistic a...
Eh, I don’t think this is a priors game. Quintin has lots of information, I have lots of information, so if we were both acting optimally according to differing priors, our opinions likely would have converged.
In general I’m skeptical of explanations of disagreement which reduce things to differing priors. It’s just not physically or predictively correct, and it feels nice because now you no longer have an epistemological duty to go and see why relevant people have differing opinions.
I want to lower frictions to criticism as much as possible, because I think criticism is very good.
The main argument against that I’ve seen is that an org won’t be able to meaningfully respond due to the pace at which things move on the forum. This sounds like a UI issue. No need to create a harmful community norm.