[From the LW version of this post]
Me:
This comment appears transparently intended to increase the costs associated with having written this post, and to be a continuation of the same strategy of attempting to suppress true information.
Martín Soto:
...This post literally strongly misrepresents my position in three important ways¹. And these points were purposefully made central in my answers to the author, who kindly asked for my clarifications but then didn't include them in her summary and interpretation. This can be checked by contrasting her summary of my po
So, Nonlinear-affiliated people are here in the comments disagreeing, promising proof that important claims in the post are false. I fully expect that Nonlinear's response, and much of the discussion, will be predictably shoved down the throat of my attention, so I'm not too worried about missing the rebuttals, if rebuttals are in fact coming.
But there's a hard-won lesson I've learned by digging into conflicts like this one, which I want to highlight, which I think makes this post valuable even if some of the stories turn out to be importantly false:
If a s...
It sounds like you're claiming something like "all information is valuable information, because even if the information is false you've learned something (e.g. that the source is untrustworthy)". I think this is too strong of a claim. Trying to figure out what's true amidst lots of falsehoods is very difficult and takes time. Most people in real life aren't playing perfect Werewolf with a complex Bayesian model that encompasses all hypotheses. Quite the opposite, from what I've seen both in myself and others, our natural tendency is to quickly collapse on ...
The existence of these meta-analyses is much less convincing than you think. One, because a study of the effect of sodium reduction on blood pressure and a study of the effect of antihypertensive medications don't combine into a valid estimate of the effect of sodium reduction on a mostly-normotensive population.
But second, because the meta-analyses are themselves mixed. A 2016 meta-meta-analysis of supposedly systematic meta-analyses of sodium reduction found 5 in favor, 3 against, and 6 inconclusive, and found evidence of biased selective citation.
I strongly disagree with the claim that sodium reduction does more good than harm; I think interventions to reduce sodium intake directly harm the people affected. This is true everywhere, but especially true in poorer countries with hot climates, where sodium-reduction programs have the greatest potential for harm.
(This is directly contrary to the position of the scientific establishment. I am well aware of this.)
The problem is that sodium is a necessary nutrient, but required intake varies significantly between people and between temperatures, because sw...
People hate being taxed for doing things they like
It's much worse than that; in hotter climates, salt isn't a luxury, it's basic sustenance. Gandhi wasn't being figurative when he said "Next to air and water, salt is perhaps the greatest necessity of life."
My understanding is that they strongly prefer you do it between 5 and 7 pm in your local timezone, so that responding officers nominally working a 9-5 schedule can collect overtime payments.
It samples unread posts from a curated list, then, when that list is empty, samples weighted by karma. Unfortunately, if you read posts while logged out, or on a previous version of the site, old posts won't be marked as read, so they'll come up again.
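For illustration, here's a rough sketch of that two-stage logic in Python. This is not the actual site code; the data shapes and the minimum-weight handling are assumptions.

```python
import random

def recommend(curated, all_posts, read_ids):
    """Sketch of the two-stage sampler described above: unread curated
    posts first, then karma-weighted sampling over everything else."""
    unread_curated = [p for p in curated if p["id"] not in read_ids]
    if unread_curated:
        return random.choice(unread_curated)
    pool = [p for p in all_posts if p["id"] not in read_ids]
    # Caveat from above: posts read while logged out never make it into
    # read_ids, so they remain eligible and can be recommended again.
    weights = [max(p["karma"], 1) for p in pool]  # avoid zero/negative weights
    return random.choices(pool, weights=weights, k=1)[0]
```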
I didn't make that claim in the grandparent comment, and I don't know of any specific other deceptive statements in it. But, on consideration... yeah, there probably are. Most of the post is about internal details of FHI operations which I know little about and have no easy way to verify. The claim about the Apology is different in that it's easy to check; it seems reasonable to expect that if the most-verifiable part contains an overreach, then the less-verifiable parts probably do too.
In my experience, there's a pattern, in social attacks like this, where critics are persistently, consistently unwilling to restrain themselves to only making criticisms that are true, regardless of whether the true criticisms would have been enough. This is a big deal and should not be tolerated.
reducing existential risk by .00001 percent to protect 10^18 future humans
Very-small-probability of very-large-impact is a straw man. People who think AGI risk is an important cause area think that because they also think that the probability is large.
I roll to disbelieve on these numbers. "Multiple reports a week" would be >100/year, which from my perspective doesn't seem consistent with the combination of (1) the total number of reports I'm aware of being a lot smaller than that, and (2) the fact that I can match most of the cases in the Time article (including ones that had names removed) to reports I already knew about.
(It's certainly possible that there was a particularly bad week or two, or that you're getting filled in on some sort of backlog.)
I also don't believe that a law school, or any gro...
They aren't currently shown separately anywhere. I added it to the ForumMagnum feature-ideas repo but not sure whether we'll wind up doing it.
As a datum from the LessWrong side (where I'm a moderator): when the crossposting was first implemented, there were initially a bunch of crossposts that weren't doing well (from a karma perspective) and seemed to be making the site worse. To address this, we added a requirement that to crosspost from EAF to LW, you need 100 karma on LW. I believe the karma requirement is symmetrical: in order to crosspost an LW post onto EAF, you need 100 EAF karma.
The theory being, a bit of karma shows that you probably have some familiarity with the crosspost-destination site cult...
Suppose there's a spot in a sentence where either of two synonyms would be effectively the same. That's 1 bit of available entropy. Then a spot where either a period or a comma would work; that's another bit of entropy. If you compose a message and annotate it with 48 two-way branches like this, using a notation like spintax, then you can programmatically create 2^48 effectively-identical messages. Then if you check the hash of each, you have good odds of finding one which matches the 48-bit hash fragment.
(Fyi a hash of only 12 hex digits (48 bits) is not long enough to prevent retroactively composing a message that matches the hash-fragment, if the message is long enough that you can find 48 bits of irrelevant entropy in it.)
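To make the mechanics concrete, here's a minimal sketch in Python. The spintax-style notation, template text, and hash choice are all illustrative assumptions; the point is only that n two-way branches yield 2^n candidate messages to test against a published hash fragment.

```python
import re
import itertools
import hashlib

# Message with two-way branch points in a simple {a|b} spintax notation.
# A real attack would use ~48 branch points for 2^48 candidates; 4 shown here.
TEMPLATE = ("I {will|shall} publish the results {soon|shortly}{.|!} "
            "The {full|complete} data will follow.")

def expand(template):
    """Yield every message produced by choosing one side of each {a|b} branch."""
    parts = re.split(r"\{([^{}]*)\}", template)
    fixed = parts[0::2]                        # literal text between branches
    options = [p.split("|") for p in parts[1::2]]
    for choices in itertools.product(*options):
        yield "".join(f + c for f, c in zip(fixed, choices)) + fixed[-1]

variants = list(expand(TEMPLATE))              # 2^4 = 16 variants here
# For the demo, take the target fragment from one secretly chosen variant;
# in the real setting the 48-bit (12 hex digit) fragment is already public,
# and you enumerate ~2^48 variants until one matches.
target = hashlib.sha256(variants[11].encode()).hexdigest()[:12]

for message in variants:
    if hashlib.sha256(message.encode()).hexdigest().startswith(target):
        print("matching message:", message)
        break
```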
One of the defining characteristics of EA is rejecting certain specific reasons for counting people unequally; in particular, under EA ideology, helping someone in a distant country is just as good as helping a nearby person by the same amount. Combined with the empirical fact that a dollar has a much larger effect when spent on carefully chosen interventions in poorer countries, this leads to EA emphasizing poverty-reduction programs in poor, mainly African countries, in contrast to non-EA philanthropy, which tends to favor donations local to wherever ...
Yeah, no, this story is not overall plausible and I would bet at better than 50-50 odds that there's a major misrepresentation here regarding what happened. Option 1 is that a grant was approved pending due diligence, then pulled during the due diligence process. That would be mildly embarrassing, and would probably imply a grant evaluator somewhere didn't do their job, but it wouldn't be the scandal that this purports to be. Option 2 is that the letter of intent is an outright forgery.
As a Swede who is somewhat familiar with the publication Expo, I would put the risk of that document being a forgery at <5%. They are specifically known for their investigative journalism, and I would be very surprised if they screwed up something basic like that.
Also, wouldn't it be extremely strange behavior from FLI if that document actually were a forgery? Pointing that out would be the go-to defense, rather than what they are doing now.
Lots of the comments here are pointing at details of the markets and whether it's possible to profit off of knowing that transformative AI is coming. Which is all fine and good, but I think there's a simple way to look at it that's very illuminating.
The stock market is good at predicting company success because there are a lot of people trading in it who think hard about which companies will succeed, doing things like writing documents about those companies' target markets, products, and leadership. Traders who do a good job at this sort of ana...
The claim in the post (which I think is very good) is that we should have a pretty strong prior against anything which requires positing massive market inefficiency on any randomly selected proposition where there is lots of money on the table. This suggests that you should update away from very short timelines. There's no assumption that markets are a "mystical source of information", just that if you bet against them you almost always lose.
There's also a nice "put your money where your mouth is" takeaway from the post, which AFAIK few short-timelines people are doing.
It doesn't seem all that relevant to me whether traders have a probability like that in their heads. Whether they have a low probability or are not thinking about it, they're approximately leaving money on the table in a short-timelines world, which should be surprising. People have a large incentive to hunt for important probabilities they're ignoring.
Of course, there are examples (cf. behavioral economics) of systemic biases in markets. But even within behavioral economics, it's fairly commonly known that it's hard to find ongoing, large-scale biases in financial markets.
I think a fair number of market participants may have something like a probability estimate for transformative AI within five years and maybe even ten. (For example back when SoftBank was throwing money at everything that looked like a tech company, they justified it with a thesis something like "transformative AI is coming soon", and this would drive some other market participants to think about the truth of that thesis and its implications even if they wouldn't otherwise.) But I think you are right that basically no market participants have a probability...
I find it hard to believe that the number of traders who have considered crazy future AI scenarios is negligible. New AI models, semiconductor supply chains, etc. have gotten lots of media and intellectual attention recently. Arguments about transformative AGI are public. Many people have incentives to look into them and think about their implications.
I don't think this post is decisive evidence against short timelines. But neither do I think it's a "trap" that relies on fully swallowing EMH. I think there're deeper issues to unpack here about why much of the world doesn't seem to put much weight on AGI coming any time soon.
Definitely agree with this. Consider, for instance, how markets seemed to react strangely / too slowly to the emergence of the Covid-19 pandemic, and then consider how much more familiar and predictable the idea of a viral pandemic is than the idea of unaligned AI:
...The coronavirus was x-risk on easy mode: a risk (global influenza pandemic) warned of for many decades in advance, in highly specific detail, by respected & high-status people like Bill Gates, which was easy to understand with well-known historical precedents, fitting into s
Narrow the thought experiment to "cancer that banks aren't able to find out about" and the thought experiment goes through fine. And US institutions are strongly supportive of secrecy, in general, so I think this is actually the typical case (at least for people who are young enough that seeking a large loan is not itself suspicious).
That does not get the thought experiment through.
Mortgage rates for older people are higher. And if mortgage holders die, the mortgage must still be paid by the executor of an estate, which is a disincentive for anyone with a bequest motive.
I'm sure that we can find some corner case where young cancer victims with no friends/family or no regard for their friends/family act otherwise. But this hardly seems important for the point that you -- yes, you -- can make money by implementing the trades suggested in this piece. Which is the claim that Yudkowsky is using the cancer victim analogy to argue against.
I don't have an inside view on specific homeless charities in SF, but I do have an outside-view impression that the amount of money going in, contrasted with the results being achieved, implies that something is more-wrong-than-usual. That is, I think money is probably not just being spent inefficiently, but outright embezzled. It should be possible to identify individual charities to donate to doing good work, but EA's usual charity-oversight methodology is typically aimed at catching dumb ideas, not at catching outright fraud.
If a charity asks a donor for financial audits, the response is going to be: We don't do paperwork for you. You do paperwork for us. Followed by giving the donation to someone else.
I think the EA community should have an organization full of private investigators and forensic accountants tucked away somewhere, with a broad mandate to look for problems focused on EA-connected places.
But FTX is a for-profit corporation. The fact that they're making donations doesn't give anyone the power to impose accounting standards on them; that capability is only held by government institutions with power of subpoena.
I heard the same claim, from a different source: that SBF did something unethical at Alameda Research prior to founding FTX, that some EAs had left Alameda saying that SBF was unethical and no one should work with him, and that there were privately circulated warnings to this effect. (The person I heard this from hasn't spoken publicly about it yet as far as I know. They are someone with no previous or current involvement with FTX or Alameda Research, who I think is reporting honestly and is well positioned to have heard such things.)
(EDIT: others along the rumor-path via which I heard this have now spoken on this thread, in greater detail than I have; so this comment is a duplicate report and should not be counted.)
The story of how it got that way is that agree/disagree was originally built as an experiment-with-voting-systems feature, with the key component being that different posts can have different voting systems without conflict. (See e.g. this thread for another voting system we tried.)
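As a hypothetical illustration of that design (not the actual ForumMagnum schema; the field and system names here are invented), the idea is roughly that each post carries its own voting-system identifier and scoring dispatches on it:

```python
# Hypothetical sketch of per-post voting systems; names are illustrative.
VOTING_SYSTEMS = {
    "one_axis": lambda votes: {"karma": sum(v["karma"] for v in votes)},
    "two_axis": lambda votes: {"karma": sum(v["karma"] for v in votes),
                               "agreement": sum(v.get("agreement", 0)
                                                for v in votes)},
}

def score(post, votes):
    # Each post names its own voting system, so posts with different
    # systems can coexist on the same site without conflict.
    return VOTING_SYSTEMS[post["votingSystem"]](votes)
```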
The main reason for hesitation (other ForumMagnum developers might not agree) is that I'm not really convinced that 2-axis voting is the right voting system, and expanding it from a posts-have-different-voting-systems context to a whole-site-is-2-axis context limits the op...
Phil Torres is not currently a deadname. A deadname is a name that someone is no longer using in their public persona, but the name Phil is displayed prominently on their web page. Searching Amazon for Phil Torres finds their books, searching Amazon for Emile Torres does not.
Moreover, it's basically impossible to understand what's going on here without knowing that Phil and Emile are the same person, and asking the original poster to avoid mentioning the name-mapping is asking them to obfuscate.
Phil/Émile changed their name, but did not change pronouns. A Facebook post I saw indicated that the name change was to avoid confusion with a different Phil Torres, who is an entomologist. While their Twitter profile specifies they/them pronouns, their Facebook profile says he/him (both profiles have the updated name). I think under any reasonable etiquette standard, that means either pronoun is acceptable unless they directly say otherwise.
If your group hasn't done the Petrov Day ritual, this is a good place to start. (There are several variants to choose from, and it's a living tradition, so making your own variant is encouraged, though obviously not required.)
22. If successful, in five years the impact of our project will be...
Eighty percent of California water utilities will be implementing leading water efficiency programmatic practices to bring down the water consumption of urban areas and we are able to implement these analytics in any area of the world faced with an aridifying climate.
I was under the impression that California's water problems are almost entirely agricultural, meaning that improving urban-area water use in particular won't help because that's not where the water is going. I'm not ent...
If there's an arms race dynamic, it's probably a disaster no matter who wins. Having room to delay for late-stage alignment experiments is the barest minimum requirement in order for humanity to have any chance of survival. So the best case is to not have an arms race at all. The next-best thing is for the organization that wins to be the sort of organization that could stop at the brink for late-stage alignment research, if its leader decided to, and for it to have a stable leader who's sane enough to make that decision. Then maximize the size of the gap ...
I don't think biodiversity is good; in fact, it's probably bad. If we replaced natural ecosystems with curated ones, we'd have much less of a problem with zoonotic transmissions creating new diseases and pests, probably better nature aesthetics, and maybe some ability to use ecosystems to remove pollutants that are currently hard to get rid of.
It's important to remember that most of the US population was exposed to a significant amount of environmentalist propaganda as children, before they were able to think critically, and that there were falsehoods embedded in that propaganda. Ecosystems do not spontaneously turn into wastelands when they're perturbed; they mostly turn into boring forests and things like that.
Too abstract. Second-order effects are mostly not mysterious, they're things which you can predict, not perfectly but usually well enough, if you look at the right parts of the world and apply some economics. If someone's arguing against an intervention because they think the intervention will have bad second-order effects, then the followup question is whether those effects are real and how big they are. Answering that means looking at the details.
That said, in my experience, if you come across an argument between two people, and one person is saying Something Must Be Done, and the other person is saying You Fool That Will Backfire For Reasons I Will Explain, the second person is almost always right.
I think this is a decent idea given a small reframe. Rather than thinking of it as earmarking the cash for a specific purpose, treating it like an unenforced restriction, instead think of the cash transfers as having an opportunity to provide information attached, and try to provide good information. I.e., instead of "this cash transfer is for X", say "this cash transfer comes with a small pamphlet with several purchase ideas X, Y, Z". This framing is more cooperative, and fails more gracefully if the recommendations are bad.
Conventional wisdom in the business world is that brick-and-mortar retail (and brick-and-mortar books in particular) is a declining business, because it can't compete effectively with online stores. So I'm really skeptical of whether this business is financially viable enough to survive without continuous infusions of external cash, let alone with enough slack to do things that aren't profit-motivated.
What that means in practice is that you haven't actually pinned the cost down to the right order of magnitude. Neither of the business sales you mentioned is compar...
How does XR weigh costs and benefits?
Does XR consider tech progress default-good or default-bad?
The core concept here is differential intellectual progress. Tech progress can be bad if it reorders the sequence of technological developments to be worse, by making a hazard precede its mitigation. In practice, that applies mainly to gain-of-function research and to some, but not all, AI/ML research. There are lots of outstanding disagreements between rationalists about which AI/ML research is good vs bad, which, when zoomed in on, reveal disagreements about A...
I think the common factor, among forms of advice that people are hesitant to give, is that they involve some risk. So if, for example, I recommend a supplement and it causes a health problem, or I recommend a stock and it crashes, there's some worry about blame. If the supplement helps, or the stock rises, there's some possibility of getting credit; but, in typical social relationships, the risk of blame is a larger concern than the possibility of credit, which makes people more than optimally hesitant.
I was somewhat confused by the scale using Categorizing Variants of Goodhart's Law as an example of a 100mQ paper, given that the LW post version of that paper won the 2018 AI Alignment Prize ($5k), which makes a pretty strong case for it being "a particularly valuable paper" (1Q, the next category up). I also think this scale significantly overvalues research agendas and popular books relative to papers. I don't think these aspects of the rubric wound up impacting the specific estimates made here, though.
- From people I know that have gotten vaccines in the Bay, it sounds like appointments have been booked quickly after being posted / there aren’t a bunch of openings.
This was true in February, but I think it's no longer true, due to a combination of the Johnson & Johnson vaccine being added and the currently-eligible groups being mostly done. Berkeley Public Health sent me this link, which shows hundreds of available appointment slots over the next few days at a dozen different Bay Area locations.
(EDIT: See below, the map I linked to may be mixing vacci...
The core thesis here seems to be:
I claim that [cluster of organizations] have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact.
There are different ways of unpacking this, so before I respond I want to disambiguate them. Here are four different unpackings:
Thank you for this thoughtful reply! I appreciate it, and the disambiguation is helpful. (I would personally like to do as much thinking-in-public about this stuff as seems feasible.)
I mean a combination of (1) and (4).
I used to not believe that (4) was a thing, but then I started to notice (usually unconscious) patterns of (4) behavior arising in me, and as I investigated further I kept noticing more & more (4) behavior in me, so now I think it's really a thing (because I don't believe that I'm an outlier in this regard).
...(4) is the interes
There's no simple yes or no answer. A) Competence is multi-dimensional, and B) there are some types of competencies that would make me discourage someone from running for office and doing other things instead.
There are also several other factors besides competency that go into whether someone is a good fit for running for office, among them things like personal history, location, temperament, and the badness of whomever is currently occupying the office in question.
I think some EAs should pursue local political office, and who those EAs are should be deter...
Looking at ads and introducing ads into your environment is not free, it's mildly harmful. If you offered me 1 cent per ad to display ads in my browser, I would refuse. The money going to charity doesn't change that.
LessWrong has a sidebar which makes the link to All Posts much more prominent; it looks like EA Forum hasn't adopted that yet, but it would probably help.
Were you under the impression that I was disagreeing with the sodium-reduction guidelines because I was merely unaware that they existed? This is an area of considerable controversy.
Quitting smoking, alcohol, salt, and sugar is also hard–they are quite addictive.
For most people, cutting salt intake is harmful, not helpful. Salt isn't new to human diets, and it isn't a matter of addiction; it's just a necessary nutrient.
Sugar can be harmful, but only insofar as it crowds out other calorie sources which are better. When people try to cut sugar, they often fail (and mildly harm themselves) because they neglect to replace it.
Post-mortem donation is fine, but being asked to sign up for kidney donation would be severely trust-destroying for me.
This happens to posts by accounts which have never posted before; established accounts (at least one post or comment) don't have to wait. This was instituted on both LW and EA Forum because of a steady stream of bot-generated spam.
That doesn't seem especially relevant to the question of whether first-world consumers should buy farmed or wild-caught fish; the amount caught from fisheries is set by regulations, not by demand, so consumer demand does not, on the margin, increase or decrease overfishing.
I doubt this makes a difference. Most of the market treats farmed and wild-caught fish as close substitutes, the supply of wild-caught fish is inelastic, and the supply of farmed fish is highly elastic. So if you switch from farmed to wild-caught fish, you are probably affecting market prices in a way which causes one other person to make the opposite change.
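A toy model of that argument, with made-up numbers, for concreteness:

```python
# Toy model: wild catch is fixed by quota, farmed supply is elastic, and
# consumers treat the two as close substitutes. Numbers are illustrative.
WILD_QUOTA = 100           # tonnes caught regardless of demand
TOTAL_DEMAND = 250         # tonnes of fish consumers buy overall

farmed_before = TOTAL_DEMAND - WILD_QUOTA   # 150 tonnes farmed

# You shift 1 tonne of your purchases from farmed to wild. The quota is
# unchanged, so wild prices rise until someone else shifts 1 tonne back:
farmed_after = TOTAL_DEMAND - WILD_QUOTA    # still 150 tonnes farmed

print(farmed_before == farmed_after)        # True: totals are unchanged
```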
There are three additional premises required here. The first is that your own use of funds from investments must be significantly better than that of other shareholders of the companies you invest in. The second is that the growth rate of the companies you invest in must exceed the rate at which the marginal cost of doing good increases, due to low-hanging fruit getting picked and due to lost opportunities for compounding. The third is that the growth potential of AI companies isn't already priced in, in a way that reduces your expected returns to be no better than index funds.
The first of these premises is probably true. The second is probably false. The third is definitely false.
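A toy version of the second premise, with illustrative numbers: if returns compound at rate g while the marginal cost of doing good grows at rate c, waiting only wins when g > c.

```python
def impact_ratio(g, c, years):
    """Impact of invest-then-donate relative to donating now, in this
    toy model: money grows at g, cost-per-unit-of-good grows at c."""
    return ((1 + g) / (1 + c)) ** years

# E.g. 7% investment returns vs. 10% annual growth in the cost of doing good:
print(impact_ratio(0.07, 0.10, 20))  # ~0.57 < 1, so waiting destroys value
```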
Interesting. I think I can tell an intuitive story for why this would be the case, but I'm unsure whether that intuitive story would predict all the details of which models recognize and prefer which other models.
As an intuition pump, consider asking an LLM a subjective multiple-choice question, then taking that answer and asking a second LLM to evaluate it. The evaluation task implicitly asks the evaluator to answer the same question, then cross-check the results. If the two LLMs are instances of the same model, their answers will be more strongly cor...
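A toy simulation of that intuition pump, under the assumption that "evaluating" amounts to implicitly re-answering the same question; the per-model biases are made up:

```python
import random

random.seed(0)

# Each "model" answers a subjective A/B question with its own bias toward A.
def answer(bias):
    return "A" if random.random() < bias else "B"

def agreement(bias_answerer, bias_evaluator, trials=100_000):
    """Fraction of trials where the evaluator's implicit answer matches."""
    hits = sum(answer(bias_answerer) == answer(bias_evaluator)
               for _ in range(trials))
    return hits / trials

model_x, model_y = 0.8, 0.3          # illustrative per-model biases
print(agreement(model_x, model_x))   # same model: ~0.68 agreement
print(agreement(model_x, model_y))   # different models: ~0.38 agreement
```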