All of jimrandomh's Comments + Replies

Interesting. I think I can tell an intuitive story for why this would be the case, but I'm unsure whether that intuitive story would predict all the details of which models recognize and prefer which other models.

As an intuition pump, consider asking an LLM a subjective multiple-choice question, then taking that answer and asking a second LLM to evaluate it. The evaluation task implicitly asks the evaluator to answer the same question, then cross-check the results. If the two LLMs are instances of the same model, their answers will be more strongly cor... (read more)

[From the LW version of this post]

Me:

This comment appears transparently intended to increase the costs associated with having written this post, and to be a continuation of the same strategy of attempting to suppress true information.

Martín Soto:

This post literally strongly misrepresents my position in three important ways¹. And these points were purposefully made central in my answers to the author, who kindly asked for my clarifications but then didn't include them in her summary and interpretation. This can be checked by contrasting her summary of my po

... (read more)
jimrandomh
8mo

So, Nonlinear-affiliated people are here in the comments disagreeing, promising proof that important claims in the post are false. I fully expect that Nonlinear's response, and much of the discussion, will be predictably shoved down the throat of my attention, so I'm not too worried about missing the rebuttals, if rebuttals are in fact coming.

But there's a hard-won lesson I've learned by digging into conflicts like this one, which I want to highlight, which I think makes this post valuable even if some of the stories turn out to be importantly false:

If a s... (read more)

5
Linch
7mo
Repeating myself from when I first saw this comment:

It sounds like you're claiming something like "all information is valuable information, because even if the information is false you've learned something (e.g. that the source is untrustworthy)". I think this is too strong of a claim. Trying to figure out what's true amidst lots of falsehoods is very difficult and takes time. Most people in real life aren't playing perfect Werewolf with a complex Bayesian model that encompasses all hypotheses. Quite the opposite, from what I've seen both in myself and others, our natural tendency is to quickly collapse on ... (read more)

6
NunoSempere
8mo
Great point.

The existence of these meta-analyses is much less convincing than you think. One, because a study of the effect of sodium reduction on blood pressure and a study of the effect of antihypertensive medications don't combine to make a valid estimate of the effect of sodium reduction on a mostly-normotensive population.

But second, because the meta-analyses are themselves mixed. A 2016 meta-meta-analysis of supposedly systematic meta-analyses of sodium reduction found 5 in favor, 3 against, and 6 inconclusive, and found evidence of biased selective citation.

I strongly disagree with the claim that sodium reduction does more good than harm; I think interventions to reduce sodium intake directly harm the people affected. This is true everywhere, but especially true in poorer countries with hot climates, where sodium-reduction programs have the greatest potential for harm.

(This is directly contrary to the position of the scientific establishment. I am well aware of this.)

The problem is that sodium is a necessary nutrient, but required intake varies significantly between people and between temperatures, because sw... (read more)

4
Joel Tan
1y
That's an interesting perspective! You're right that the scientific experts would disagree strongly on this, and to cite one of them: "While there is some controversy over the idea of a U or J-shaped curve for salt intake and cardiovascular outcomes, the more robust studies show that these use faulty evidence." Another expert adds to this, "In healthy adults, sodium is needed to sustain BP, but we don't observe a J-curve normally: there is sodium in all food, and the kidney is a great engine at holding on to sodium in low sodium settings, such that lower BP is basically almost always better." I also don't think it's accurate to say that the evidence is observational. (a) Aburto et al's (2013) meta-analysis of RCTs and prospective cohort studies shows that a reduction in sodium intake significantly reduced resting systolic blood pressure by 3.39 mm Hg; while Ettehad et al's meta-analysis, entirely of RCTs, shows that every 10 mm Hg reduction in systolic blood pressure significantly reduced the risk of major cardiovascular disease events (relative risk: 0.8), coronary heart disease (relative risk: 0.83), stroke (relative risk: 0.73) and heart failure (relative risk: 0.73), leading to a significant 13% reduction in all-cause mortality. (b) Then there is the Strazzullo et al meta-analysis of both RCTs and population studies, showing that additional sodium consumption of 1880 mg/day leads to greater risk of CVD (relative risk: 1.14). On the sweating issue (and hence the associated concerns about exercise and whether people in hot climates will be hurt) - I don't think this is an unreasonable fear a priori, but the Lucko et al meta-analysis of RCTs suggests that 93% of dietary sodium is excreted via urine, so basically that should anchor our expectations that this isn't going to be a significant way in which sodium is lost (let alone to such an extent that it has bad health consequences).
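As a rough back-of-the-envelope check, the two quoted figures can be chained; this is purely illustrative and assumes (which the meta-analyses don't directly establish) that the mortality relative risk is log-linear in the size of the blood-pressure reduction:

```python
# Illustrative arithmetic only, chaining the two figures quoted above.
# Assumption: all-cause mortality relative risk scales log-linearly with
# the systolic blood pressure reduction.
bp_drop_mmhg = 3.39      # systolic BP drop from sodium reduction (Aburto et al.)
rr_per_10mmhg = 0.87     # all-cause mortality RR per 10 mm Hg drop (Ettehad et al.)

implied_rr = rr_per_10mmhg ** (bp_drop_mmhg / 10)
print(f"implied all-cause mortality RR: {implied_rr:.3f}")  # ~0.954, i.e. ~4.6% lower
```

So on these numbers, population-level sodium reduction would imply roughly a 4-5% reduction in all-cause mortality, if the log-linear extrapolation holds.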

People hate being taxed for doing things they like

It's much worse than that; in hotter climates, salt isn't a luxury, it's basic sustenance. Gandhi wasn't being figurative when he said "Next to air and water, salt is perhaps the greatest necessity of life."

3
Stan Pinsent
1y
I think Gandhi's point nods to the British Empire's policy of heavily taxing salt as a way of extracting wealth from the Indian population. For a time this meant that salt became very expensive for poor people and many probably died early deaths linked to lack of salt. However, I don't think anyone would suggest taxing salt at that level again! Like any food tax, the health benefits of a salt tax would have to be weighed against the costs of making food more expensive. You certainly wouldn't want it so high that poor people don't get enough of it.

My understanding is that they strongly prefer you do it between 5 and 7 pm in your local timezone, so that responding officers nominally working a 9-5 schedule can collect overtime payments.

It samples unread posts from a curated list, then when that list is empty samples weighted by karma. Unfortunately, if you read posts logged out, or on a previous version of the site, those posts won't be marked as read, so they'll come up again.
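A minimal sketch of that sampling rule (the function name and data shapes are my own illustration, not the actual ForumMagnum code):

```python
import random

def pick_recommendation(curated_unread, unread):
    """Sketch of the rule described above: serve unread curated posts first;
    once the curated list is exhausted, sample the remaining unread posts
    with probability proportional to karma. Posts are (title, karma) tuples."""
    if curated_unread:
        return random.choice(curated_unread)
    weights = [max(karma, 1) for _, karma in unread]  # guard against zero weights
    return random.choices(unread, weights=weights, k=1)[0]

unread = [("Post A", 120), ("Post B", 30), ("Post C", 5)]
print(pick_recommendation([("Curated post", 80)], unread))  # always the curated post
```

The marked-as-read bug described above would correspond to already-read posts never leaving `unread`, so they keep being drawn.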

2
Larks
1y
Do you think if I clicked on every member of that list it would go away?

I didn't make that claim in the grandparent comment, and I don't know of any specific other deceptive statements in it. But, on consideration... yeah, there probably are. Most of the post is about internal details of FHI operations which I know little about and have no easy way to verify. The claim about the Apology is different in that it's easy to check; it seems reasonable to expect that if the most-verifiable part contains an overreach, then the less-verifiable parts probably do too.

In my experience, there's a pattern, in social attacks like this, where critics are persistently, consistently unwilling to restrain themselves to only making criticisms that are true, regardless of whether the true criticisms would have been enough. This is a big deal and should not be tolerated.

Are you claiming that there are other deceptive statements in this post?

reducing existential risk by .00001 percent to protect 10^18 future humans

Very-small-probability of very-large-impact is a straw man. People who think AGI risk is an important cause area think that because they also think that the probability is large.
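(For concreteness, here is how the quoted straw-man figures multiply out, reading ".00001 percent" as a probability of 10^-7; this is just the arithmetic of the quoted claim, not an endorsement of its numbers:)

```python
# Arithmetic of the quoted straw-man figures only.
p_risk_reduction = 1e-7        # ".00001 percent" expressed as a probability
future_humans = 10 ** 18       # the quoted 10^18 future humans
expected_lives = p_risk_reduction * future_humans
print(f"{expected_lives:.0f}")  # on the order of 10^11 lives in expectation
```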

6
Guy Raveh
1y
I don't see how that matters exactly? OP is talking about their effect, and I don't think any work on AI safety to date has lowered the chance of catastrophe by more than a tiny amount.

I roll to disbelieve on these numbers. "Multiple reports a week" would be >100/year, which from my perspective doesn't seem consistent with the combination of (1) the total number of reports I'm aware of being a lot smaller than that, and (2) the fact that I can match most of the cases in the Time article (including ones that had names removed) to reports I already knew about.

(It's certainly possible that there was a particularly bad week or two, or that you're getting filled in on some sort of backlog.)

I also don't believe that a law school, or any gro... (read more)

[comment deleted] 1y
[comment deleted] 1y

They aren't currently shown separately anywhere. I added it to the ForumMagnum feature-ideas repo but not sure whether we'll wind up doing it.

3
Nathan Young
1y
They are shown separately here: https://eaforum.issarice.com/userlist?sort=karma 
1
Pat Myron
1y
Is there a link to vote to show interest?

As a datum from the LessWrong side as a moderator: when the crossposting was first implemented, there were initially a bunch of crossposts that weren't doing well (from a karma perspective) and seemed to be making the site worse. To address this, we added a requirement that to crosspost from EAF to LW, you need 100 karma on LW. I believe the karma requirement is symmetrical: in order to crosspost an LW post onto EAF, you need 100 EAF karma.

The theory being, a bit of karma shows that you probably have some familiarity with the crosspost-destination site cult... (read more)

2
Ivy Mazzola
1y
This was a clever solution. I didn't know this was a thing.
1
Pat Myron
1y
While I think some threshold barrier is a good idea, I don't think the UX makes it clear that's happening. I've never been able to successfully crosspost, and just realized this is probably why.

Suppose there's a spot in a sentence where either of two synonyms would be effectively the same. That's 1 bit of available entropy. Then a spot where either a period or a comma would work; that's another bit of entropy. If you compose a message and annotate it with 48 two-way branches like this, using a notation like spintax, then you can programmatically create 2^48 effectively-identical messages. Then if you check the hash of each, you have good odds of finding one which matches the 48-bit hash fragment.
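A runnable miniature of this trick, scaled down to 16 branch points and a 12-bit fragment so the search finishes instantly (the synonym list and function names are my own illustration):

```python
import hashlib
import itertools

def variants(branches):
    """Yield every message formed by picking one option at each branch point
    (a spintax-style template flattened into tuples of alternatives)."""
    for combo in itertools.product(*branches):
        yield " ".join(combo)

def hash_fragment(message, bits):
    """Leading `bits` bits of SHA-256(message), as an integer."""
    digest = hashlib.sha256(message.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

# 16 two-way branch points -> 2^16 effectively identical messages.
branches = [
    ("big", "large"), ("fast", "quick"), ("begin", "start"), ("buy", "purchase"),
    ("end", "finish"), ("help", "aid"), ("show", "display"), ("said", "stated"),
    ("small", "little"), ("odd", "strange"), ("glad", "happy"), ("sick", "ill"),
    ("near", "close"), ("rich", "wealthy"), ("smart", "clever"), ("loud", "noisy"),
]

# Pretend this 12-bit fragment was committed to earlier; deriving it from one
# concrete phrasing guarantees the search succeeds in this demo.
target = hash_fragment(" ".join(pair[0] for pair in branches), 12)

# With 2^16 candidates and only 2^12 possible fragments, roughly 16 distinct
# messages should match the committed fragment.
matches = [m for m in variants(branches) if hash_fragment(m, 12) == target]
print(len(matches), "matching message(s) found")
```

With 48 branch points and a 48-bit fragment, the same enumeration (plus a lot more compute) gives the attack described above.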

(Fyi a hash of only 12 hex digits (48 bits) is not long enough to prevent retroactively composing a message that matches the hash-fragment, if the message is long enough that you can find 48 bits of irrelevant entropy in it.)

3
mako yass
1y
(Well, I declare that the message is very short. What would 48 bits of entropy, in grammatically and semantically correct text, look like? Edit: I guess, if I could assume I could think of 4 synonyms for every word in the paragraph, the paragraph would only have to be a bit over 24 words long for me to be able to find something. Fortunately, it's only 11 words long.)

One of the defining characteristics of EA is rejecting certain specific reasons for counting people unequally; in particular, under EA ideology, helping someone in a distant country is just as good as helping a nearby person by the same amount. Combined with the empirical fact that a dollar has a much larger effect when spent on carefully chosen interventions in poorer countries, this leads to EA emphasizing poverty-reduction programs in poor, mainly African countries, in contrast to non-EA philanthropy which tends to favor donations local to wherever ... (read more)

Yeah, no, this story is not overall plausible and I would bet at better than 50-50 odds that there's a major misrepresentation here regarding what happened. Option 1 is that a grant was approved pending due diligence, then pulled during the due diligence process. That would be mildly embarrassing, and would probably imply a grant evaluator somewhere didn't do their job, but it wouldn't be the scandal that this purports to be. Option 2 is that the letter of intent is an outright forgery.

5
Lumpyproletariat
1y
At the time of my writing this comment, the parent was at 25 karma and -31 agreement karma.  Seeing as Jim was absolutely correct, I think that the people who dismissed them out of hand should reflect on what manner of reasoning led them to do so. EDIT: posted this before I saw that Ic had already made the same point.
lc
1y

This comment turned out to be entirely correct.

8
titotal
1y
I think this would be a lot more than "mildly embarrassing". It's an effective altruism organisation. They should not have had to wait for the "due diligence" phase to understand why donating a hundred grand to a far-right newspaper is not an effective cause. Either someone is approving grants (and telling the grantees they have funding) without so much as doing a cursory Google search on the grantees, or someone knew it was a Nazi newspaper and still thought it was a worthy effective cause. It really doesn't help the case that Tegmark's brother apparently wrote stories for the newspaper in question, so he at least could be expected to know what it is. If the letter is genuine (and they have never denied that it is), then someone at FLI is either grossly incompetent or malicious. They need to address this ASAP.

As a Swede who is somewhat familiar with the publication Expo, I would maybe put the risk of forgery of that document at <5%. They are specifically known for their investigative journalism, and I would be very surprised if they screwed up something basic like that.

Also, wouldn't it be extremely strange behavior from FLI if that document actually was a forgery? That would be the go-to defense, rather than what they are doing now.

Lots of the comments here are pointing at details of the markets and whether it's possible to profit off of knowing that transformative AI is coming. Which is all fine and good, but I think there's a simple way to look at it that's very illuminating.

The stock market is good at predicting company success because there are a lot of people trading in it who think hard about which companies will succeed, doing things like writing documents about those companies' target markets, products, and leadership. Traders who do a good job at this sort of ana... (read more)

[anonymous] 1y

The claim in the post (which I think is very good) is that we should have a pretty strong prior against anything which requires positing massive market inefficiency on any randomly selected proposition where there is lots of money on the table. This suggests that you should update away from very short timelines. There's no assumption that markets are a "mystical source of information", just that if you bet against them you almost always lose.

There's also a nice "put your money where your mouth is" takeaway from the post, which AFAIK few short-timelines people are doing.

5
Yonatan Cale
1y
(Even if for some reason you're wrong for the case of transformative AI specifically, your comment still made me smarter, so thanks! :) )

It doesn't seem all that relevant to me whether traders have a probability like that in their heads. Whether they have a low probability or are not thinking about it, they're approximately leaving money on the table in a short-timelines world, which should be surprising. People have a large incentive to hunt for important probabilities they're ignoring.

Of course, there are examples (cf. behavioral economics) of systemic biases in markets. But even within behavioral economics, it's fairly commonly known that it's hard to find ongoing, large-scale biases in financial markets.

I think a fair number of market participants may have something like a probability estimate for transformative AI within five years and maybe even ten. (For example back when SoftBank was throwing money at everything that looked like a tech company, they justified it with a thesis something like "transformative AI is coming soon", and this would drive some other market participants to think about the truth of that thesis and its implications even if they wouldn't otherwise.) But I think you are right that basically no market participants have a probability... (read more)

I find it hard to believe that the number of traders who have considered crazy future AI scenarios is negligible. New AI models, semiconductor supply chains, etc. have gotten lots of media and intellectual attention recently. Arguments about transformative AGI are public. Many people have incentives to look into them and think about their implications.

I don't think this post is decisive evidence against short timelines. But neither do I think it's a "trap" that relies on fully swallowing EMH. I think there're deeper issues to unpack here about why much of the world doesn't seem to put much weight on AGI coming any time soon.

Definitely agree with this. Consider for instance how markets seemed to react strangely / too slowly to the emergence of the Covid-19 pandemic, and then consider how much more familiar and predictable the idea of a viral pandemic is compared to the idea of unaligned AI:

The coronavirus was x-risk on easy mode: a risk (global influenza pandemic) warned of for many decades in advance, in highly specific detail, by respected & high-status people like Bill Gates, which was easy to understand with well-known historical precedents, fitting into s

... (read more)

Narrow the thought experiment to "cancer that banks aren't able to find out about" and the thought experiment goes through fine. And US institutions are strongly supportive of secrecy, in general, so I think this is actually the typical case (at least for people who are young enough that seeking a large loan is not itself suspicious).

That does not get the thought experiment through. 

Mortgage rates for older people are higher. And if mortgage holders die, the mortgage must still be paid by the executor of an estate, which is a disincentive for anyone with a bequest motive. 

I'm sure that we can find some corner case where young cancer victims with no friends/family or no regard for their friends/family act otherwise. But this hardly seems important for the point that you -- yes, you -- can make money by implementing the trades suggested in this piece. Which is the claim that Yudkowsky is using the cancer victim analogy to argue against.

I don't have an inside view on specific homeless charities in SF, but I do have an outside-view impression that the amount of money going in, contrasted with the results being achieved, implies that something is more-wrong-than-usual. That is, I think money is probably not just being spent inefficiently, but outright embezzled. It should be possible to identify individual charities to donate to doing good work, but EA's usual charity-oversight methodology is typically aimed at catching dumb ideas, not at catching outright fraud.

If a charity asks a donor for financial audits, the response is going to be: We don't do paperwork for you. You do paperwork for us. Followed by giving the donation to someone else.

4
Miguel
1y
No Jim, financial audits are done quarterly, semi-annually, or annually. Non-profit orgs don't need to request financial audits for them - if the donor is complying with IFRS or American standards, they just need to submit or share the most recent copy of the Audited Financial Statements. If they do not have it, that is a major red flag.

I think the EA community should have an organization full of private investigators and forensic accountants tucked away somewhere, with a broad mandate to look for problems focused on EA-connected places.

But FTX is a for-profit corporation. The fact that they're making donations doesn't give anyone the power to impose accounting standards on them; that capability is only held by government institutions with power of subpoena.

4
Miguel
1y
My proposition is for prevention of future fraud. Internal audit reviews of policies and procedures for the acquisition of grants/donations would, I believe, become a good safeguard if any EA org is assessing where the money is coming from. Requiring regular Audited Financial Statements from donors, to be assessed by management, doesn't seem illegal or to need government approval. Investigations after issues have taken place seem like too late a reaction in my opinion. Prevention through regular checks and balances is still the best approach.

I heard the same claim, from a different source: that SBF did something unethical at Alameda Research prior to founding FTX, that some EAs had left Alameda saying that SBF was unethical and no one should work with him, and that there were privately circulated warnings to this effect. (The person I heard this from hasn't spoken publicly about it yet as far as I know. They are someone with no previous or current involvement with FTX or Alameda Research, who I think is reporting honestly and is well positioned to have heard such things.)

(EDIT: others along the rumor-path via which I heard this have now spoken on this thread, in greater detail than I have; so this comment is a duplicate report and should not be counted.)

The story of how it got that way is that agree/disagree was originally built as an experiment-with-voting-systems feature, with the key component of that being that different posts can have different voting systems without conflict. (See eg this thread for another voting system we tried.)

The main reason for hesitation (other ForumMagnum developers might not agree) is that I'm not really convinced that 2-axis voting is the right voting system, and expanding it from a posts-have-different-voting-systems context to a whole-site-is-2-axis context limits the op... (read more)

Phil Torres is not currently a deadname. A deadname is a name that someone is no longer using in their public persona, but the name Phil is displayed prominently on their web page. Searching Amazon for Phil Torres finds their books, searching Amazon for Emile Torres does not.

Moreover, it's basically impossible to understand what's going on here without knowing that Phil and Emile are the same person, and asking the original poster to avoid mentioning the name-mapping is asking them to obfuscate.

-5
Davidmanheim
2y

Erring in the direction of they/them is fine, but I object to pronoun-policing when it's done on another person's behalf, and the pronoun that was used is one that the person is currently advertising as correct in any prominent place (such as at the bottom of this page).

9
[anonymous]
2y
The person in question is banned from this forum, is what I gather; is that not correct? So they are completely unable to chime in as we all so graciously debate what is or isn't allowed for them. I mean, we could literally write a textbook on the concept of the other while we're at it, I suppose, or we could just err on the side of caution, as we should do in all circumstances concerning how we choose to exert power over others or not, no?

Phil/Émile changed name, but did not change pronouns. A Facebook post I saw indicated that the name change was to avoid confusion with a different Phil Torres, who is an entomologist. While their Twitter profile specifies they/them pronouns, their Facebook profile says he/him (both profiles have the updated name). I think under any reasonable etiquette standard, that means either pronoun is acceptable unless they directly say otherwise.

1
[comment deleted]
2y
2
[anonymous]
2y
Their twitter profile, which is what is being posted here, uses they/them. I see absolutely no reason to not err on the side of caution, do you? This OP also used their deadname in place of their name and continues to use their deadname in a "formerly known as" context, which is generally not acceptable unless explicitly noted as such. And I corrected the OP on this too; they haven't changed it.  PS, I am queer, nonbinary. If someone with greater personal experience wants to chime in here, please go ahead, I would love to defer. I had zero expectation that I'd have to have these discussions on a forum for altruists, nor that I would basically be cyberbullied for correcting people with directness (this is so dumb)...

If your group hasn't done the Petrov Day ritual, this is a good place to start. (There are several variants to choose from, and it's a living tradition, so making your own variant is encouraged, though obviously not required.)

22. If successful, in five years the impact of our project will be... 

Eighty percent of California water utilities will be implementing leading water efficiency programmatic practices to bring down the water consumption of urban areas and we are able to implement these analytics in any area of the world faced with an aridifying climate.

I was under the impression that California's water problems are almost entirely agricultural, meaning that improving urban-area water use in particular won't help because that's not where the water is going. I'm not ent... (read more)

1
Locke
2y
Urban and Ag not as separable as one might think. Urban areas need to eat.  Also note the way water rights are done it would take a massive political earthquake to really change some of the underlying assumptions there.  We really should be doing both/and rather than just saying "hey ag do more because you're a bigger part of the problem." Here's a good intro to the water supply picture for the EA crowd: https://slatestarcodex.com/2015/05/11/california-water-you-doing/

If there's an arms race dynamic, it's probably a disaster no matter who wins. Having room to delay for late-stage alignment experiments is the barest minimum requirement in order for humanity to have any chance of survival. So the best case is to not have an arms race at all. The next-best thing is for the organization that wins to be the sort of organization that could stop at the brink for late-stage alignment research, if its leader decided to, and for it to have a stable leader who's sane enough to make that decision. Then maximize the size of the gap ... (read more)

I don't think biodiversity is good, in fact it's probably bad. If we replaced natural ecosystems with curated ones, we'd have much less of a problem with zoonotic transmissions creating new diseases and pests, probably better nature aesthetics, and maybe some ability to use ecosystems to remove pollutants that are currently hard to get rid of.

It's important to remember that most of the US population was exposed to a significant amount of environmentalist propaganda as children, before they were able to think critically, and that there were falsehoods embedded in that propaganda. Ecosystems do not spontaneously turn into wastelands when they're perturbed; they mostly turn into boring forests and things like that.

2
Guy Raveh
2y
Strongly downvoted. I think there are so many bad assumptions going into this comment. From thinking the short list of issues you mentioned are the only, or most important, thing diverse ecosystems provide, to thinking we could reliably plan and run them when we don't even know most species involved in them, to the idea that natural aesthetics are somehow inferior to human aesthetics? Not to mention thinking smart people are only worrying about biodiversity because of claimed partly-false propaganda in one country.

Too abstract. Second-order effects are mostly not mysterious, they're things which you can predict, not perfectly but usually well enough, if you look at the right parts of the world and apply some economics. If someone's arguing against an intervention because they think the intervention will have bad second-order effects, then the followup question is whether those effects are real and how big they are. Answering that means looking at the details.

That said, in my experience, if you come across an argument between two people, and one person is saying Something Must Be Done, and the other person is saying You Fool That Will Backfire For Reasons I Will Explain, the second person is almost always right.

I think this is a decent idea given a small reframe. Rather than thinking of it as earmarking the cash for a specific purpose, treating it like an unenforced restriction, instead think of the cash transfers as having an opportunity to provide information attached, and try to provide good information. Ie, instead of "this cash transfer is for X", say "this cash transfer comes with a small pamphlet with several purchase ideas X,Y,Z". This framing is more cooperative, and fails more gracefully if the recommendations are bad.

1
brb243
2y
oooh! yes, there are all the options you can invest, if you get this you get bunch of great preventive healthcare, or this! - you see clean air further healthcare - what about fortified flour - mmm micronutrients - education for a child - would have probably thought of that already but sit with them to study makes a difference - chlorine for clean water is a great deal - oh a bednet - travel to a clinic - would have thought about it but maybe a list of times when that can be an especially great idea - etc (ok got the idea)

Conventional wisdom in the business world is that brick-and-mortar retail (and brick-and-mortar books in particular) is a declining business, because it can't compete effectively with online stores. So I'm really skeptical of whether this business is financially viable enough to survive without continuous infusions of external cash, let alone with enough slack to do things that aren't profit motivated.

What that means in practice is you haven't actually pinned the cost down to the right order of magnitude. Neither of the business sales you mentioned is compar... (read more)

2
Benjamin_Todd
2y
Good point the operating losses could add up to arbitrarily high amounts. Not being able to publish under the brand also seems like maybe a deal breaker.

How does XR weigh costs and benefits?
Does XR consider tech progress default-good or default-bad?

The core concept here is differential intellectual progress. Tech progress can be bad if it reorders the sequence of technological developments to be worse, by making a hazard precede its mitigation. In practice, that applies mainly to gain-of-function research and to some, but not all, AI/ML research. There are lots of outstanding disagreements between rationalists about which AI/ML research is good vs bad, which, when zoomed in on, reveal disagreements about A... (read more)

I think the common factor, among forms of advice that people are hesitant to give, is that they involve some risk. So if, for example, I recommend a supplement and it causes a health problem, or I recommend a stock and it crashes, there's some worry about blame. If the supplement helps, or the stock rises, there's some possibility of getting credit; but, in typical social relationships, the risk of blame is a larger concern than the possibility of credit, which makes people more than optimally hesitant.

I was somewhat confused by the scale using Categorizing Variants of Goodhart's Law as an example of a 100mQ paper, given that the LW post version of that paper won the 2018 AI Alignment Prize ($5k), which makes a pretty strong case for it being "a particularly valuable paper" (1Q, the next category up). I also think this scale significantly overvalues research agendas and popular books relative to papers. I don't think these aspects of the rubric wound up impacting the specific estimates made here, though.

2
JackM
3y
I'm not sure on the exact valuation research agendas should get, but I would argue that well thought-through research agendas can be hugely beneficial in that they can reorient many researchers in high-impact directions, leading them to write papers on topics that are vastly more important than they might have otherwise chosen. I would argue an 'ingenious' paper written on an unimportant topic isn't anywhere near as good as a 'pretty good' paper written on a hugely important topic.
2
NunoSempere
3y
Yes, the scale is under construction, and you're not the first person to mention that the specific research agenda mentioned is overvalued.
  • From people I know that have gotten vaccines in the Bay, it sounds like appointments have been booked quickly after being posted / there aren’t a bunch of openings.

This was true in February, but I think it's no longer true, due to a combination of the Johnson & Johnson vaccine being added and the currently eligible groups being mostly done. Berkeley Public Health sent me this link, which shows hundreds of available appointment slots over the next few days at a dozen different Bay Area locations.

(EDIT: See below, the map I linked to may be mixing vacci... (read more)

1
tcheasdfjkl
3y
There are not currently a bunch of openings (probably because eligibility just expanded).
3
ao
3y
I don't see anything at that link now. I expect those openings were taken? I'm in Facebook groups where people in California and the Bay Area specifically are searching for appointments (and leftover vaccines); it seems pretty difficult for eligible people to find an appointment.
1
billzito
3y
[Edit: the link appears to be misleading, see my follow-up question below] That does seem quite compelling, thanks for sharing. I think I'll check again in a couple days, as it sounds like CA is opening up slots to 4.4M more people tomorrow. Perhaps right at the start of a newly eligible group, there are some extra slots?

The core thesis here seems to be:

I claim that [cluster of organizations] have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact. 

There are different ways of unpacking this, so before I respond I want to disambiguate them. Here are four different unpackings:

  1. Tight feedback loops are important, [cluster of organizations] could be doing a better job creating them, and this is a priority. (I agree with this. Reality doesn't grade on a curve.)
  2. Tight feedback loops are impor
... (read more)

Thank you for this thoughtful reply! I appreciate it, and the disambiguation is helpful. (I would personally like to do as much thinking-in-public about this stuff as seems feasible.)

I mean a combination of (1) and (4). 

I used to not believe that (4) was a thing, but then I started to notice (usually unconscious) patterns of (4) behavior arising in me, and as I investigated further I kept noticing more & more (4) behavior in me, so now I think it's really a thing (because I don't believe that I'm an outlier in this regard).

 

(4) is the interes

... (read more)

Should competent EAs be pursuing local political offices?

There's no simple yes or no answer. A) Competence is multi-dimensional, and B) there are some types of competencies that would make me encourage someone to do other things instead of running for office.

There are also several other factors besides competency that go into whether someone is a good fit for running for office, among them things like personal history, location, temperament, and the badness of whomever is currently occupying the office in question.

I think some EAs should pursue local political office, and who those EAs are should be deter... (read more)

Looking at ads and introducing ads into your environment is not free, it's mildly harmful. If you offered me 1 cent per ad to display ads in my browser, I would refuse. The money going to charity doesn't change that.

LessWrong has a sidebar which makes the link to All Posts much more prominent; it looks like EA Forum hasn't adopted that yet, but it would probably help.

Were you under the impression that I was disagreeing with the sodium-reduction guidelines because I was merely unaware that they existed? This is an area of considerable controversy.

6
Hauke Hillebrandt
5y
No, my model of your view is that you were aware of the guidelines, but believe that sodium-reduction guidelines are on net harmful. Am I correct? Both too little and too much salt are bad, but based on the two more recent meta-analyses I linked above, which deal with the controversy in the Ioannidis article you linked, I think the WHO salt-reduction guidelines are on net good. As a rule, public health messaging should tell people to watch their salt intake to reduce their blood pressure, because:

  • average salt consumption is much higher than the WHO recommends
  • it will likely increase due to profit motives absent policy interventions
  • many more people with high blood pressure will benefit than people with low blood pressure will be harmed by adopting a very-low-sodium diet on the basis of sodium-reduction guidelines
Quitting smoking, alcohol, salt, and sugar is also hard–they are quite addictive.

For most people, cutting salt intake is harmful, not helpful. Salt isn't new to human diets, and it isn't a matter of addiction; it's just a necessary nutrient.

Sugar can be harmful, but only insofar as it crowds out other calorie sources which are better. When people try to cut sugar, they often fail (and mildly harm themselves) because they neglect to replace it.

3
Hauke Hillebrandt
5y
I agree that, if sustained throughout the lifecourse, moderate consumption of salt and sugar is not harmful. I wrote this sentence with metabolic syndrome in mind, which affects very many people as they get older.

On salt: I agree that salt is essential and not new to human diets, and that for the majority of people reducing sodium by a lot is harmful. However, many people have high blood pressure and should avoid excessive sodium consumption [see study, study]. Also, many scholars argue that salt can be described as addictive [see 'Salt addiction hypothesis'], and some implicate it in making food hyperpalatable (also see 'The Hungry Brain' by a former OPP consultant).

On sugar: the WHO recommends a reduced intake of free sugars throughout the lifecourse. I'm not sure what you mean by people harming themselves (do you mean messing with their basal metabolic rate? I think that only happens in extreme cases, not when, say, just cutting out sugar-sweetened beverages). Thinking about this in terms of the reversal test, recommending increasing sugar intake (which is happening anyway) does not make sense to me on average.

Post-mortem donation is fine, but being asked to sign up for kidney donation would be severely trust-destroying for me.

This happens to posts by accounts which have never posted before; established accounts (at least one post or comment) don't have to wait. This was instituted on both LW and EA Forum because of a steady stream of bot-generated spam.

That doesn't seem especially relevant to the question of whether first-world consumers should buy farmed or wild-caught fish; the amount caught from fisheries is set by regulations, not by demand, so consumer demand does not, on the margin, increase or decrease overfishing.

I doubt this makes a difference. Most of the market treats farmed and wild-caught fish as close substitutes, the supply of wild-caught fish is inelastic, and the supply of farmed fish is highly elastic. So if you switch from farmed to wild-caught fish, you are probably affecting market prices in a way which causes one other person to make the opposite change.
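This substitution argument can be sketched with a deliberately crude market model (the quota figure and the rationing rule are illustrative assumptions, not estimates of the real fishery):

```python
WILD_QUOTA = 100.0  # wild catch is fixed by regulation, not by demand

def market_outcome(my_wild_demand, others_wild_demand):
    """With inelastic wild supply, total wild consumption is capped at
    the quota; excess demand spills over (via prices) onto elastic
    farmed supply."""
    total_wild_demand = my_wild_demand + others_wild_demand
    wild_consumed = min(total_wild_demand, WILD_QUOTA)
    # Demand that can't be met by the wild catch is met by farmed fish.
    farmed_consumed = total_wild_demand - wild_consumed
    return wild_consumed, farmed_consumed

# Baseline: demand for wild fish already exceeds the quota.
base = market_outcome(0.0, 120.0)     # (100.0, 20.0)
# I switch one unit of my consumption from farmed to wild:
shifted = market_outcome(1.0, 120.0)  # (100.0, 21.0)
# Wild consumption is unchanged; one more unit of demand is pushed
# onto farmed fish for someone else.
```

The point the sketch makes is just that when one side of the market is supply-capped, individual demand shifts between the two goods change who eats what, not how much of the capped good is produced.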

5
Cullen
5y
Brian Tomasik has written about this here:
7
Avi Norowitz
5y
I agree with Jim's comment above. As the graph here suggests, the supply of wild fish appears to have been flat since the 90s, and the increase in demand has been met by the supply of farmed fish. So I think it's likely that consumption of wild fish will just cause someone else to consume farmed fish instead. With regard to fish oil: Most of it originates from small wild fish such as anchovies. There's an entire industry dedicated to harvesting fish oil and fishmeal, and most of it is used as feed for carnivorous farmed fish like salmon. Fish oil seems to be mostly supply constrained as well, and the aquaculture industry is responding by feeding carnivorous fish more plant oils. I've written about this here and here, and should probably move these to the EA Forum now that less polished posts are encouraged.

There are three additional premises required here. The first is that your own use of funds from investments must be significantly better than that of the other shareholders of the companies you invest in. The second is that the growth rate of the companies you invest in must exceed the rate at which the marginal cost of doing good increases, due to low-hanging fruit getting picked and due to lost opportunities for compounding. The third is that the growth potential of AI companies isn't already priced in, in a way that reduces your expected returns to be no better than index funds.

The first of these premises is probably true. The second is probably false. The third is definitely false.
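The second and third premises can be framed as a single ratio (the rates below are illustrative assumptions, not estimates):

```python
def give_later_multiplier(annual_return, cost_growth, years):
    """Good done by investing for `years` and then donating, relative
    to donating now, when the marginal cost of doing good grows at
    `cost_growth` per year. >1 favors investing; <1 favors giving now."""
    return ((1 + annual_return) / (1 + cost_growth)) ** years

# If returns merely match the index (premise 3 false) while low-hanging
# fruit disappears at 10%/year (premise 2 false), waiting loses:
give_later_multiplier(0.07, 0.10, 20)  # ~0.57: donate now
# Waiting only wins if returns genuinely outpace the rising cost:
give_later_multiplier(0.15, 0.05, 20)  # ~6.2: invest first
```

The whole question reduces to which side of 1 that ratio lands on, which is why the truth of premises two and three matters so much.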

1
Milan_Griffes
5y
Michael Dickens engages with something similar in this post. In the case of transformative, slow-takeoff AI driven by for-profit companies, it seems reasonable to assume that the economy is going to grow faster than the marginal cost of doing good, because gains from AI seem unlikely to be evenly distributed. I'm unsure whether AI company growth is adequately priced in or not. If it is, I think the argument still holds. The returns from an index fund could be very high in the case of transformative AI, so holding index funds would probably be better than donating now in that case. See also the discussion here & here.