jimrandomh

Help me find the crux between EA/XR and Progress Studies

How does XR weigh costs and benefits?
Does XR consider tech progress default-good or default-bad?

The core concept here is differential intellectual progress. Tech progress can be bad if it reorders the sequence of technological developments for the worse, by making a hazard arrive before its mitigation. In practice, that applies mainly to gain-of-function research and to some, but not all, AI/ML research. There are lots of outstanding disagreements among rationalists about which AI/ML research is good vs. bad, which, when zoomed in on, turn out to be disagreements about AI timeline and takeoff forecasts, and about the feasibility of particular AI-safety research directions.

Progress in medicine (especially aging- and cryonics-related medicine) is seen very positively (though there's a deep distrust of the existing institutions in this area, which bottoms out in a lot of rationalists doing their own literature reviews and wishing they could do their own experiments).

On a more gut/emotional level, I would plug my own Petrov Day ritual as attempting to capture the range of it: it's a mixed bag with a lot of positive bits, and some terrifying bits, and the core message is that you're supposed to be thinking about both and not trying to oversimplify things.

What would moral/social progress actually look like?

This seems like a good place to mention Dath Ilan, Eliezer's fictional universe which is at a much higher level of moral/social progress, and the LessWrong Coordination/Cooperation tag, which has some research pointing in that general direction.

What does XR think about the large numbers of people who don't appreciate progress, or actively oppose it?

I don't think I know enough to speak about the XR community broadly here, but as for me personally: mostly frustrated that their thinking isn't granular enough. There's a huge gulf between saying "social media is toxic" and saying "it is toxic for the closest thing to a downvote button to be reply/share", and I try to tune out/unfollow the people whose writings say things closer to the former.

Being Vocal About What Works

I think the common factor among forms of advice that people are hesitant to give is that they involve some risk. So if, for example, I recommend a supplement and it causes a health problem, or I recommend a stock and it crashes, there's some worry about blame. If the supplement helps, or the stock rises, there's some possibility of getting credit; but, in typical social relationships, the risk of blame is a larger concern than the possibility of credit, which makes people more hesitant than is optimal.
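To make the asymmetry concrete, here is a toy expected-value sketch (all numbers are invented for illustration, not taken from the post): advice can be clearly positive-expected-value for the recipient while still being negative-expected-value for the advisor, once blame is weighted more heavily than credit.

```python
# Toy illustration (numbers invented): why advice that is positive expected
# value for the recipient can still be negative expected value for the
# advisor, if blame looms larger than credit.

p_good = 0.7          # chance the supplement/stock works out
recipient_gain = 10   # benefit to the recipient if it works
recipient_loss = 5    # harm to the recipient if it doesn't

advisor_credit = 1    # small social credit to the advisor if it works
advisor_blame = 4     # larger social penalty to the advisor if it doesn't

ev_recipient = p_good * recipient_gain - (1 - p_good) * recipient_loss
ev_advisor = p_good * advisor_credit - (1 - p_good) * advisor_blame

print(f"EV for recipient: {ev_recipient:+.2f}")  # +5.50: worth recommending
print(f"EV for advisor:   {ev_advisor:+.2f}")    # -0.50: advisor stays quiet
```

Under these made-up weights, the blame-averse advisor stays quiet even though speaking up would help the recipient on net.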

Relative Impact of the First 10 EA Forum Prize Winners

I was somewhat confused by the scale using Categorizing Variants of Goodhart's Law as an example of a 100mQ paper, given that the LW post version of that paper won the 2018 AI Alignment Prize ($5k), which makes a pretty strong case for it being "a particularly valuable paper" (1Q, the next category up). I also think this scale significantly overvalues research agendas and popular books relative to papers. I don't think these aspects of the rubric wound up impacting the specific estimates made here, though.

When to get a vaccine in the Bay Area as a young healthy person
  • From people I know that have gotten vaccines in the Bay, it sounds like appointments have been booked quickly after being posted / there aren’t a bunch of openings.

This was true in February, but I think it's no longer true, due to a combination of the Johnson & Johnson vaccine being added and the currently-eligible groups being mostly done. Berkeley Public Health sent me this link, which shows hundreds of available appointment slots over the next few days at a dozen different Bay Area locations.

(EDIT: See below, the map I linked to may be mixing vaccine and PCR-test appointments together in a way that confused me.)

The core thesis here seems to be:

I claim that [cluster of organizations] have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact. 

There are different ways of unpacking this, so before I respond I want to disambiguate them. Here are four different unpackings:

  1. Tight feedback loops are important, [cluster of organizations] could be doing a better job creating them, and this is a priority. (I agree with this. Reality doesn't grade on a curve.)
  2. Tight feedback loops are important, and [cluster of organizations] is doing a bad job of creating them, relative to organizations in the same reference class. (I disagree with this. If graded on a curve, we're doing pretty well.)
  3. Tight feedback loops are important, but [cluster of organizations] has concluded in their explicit verbal reasoning that they aren't important. (I am very confident that this is false for at least some of the organizations named, where I have visibility into the thinking of decision makers involved.)
  4. Tight feedback loops are important, but [cluster of organizations] is implicitly deprioritizing and avoiding them, by ignoring/forgetting discouraging information, and by incentivizing positive narratives over truthful narratives.

(4) is the interesting version of this claim, and I think there's some truth to it. I also think that this problem is much more widespread than just our own community, and fixing it is likely one of the core bottlenecks for civilization as a whole.

I think part of the problem is that people get triggered into defensiveness; when they mentally simulate (or emotionally half-simulate) setting up a feedback mechanism, if that feedback mechanism tells them they're doing the wrong thing, their anticipations put a lot of weight on the possibility that they'll be shamed and punished, and not much weight on the possibility that they'll be able to switch to something else that works better. I think these anticipations are mostly wrong; in my anecdotal observation, the actual reaction organizations get to poor results followed by a pivot is usually positive about the pivot, at least from the people who matter. But getting people who've internalized a prediction of doom and shame to surface those models, and do things that would make the outcome legible, is very hard.

(Meta: Before writing this comment I read your post in full. I have previously read and sat with most, but not all, of the posts linked to here. I did not reread them during the same sitting I read this comment.)

AMA: Elizabeth Edwards-Appell, former State Representative

Should competent EAs be pursuing local political offices?

Support AMF in tab 4 a cause so it reaches its goal.

Looking at ads and introducing ads into your environment is not free; it's mildly harmful. If you offered me 1 cent per ad to display ads in my browser, I would refuse. The money going to charity doesn't change that.

I find this forum increasingly difficult to navigate

LessWrong has a sidebar which makes the link to All Posts much more prominent; it looks like EA Forum hasn't adopted that yet, but it would probably help.

The most cost-efficient way to convert money into personal health

Were you under the impression that I was disagreeing with the sodium-reduction guidelines because I was merely unaware that they existed? This is an area of considerable controversy.

The most cost-efficient way to convert money into personal health
Quitting smoking, alcohol, salt, and sugar is also hard–they are quite addictive.

For most people, cutting salt intake is harmful, not helpful. Salt isn't new to human diets, and it isn't a matter of addiction; it's just a necessary nutrient.

Sugar can be harmful, but only insofar as it crowds out other calorie sources which are better. When people try to cut sugar, they often fail (and mildly harm themselves) because they neglect to replace it.
