
Comments

How to approach the dilemma between the law of unintended consequences and the consequences of nonaction?

Too abstract. Second-order effects are mostly not mysterious; they're things you can predict, not perfectly but usually well enough, if you look at the right parts of the world and apply some economics. If someone argues against an intervention because they think it will have bad second-order effects, the follow-up question is whether those effects are real and how big they are. Answering that means looking at the details.

That said, in my experience, if you come across an argument between two people, and one person is saying Something Must Be Done, and the other person is saying You Fool That Will Backfire For Reasons I Will Explain, the second person is almost always right.

Labeling cash transfers to solve charcoal-related problems?

I think this is a decent idea, given a small reframe. Rather than earmarking the cash for a specific purpose and treating that as an unenforced restriction, think of the cash transfers as coming with an opportunity to attach information, and try to attach good information. I.e., instead of "this cash transfer is for X", say "this cash transfer comes with a small pamphlet with several purchase ideas: X, Y, Z". This framing is more cooperative, and it fails more gracefully if the recommendations are bad.

New EA Cause Area: Run Blackwell's Bookstore

Conventional wisdom in the business world is that brick-and-mortar retail (and brick-and-mortar books in particular) is a declining business, because it can't compete effectively with online stores. So I'm really skeptical that this business is financially viable without continuous infusions of external cash, let alone that it has enough slack to do things that aren't profit-motivated.

What that means in practice is that you haven't actually pinned the cost down to the right order of magnitude. Neither of the business sales you mentioned is comparable: B&N is an online store and an eReader brand, and Books-A-Million was sold in 2014 and appears to have since diversified into a lot of other businesses. More importantly, the main cost isn't the sale price; it's taking responsibility for the operational losses. This doesn't tell me what order of magnitude that cost will be.

Building a publisher could be a thing, but owning this retail chain is strictly negative for that. You definitely aren't getting the relevant trademarks out of the deal and will never be able to publish under the brand, unless you separately buy the trademarks from the 2007 buyer; and if you're going that route, you're shopping for a publishing house, not a bookstore chain.

(Copy-pasted from pre-publication comments on a Google Doc)

Help me find the crux between EA/XR and Progress Studies

How does XR weigh costs and benefits?
Does XR consider tech progress default-good or default-bad?

The core concept here is differential intellectual progress. Tech progress can be bad if it reorders the sequence of technological developments for the worse, by making a hazard precede its mitigation. In practice, that applies mainly to gain-of-function research and to some, but not all, AI/ML research. There are lots of outstanding disagreements between rationalists about which AI/ML research is good vs. bad; zoomed in on, these turn out to be disagreements about AI timeline and takeoff forecasts, and about the feasibility of particular AI-safety research directions.

Progress in medicine (especially aging- and cryonics-related medicine) is seen very positively (though there's a deep distrust of the existing institutions in this area, which bottoms out in a lot of rationalists doing their own literature reviews and wishing they could do their own experiments).

On a more gut/emotional level, I would point to my own Petrov Day ritual as an attempt to capture the range of it: a mixed bag with a lot of positive bits and some terrifying bits, where the core message is that you're supposed to think about both and not try to oversimplify things.

What would moral/social progress actually look like?

This seems like a good place to mention Dath Ilan, Eliezer's fictional* universe that is at a much higher level of moral/social progress, and the LessWrong Coordination/Cooperation tag, which collects research pointing in that general direction.

What does XR think about the large numbers of people who don't appreciate progress, or actively oppose it?

I don't think I know enough to speak for the XR community broadly here, but as for me personally: I'm mostly frustrated that their thinking isn't granular enough. There's a huge gulf between saying "social media is toxic" and saying "it is toxic for the closest thing to a downvote button to be reply/share", and I try to tune out/unfollow the people whose writing stays closer to the former.

Being Vocal About What Works

I think the common factor, among forms of advice that people are hesitant to give, is that they involve some risk. If, for example, I recommend a supplement and it causes a health problem, or I recommend a stock and it crashes, there's some worry about blame. If the supplement helps, or the stock rises, there's some possibility of getting credit; but in typical social relationships, the risk of blame is a larger concern than the possibility of credit, which makes people more hesitant than is optimal.
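To make the asymmetry concrete, here's a toy expected-value sketch (every number below is invented for illustration, not an estimate): even when the advice is probably good, the advisor's expected social payoff can come out negative.

```python
# Toy model of the advice-giver's incentives (all numbers are made up).
p_good = 0.7          # chance the recommendation works out (assumed)
credit_if_good = 1.0  # social credit the advisor gets (arbitrary units)
blame_if_bad = 4.0    # social blame if it backfires; weighted above credit

# The advisee may come out ahead in expectation, but the advisor's
# social ledger is negative, so the advice tends not to get given.
advisor_ev = p_good * credit_if_good - (1 - p_good) * blame_if_bad
print(f"advisor's expected social payoff: {advisor_ev:+.2f}")  # -0.50
```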

Relative Impact of the First 10 EA Forum Prize Winners

I was somewhat confused by the scale using Categorizing Variants of Goodhart's Law as an example of a 100mQ paper, given that the LW post version of that paper won the 2018 AI Alignment Prize ($5k), which makes a pretty strong case for it being "a particularly valuable paper" (1Q, the next category up). I also think this scale significantly overvalues research agendas and popular books relative to papers. I don't think these aspects of the rubric wound up impacting the specific estimates made here, though.

When to get a vaccine in the Bay Area as a young healthy person
  • From people I know that have gotten vaccines in the Bay, it sounds like appointments have been booked quickly after being posted / there aren’t a bunch of openings.

This was true in February, but I think it's no longer true, due to a combination of the Johnson & Johnson vaccine being added and the currently-eligible groups being mostly done. Berkeley Public Health sent me this link, which shows hundreds of available appointment slots over the coming days at a dozen different Bay Area locations.

(EDIT: See below, the map I linked to may be mixing vaccine and PCR-test appointments together in a way that confused me.)

The core thesis here seems to be:

I claim that [cluster of organizations] have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact. 

There are different ways of unpacking this, so before I respond I want to disambiguate them. Here are four different unpackings:

  1. Tight feedback loops are important, [cluster of organizations] could be doing a better job creating them, and this is a priority. (I agree with this. Reality doesn't grade on a curve.)
  2. Tight feedback loops are important, and [cluster of organizations] is doing a bad job of creating them, relative to organizations in the same reference class. (I disagree with this. If graded on a curve, we're doing pretty well.)
  3. Tight feedback loops are important, but [cluster of organizations] has concluded in their explicit verbal reasoning that they aren't important. (I am very confident that this is false for at least some of the organizations named, where I have visibility into the thinking of decision makers involved.)
  4. Tight feedback loops are important, but [cluster of organizations] is implicitly deprioritizing and avoiding them, by ignoring/forgetting discouraging information, and by incentivizing positive narratives over truthful narratives.

(4) is the interesting version of this claim, and I think there's some truth to it. I also think that this problem is much more widespread than just our own community, and fixing it is likely one of the core bottlenecks for civilization as a whole.

I think part of the problem is that people get triggered into defensiveness: when they mentally simulate (or emotionally half-simulate) setting up a feedback mechanism, their anticipations put a lot of weight on the possibility that, if the feedback mechanism tells them they're doing the wrong thing, they'll be shamed and punished, and not much weight on the possibility that they'll be able to switch to something else that works better. I think these anticipations are mostly wrong; in my anecdotal observation, when an organization gets poor results and then pivots, the reaction, at least from the people who matter, is usually positive about the pivot. But getting people who've internalized a prediction of doom and shame to surface those models, and to do things that would make the outcome legible, is very hard.

(Meta: Before writing this comment I read your post in full. I have previously read and sat with most, but not all, of the posts linked to here. I did not reread them in the same sitting in which I read this post.)

AMA: Elizabeth Edwards-Appell, former State Representative

Should competent EAs be pursuing local political offices?

Support AMF in Tab for a Cause so it reaches its goal

Looking at ads and introducing ads into your environment is not free; it's mildly harmful. If you offered me 1 cent per ad to display ads in my browser, I would refuse. The money going to charity doesn't change that.
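As a rough back-of-the-envelope check (the per-ad attention cost and the time valuation below are assumptions I'm making up, not figures from the extension): even a few seconds of attention per ad is worth more than the cent on offer.

```python
# Back-of-the-envelope: is 1 cent per ad a good trade? (assumed numbers)
seconds_per_ad = 3.0     # attention consumed per ad (assumption)
dollars_per_hour = 30.0  # what you value your time at (assumption)

cost_per_ad = seconds_per_ad / 3600 * dollars_per_hour
print(f"attention cost per ad: ${cost_per_ad:.3f}")  # $0.025 > $0.01 offered
```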
