Apart from a few things (the r sounds, for instance), I actually think his accent might be phonetically closer to an American accent than most UK accents are. It didn't seem hard to understand.

"Business Adventures" by John Brooks is a collection of midcentury New Yorker articles about business, obviously very old-fashioned but they are really quite good. There's something to be said for learning about:

  • a short squeeze in shares of the Piggly Wiggly supermarket chain (its founder went broke, then later tried to build something like Amazon's Just Walk Out, but with 1950s punch-card technology)
  • the first insider trading lawsuit
  • the first "Big Tech" information technology company that spun off from university research and promoted liberal political causes (Xerox)
  • the earliest cases of employees being sued over noncompete/intellectual property agreements

That's in addition to many other interesting but less obviously relevant topics.

José Figueres Ferrer was victorious in the Costa Rican civil war, after which he appointed himself head of the provisional junta.

Sounds like trouble — but he only ruled for 18 months, during which time he abolished the army and extended the franchise to women and nonwhite people. Then he stepped down and there have been fair elections since.

In meditation there are the jhanas, states that include intense physical pleasure (like a runner's high). I learned how to reach them, but the pleasure gets boring -- though no less intense -- after about 10 minutes, and I feel tempted to go do something less pleasurable instead (with no less pain and no greater future benefit). (And you'd think it would be habit-forming, but in fact I have a hard time keeping up my meditation habit...)

What this taught me is that I don't always want to maximize pleasure, even if I can do it with zero cost. I thus have a hard time making sense of what hedonists mean by "pleasure".

If it's just positive emotions and physical pleasure, then that means sometimes hedonists would want to force me to do things I don't want to do, with no future benefit to me, which seems odd. (I guess a real bullet-biting hedonist could say that I have a kind of akrasia, but it's not very persuasive.)

It also seems that sometimes hedonists who say "pleasure" mean some subtler, multidimensional notion of the subjective experience of human flourishing. Giving a clear definition of this seems beyond anybody now living, but in principle it seems like a safer basis for hedonistic utilitarianism, and it bothers me a lot less.

But now I'm not so sure, because I think most of your arguments here also go through even for a complicated, subtle notion of "pleasure" that reflects all our cherished human values.

I personally think Distill just had way-too-high standards for the communication quality of the papers they wanted to publish. They also specifically wanted work that "distills" important concepts, rather than the traditional novel/beat-SOTA ML paper.

I think I get the strategic point of this -- by holding a high bar, they hoped to build prestige and become a venue where work that traditionally "doesn't count" would count. But it seems like that failed, and they might have been better off with lower standards and/or allowing more traditional ML research.

You could still do a good ML paper with some executable code, animations, and interactive diagrams. Maybe you can get most of the way there by auto-processing a Jupyter notebook and then cleaning it up a little. It might have mediocre writing and ugly diagrams, but that's probably fine, and in many cases it would still be an improvement on a PDF.
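
As a rough sketch of that auto-processing step (assuming the standard nbformat/nbconvert libraries and a hypothetical notebook named paper.ipynb -- this is a minimal starting point, not a full publishing pipeline):

```python
# Minimal sketch: export a Jupyter notebook as a standalone HTML "paper".
# "paper.ipynb" is a hypothetical filename for this example.
import nbformat
from nbconvert import HTMLExporter

nb = nbformat.read("paper.ipynb", as_version=4)

exporter = HTMLExporter()
exporter.exclude_input_prompt = True   # hide the "In [1]:" prompts
exporter.exclude_output_prompt = True  # hide the "Out[1]:" prompts

body, _resources = exporter.from_notebook_node(nb)

with open("paper.html", "w", encoding="utf-8") as f:
    f.write(body)
```

From there, the cleanup would be hand-editing the HTML and layering in whatever animations or interactive diagrams the paper needs.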

Unlike poverty and disease, many of the harms of the criminal justice system are due to intentional cruelty. People are raped, beaten, and tortured every day in America's jails and prisons. There are smaller cruelties, too, like prohibiting detainees from seeing visitors in order to extort more money out of their families.

To most people, seeing others commit intentional evil (and even get rich off it) feels viscerally worse than harm due to natural causes.

I think that from a ruthless expected-utility perspective this probably is correct in the abstract, i.e. all else equal, murder is worse than an equivalently painful accidental death. However, I doubt that taking it into account (even being very generous about things like "illegible corrosion to the social fabric") would importantly change your conclusions about $/QALY in this case, because all else is not equal.
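
To make "all else is not equal" concrete, here's a toy sensitivity check -- every number below is a made-up placeholder, not a figure from the post:

```python
# Toy sensitivity check with made-up placeholder numbers: even a generous
# extra moral weight on intentionally inflicted harm doesn't close a large
# underlying gap in cost-effectiveness.
reform_qaly_per_dollar = 1 / 10_000  # hypothetical: $10,000 per QALY via CJ reform
health_qaly_per_dollar = 1 / 100     # hypothetical: $100 per QALY via a health charity

for cruelty_weight in (1, 2, 5):     # multiplier for harm done through intentional cruelty
    adjusted = cruelty_weight * reform_qaly_per_dollar
    ratio = adjusted / health_qaly_per_dollar
    print(f"cruelty weight {cruelty_weight}x: reform is {ratio:.0%} as cost-effective")
```

Even at a 5x weight, the hypothetical gap barely narrows -- that's the sense in which the correction is real but unlikely to flip the conclusion.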

But I think the distinction is probably worth making, as it's a major difference between criminal justice reform and the two baselines for comparison.

Good call -- I added a little more detail about these two discussions.

A thought that occurred to me about some of the bad dynamics on social media:

Some well-known researchers in the AI Ethics camp have been critical of the AI Safety camp (or of associated ideas like longtermism). By contrast, AI Safety researchers seem to be neutral-to-positive on AI Ethics, so there is some asymmetry.

However, there are certainly mainstream non-safety ML researchers who are harshly (typically unfairly) critical of AI Ethics. And there are also AI-Safety/EA-adjacent popular voices (like Scott Alexander) who criticize AI Ethics. Then on top of this there are fairly vicious anonymous trolls on Twitter.

So some AI Ethics researchers reasonably feel like they're being unfairly attacked and that people socially connected to EA/AI Safety are in the mix, which may naturally lead to hostility even if it isn't completely well-directed.

https://facctconference.org is the major conference in the area. It's interdisciplinary -- a mix of technical ML work, social/legal scholarship, and humanities-type papers.

Some big names: Moritz Hardt, Arvind Narayanan, and Solon Barocas wrote a textbook (https://fairmlbook.org), and they and many of their students are important contributors. Cynthia Dwork is another big name in fairness, and Cynthia Rudin in explainable/interpretable ML. That's a non-exhaustive list, but I think it's a decent seed for a search through coauthors.

I believe there is in fact important technical overlap between the two problem areas. For example, https://causalincentives.com is research from a group of people who see themselves as working in AI safety, yet people in the fair ML community are also very interested in causality and study it for similar reasons using similar tools.

I think much of the expressed animosity exists only because the two research communities tend to select for people with very different preexisting political commitments (left/social-justice vs. neoliberal), and they find each other threatening for that reason.

On the other hand, there are differences. An illustrative one is that fair ML people care a lot about the fairness properties of linear models, both in theory and in practice right now, whereas it would be strange if an AI Safety person cared at all about a linear model -- linear models are just too small and nothing like the kind of AI that could become unsafe.
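
For a sense of what "fairness properties of a linear model" can mean concretely, here's a toy sketch: fitting a logistic regression on synthetic data and measuring its demographic parity gap. The data, the sensitive attribute, and all the numbers are invented for illustration:

```python
# Toy example: demographic parity gap of a linear model on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # synthetic features
group = rng.integers(0, 2, size=1000)          # synthetic sensitive attribute
# Outcome correlated with both a feature and group membership:
y = (X[:, 0] + 0.5 * group + rng.normal(size=1000) > 0).astype(int)

features = np.column_stack([X, group])
clf = LogisticRegression().fit(features, y)
pred = clf.predict(features)

# Demographic parity: compare positive prediction rates across groups.
rates = [pred[group == g].mean() for g in (0, 1)]
print(f"positive rate by group: {rates}, gap: {abs(rates[0] - rates[1]):.3f}")
```

Questions like how such gaps arise in linear models, and what constraints or post-processing remove them, are live research topics in fair ML in a way they just aren't in AI Safety.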

My feeling about the phrase "Mastermind Group" is fairly negative. I have heard people mention it from time to time, and I knew it came from Napoleon Hill, who was more or less the inventor of the self-help/self-improvement book. I associate the phrase, I think reasonably, with the whole culture of self-improvement seminars and content that descends from Hill -- what used to be authors/speakers like Tony Robbins and is now also really big on YouTube. The kind of thing where someone sells you a course on how to get rich, and the way to get rich turns out to be successfully selling a course on how to get rich.

Take this for what it's worth -- just one person's possibly skewed gut reaction to this phrase. I think the idea of peers meeting in a group to support each other remains sound.
