peterhartree

3420 karma · Joined · Working (6-15 years) · Reykjavik, Iceland
twitter.com/peterhartree
Interests:
Forecasting

Bio

Now: TYPE III AUDIO; Independent study.

Previously: 80,000 Hours (2014-15; 2017-2021). Worked on web development, product management, strategy, internal systems, IT security, etc. Read my CV.

Also: Inbox When Ready; Radio Bostrom; The Valmy; Comment Helper for Google Docs.

Comments

I also don't see any evidence for the claim of EA philosophers having "eroded the boundary between this kind of philosophizing and real-world decision-making".

Have you visited the 80,000 Hours website recently?

I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we’re attempting this, but we must recognise that this is an extraordinarily risky business. Even the wisest humans are unqualified for this role. Many of our attempts are 51:49 bets at best—sometimes worth trying, rarely without grave downside risk, never without an accompanying imperative to listen carefully for feedback from the world. And yes—diverse, hedged experiments in overconfidence also make sense. And no, SBF was not hedged anything like enough to take his 51:49 bets—to the point of blameworthy, perhaps criminal negligence.

A notable exception to the “we’re mostly clueless” situation is: catastrophes are bad. This view passes the “common sense” test, and the “nearly all the reasonable takes on moral philosophy” test too (negative utilitarianism is the notable exception). But our global resource allocation mechanisms are not taking “catastrophes are bad” seriously enough. So, EA—along with other groups and individuals—has a role to play in pushing sensible measures to reduce catastrophic risks up the agenda (as well as the sensible disaster mitigation prep).

(Derek Parfit’s “extinction is much worse than 99.9% wipeout” claim is far more questionable—I put some of my chips on this, but not the majority.)

As you suggest, the transform function from “abstract philosophical idea” to “what do” is complicated and messy, and involves a lot of deference to existing norms and customs. Sadly, I think that many people with a “physics and philosophy” sensibility underrate just how complicated and messy the transform function really has to be. So they sometimes make bad decisions on principle instead of good decisions grounded in messy common sense.

I’m glad you shared the J.S. Mill quote.

…the beliefs which have thus come down are the rules of morality for the multitude, and for the philosopher until he has succeeded in finding better

EAs should not be encouraged to grant themselves practical exception from “the rules of morality for the multitude” if they think of themselves as philosophers. Genius, wise philosophers are extremely rare (cold take: Parfit wasn’t one of them).

To be clear: I am strongly in favour of attempts to act on important insights from philosophy. I just think that this is hard to do well. One reason is that there is a notable minority of “physics and philosophy” folks who should not be made kings, because their “need for systematisation” is so dominant as to be a disastrous impediment for that role.

In my other comment, I shared links to Karnofsky, Beckstead and Cowen expressing views in the spirit of the above. From memory, Carl Shulman is in a similar place, and so are Alexander Berger and Ajeya Cotra.

My impression is that more than half of the most influential people in effective altruism are roughly where they should be on these topics, but some of the top “influencers”, and many of the “second tier”, are not.

(Views my own. Sword meme credit: the artist currently known as John Stewart Chill.)

1. My current process

I check a couple of sources most days, at random times during the afternoon or evening. I usually do this on my phone, during breaks or when I'm otherwise AFK. My phone and laptop are configured to block most of these sources during the morning (LeechBlock and AppBlock).

When I find something I want to engage with at length, I usually put it into my "Reading inbox" note in Obsidian, or into my weekly todo list if it's above the bar.

I check my reading inbox on evenings and weekends, and also during "open" blocks that I sometimes schedule as part of my work week. 

I read about 1/5 of the items that get into my reading inbox, either on my laptop or iPad. I read and annotate using PDF Expert, take notes in Obsidian, and use Mochi for flashcards. My reading inbox—and all my articles, highlights and notes—are synced between my laptop and my iPad.


2. Most useful sources

(~Daily)

  • AI News (usually just to the end of the "Twitter recap" section). 
  • Private Slack and Signal groups.
  • Twitter (usually just the home screen, sometimes my lists).
  • Marginal Revolution.
  • LessWrong and EA Forum (via the 30+ karma podcast feeds; I rarely check the homepages).

(~Weekly)

  • Newsletters: Zvi, CAIS.
  • Podcasts: The Cognitive Revolution, AXRP, Machine Learning Street Talk, Dwarkesh.

3. Problems

I've not given the top of the funnel—the checking sources bit—much thought. In particular, I've never sat down for an afternoon to ask questions like "why, exactly, do I follow AI news?", "what are the main ways this is valuable (and disvaluable)?" and "how could I make it easy to do this better?". There's probably a bunch of low-hanging fruit here.

Twitter is... twitter. I currently check the "For you" home screen every day (via web browser, not the app). At least once a week I'm very glad that I checked Twitter—because I found something useful, that I plausibly wouldn't have found otherwise. But—I wish I had an easy way to see just the best AI stuff. In the past I tried to figure something out with Twitter lists and Tweetdeck (now "X Pro"), but I've not found something that sticks. So I spend most of my time with the "For you" screen, training the algorithm with "not interested" reports, an aggressive follow/unfollow/block policy, and liberal use of the "mute words" function. I'm sure I can do better...

My newsletter inbox is a mess. I filter newsletters into a separate folder, so that they don't distract me when I process my regular email. But I'm subscribed to way too many newsletters, many of which aren't focussed on AI, so when I do open the "Newsletters" folder, it's overwhelming. I don't reliably read the sources which I flagged above, even though I consider them fairly essential reading (and would prefer to read them to many of the things I do, in fact, read). 

I addictively over-consume podcasts, at the cost of "shower time" (diffuse/daydream mode) or higher-quality rest. 

I don't make the most of LLMs. I have various ideas for how LLMs could improve my information discovery and engagement, but on my current setup—especially on mobile—the affordances for using LLMs are poor.
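
For illustration, here is a rough sketch of one such idea: an LLM pass that scores incoming items against my interests before they reach my reading inbox. The model name, prompt and threshold below are placeholders, not a setup I actually run.

```python
# Rough sketch (not a working setup): score incoming items for relevance with
# an LLM, and only let the higher-scoring ones into my reading inbox.
# The model name, prompt and threshold are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an API key is available in the environment

def relevance_score(title: str, summary: str) -> int:
    """Ask the model for a 0-10 relevance score against my stated interests."""
    prompt = (
        "I follow AI safety, AI policy and forecasting. "
        "On a scale of 0 to 10, how relevant is this item to those interests? "
        "Reply with a single integer.\n\n"
        f"Title: {title}\nSummary: {summary}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    try:
        return int(response.choices[0].message.content.strip())
    except (ValueError, AttributeError):
        return 0  # treat unparseable replies as "not relevant"

# Example: triage a batch of items pulled from newsletters or feeds.
items = [
    {"title": "New interpretability paper", "summary": "Results on ..."},
    {"title": "Gadget launch roundup", "summary": "..."},
]
reading_inbox = [i for i in items if relevance_score(i["title"], i["summary"]) >= 7]
```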

I miss things that I'd really like to know about. I very rarely miss a "big story", but I'd guess I miss several things that I'd really like to know about each week, given my particular interests.

I find out about many things I don't need to know about.

I could go on...

Thanks for your feedback.

For now, we think our current voice model (provided by Azure) is the best available option all things considered. There are important considerations in addition to human-like delivery (e.g. cost, speed, reliability, fine-grained control).

I'm quite surprised that an overall-much-better option hasn't emerged before now. My guess is that something will show up later in 2024. When it does, we will migrate.

There are good email newsletters that aren't reliably read.

Readit.bot turns any newsletter into a personal podcast feed.

TYPE III AUDIO works with authors and orgs to make podcast feeds of their newsletters—currently Zvi, CAIS, ChinAI and FLI EU AI Act, but we may do a bunch more soon.

I think that "awareness of important simple facts" is a surprisingly big problem.

Over the years, I've had many experiences of "wow, I would have expected person X to know about important fact Y, but they didn't".

The issue came to mind again last week.

My sense is that many people, including very influential folks, could systematically—and efficiently—improve their awareness of "simple important facts".

There may be quick wins here. For example, there are existing tools that aren't widely used (e.g. Twitter lists; Tweetdeck). There are good email newsletters that aren't reliably read. Just encouraging people to make this an explicit priority and treat it seriously (e.g. have a plan) could go a long way.

I may explore this challenge further sometime soon.

I'd like to get a better sense of things like:

a. What particular things would particular influential figures in AI safety ideally do?
b. How can I make those things happen?

As a very small step, I encouraged Peter Wildeford to re-share his AI tech and AI policy Twitter lists yesterday. Recommended.

Happy to hear from anyone with thoughts on this stuff (p@pjh.is). I'm especially interested to speak with people working on AI safety who'd like to improve their own awareness of "important simple facts".

Bret Taylor and Larry Summers (members of the current OpenAI board) have responded to Helen Toner and Tasha McCauley in The Economist.

The key passages:

Helen Toner and Tasha McCauley, who left the board of OpenAI after its decision to reverse course on replacing Sam Altman, the CEO, last November, have offered comments on the regulation of artificial intelligence (AI) and events at OpenAI in a By Invitation piece in The Economist.

We do not accept the claims made by Ms Toner and Ms McCauley regarding events at OpenAI. Upon being asked by the former board (including Ms Toner and Ms McCauley) to serve on the new board, the first step we took was to commission an external review of events leading up to Mr Altman’s forced resignation. We chaired a special committee set up by the board, and WilmerHale, a prestigious law firm, led the review. It conducted dozens of interviews with members of OpenAI's previous board (including Ms Toner and Ms McCauley), OpenAI executives, advisers to the previous board and other pertinent witnesses; reviewed more than 30,000 documents; and evaluated various corporate actions. Both Ms Toner and Ms McCauley provided ample input to the review, and this was carefully considered as we came to our judgments.

The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr Altman’s replacement. In fact, WilmerHale found that “the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners.”

Furthermore, in six months of nearly daily contact with the company we have found Mr Altman highly forthcoming on all relevant issues and consistently collegial with his management team. We regret that Ms Toner continues to revisit issues that were thoroughly examined by the WilmerHale-led review rather than moving forward.

Ms Toner has continued to make claims in the press. Although perhaps difficult to remember now, OpenAI released ChatGPT in November 2022 as a research project to learn more about how useful its models are in conversational settings. It was built on GPT-3.5, an existing AI model which had already been available for more than eight months at the time.

Andrew Mayne points out that “the base model for ChatGPT (GPT 3.5) had been publicly available via the API since March 2022”.

On (1): it's very unclear how ownership could be compatible with no financial interest.

Maaaaaybe (2) explains it. That is: while ownership does legally entail financial interest, it was agreed that this was only a pragmatic stopgap measure, such that in practice Sam had no financial interest.

For context:

  1. OpenAI claims that while Sam owned the OpenAI Startup Fund, there was “no personal investment or financial interest from Sam”.
  2. In February 2024, OpenAI said: “We wanted to get started quickly and the easiest way to do that due to our structure was to put it in Sam's name. We have always intended for this to be temporary.”
  3. In April 2024 it was announced that Sam no longer owns the fund.

Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he constantly was claiming to be an independent board member with no financial interest in the company.

Sam has publicly said he has no equity in OpenAI. I've not been able to find public quotes where Sam says he has no financial interest in OpenAI (does anyone have a link?).
