All of peterhartree's Comments + Replies

An easy win for hard decisions.

Quick way to create a Google Doc—browse to this web address:

doc.new

I've found that having a quick way to create new docs makes me more likely to do so.

(To set your typing focus to the browser address bar, press CMD+L or CTRL+L)

4 · peterhartree · 13d
To make things even faster: create a bookmark for "doc.new" and give it the name "nd". Then you can just type "nd" and press "enter".
What Are Your Software Needs?

Personally I'm looking for someone to help me build a simple plugin for the Obsidian note taking app.

The plugin should generate a list of links to notes that match criteria I specify.

Spec here. If you'd enjoy getting paid to make this for me, please send me a DM.
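For flavour, here's a minimal sketch of the kind of thing I have in mind, in Obsidian's plugin style (TypeScript). The command name and the hard-coded tag criterion below are illustrative stand-ins; the actual matching criteria are in the spec:

```typescript
import { Plugin, TFile } from "obsidian";

// Illustrative stand-in: the real matching criteria are in the spec.
const TARGET_TAG = "#project";

export default class MatchingNotesPlugin extends Plugin {
  async onload() {
    this.addCommand({
      id: "insert-matching-note-links",
      name: "Insert links to matching notes",
      editorCallback: (editor) => {
        // Scan every markdown file in the vault and keep the matches.
        const links = this.app.vault
          .getMarkdownFiles()
          .filter((file) => this.fileMatches(file))
          .map((file) => `- [[${file.basename}]]`)
          .join("\n");
        // Insert the generated list of links at the cursor.
        editor.replaceSelection(links + "\n");
      },
    });
  }

  // A note "matches" if its cached metadata contains the target inline tag.
  fileMatches(file: TFile): boolean {
    const cache = this.app.metadataCache.getFileCache(file);
    return (cache?.tags ?? []).some((t) => t.tag === TARGET_TAG);
  }
}
```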

Interesting, thanks Aaron. This result seems roughly in line with the fraction of EAG attendees who wear EA t-shirts.

For what it's worth, this thread reminded me of Joshua Greene arguing that the brand of "utilitarianism" is so bad as to be a lost cause.

Greene suggests "deep pragmatism" for the rebrand.

1 · dan.pandori · 9mo
This is an excellent point. Making a new name for an existing concept is generally bad, but utilitarianism (and the associated 'for the greater good') has been absolutely savaged in public perception.

I didn't downvote. For what it's worth, the main negative reaction I had was:

  1. The use of the EA lightbulb as an example of a great symbol. Personally, I've always found it kind of amateurish and cringe. I think mainly because it combines two very tired cliches (a lightbulb to represent "ideas" and a heart to represent "altruism"? Really?!).

I suppose I could also complain that:

  1. The claim that "symbolism is important" is not substantiated. Generically that seems true, but the claim that utilitarianism, the philosophical idea, needs a good/better symbol an

[...]
4 · Aaron Gertler · 9mo
Re: (6) — I was curious to see how many others felt the same way, so I ran a quick poll [https://www.facebook.com/groups/477649789306528/posts/1041499916254843/]. There's obviously inherent bias to doing this on a group full of people interested in EA, but it does seem like the logo is pretty well-liked. (Not that this invalidates your view, of course.)
1 · Dan Hendrycks · 9mo
We also crosspost on Reddit to attract people who know how to design logos. I would need to see evidence before accepting the claim that imagery is basically worthless. Even in academic ML research, it's a fatal mistake not to spend at least a day thinking about how to visualize the paper's concepts. This mistake is nonetheless common.
The Future of Humanity & The Methods of Ethics: A discussion of Bostrom, Sidgwick and Scheffler (Thursday 22 July, 6:30pm UK)

The salon recording is now available here: https://www.youtube.com/watch?v=E-uSDlbSXjw

A written summary is below:

We began by considering utilitarianism—particularly Sidgwick's "pleasure as desirable consciousness" hedonism—as a starting point for thinking about what matters. The value and failure modes of attempts at legibility and abstraction were discussed, as were different ideas about what makes a "meaningful" life. While accepting that utilitarian principles have, historically, supported important reforms (such as the de-criminalisation of homosexua

[...]
Betting on the best case: higher end warming is underrepresented in research

Somewhat related: Robert S. Pindyck on The Use and Misuse of Models for Climate Policy.

In short, his take (a) seems consistent with the claim that research and policy attention is being misallocated and (b) suggests a mechanism that might partly explain the misallocation.

Abstract (my emphasis):

In recent articles I have argued that integrated assessment models (IAMs) have flaws that make them close to useless as tools for policy analysis. IAM-based analyses of climate policy create a perception of knowledge and precision that is illusory and can fool poli

[...]
3 · jasonrwang · 9mo
Hi Peter, I think there is a nuance to disentangle – IAMs are confusingly used in two contexts: 1) models that try to optimize for some economically efficient social cost of carbon (and, by proxy, climate policies), and 2) models that attempt to simulate plausible futures. While Pindyck's writing was mostly about the first, most IPCC work concerns the second. Still, I absolutely agree with Pindyck's criticisms – they translate well over to the second category. We tried to cover that massive topic in the section about deeply uncertain factors and Robust Decision-Making, but with so few words, it is difficult to fully address those points.

A further tricky aspect is that, for the second type of models, the scenarios that are explored can themselves be misleading, or they can limit analysis. Lamontagne et al. (2018) [https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017EF000701] show with a full factorial of input scenarios that many combinations can lead to the same outcomes. When we don't know how the future will actually unfold, the chosen archetypes cloud our assessment.

Another aspect is that the inputs themselves are actually outputs of other models. Pielke Jr. and Ritchie (2021) discuss this in Distorting the view of our climate future: The misuse and abuse of climate pathways and scenarios [https://doi.org/10.1016/j.erss.2020.101890].

All of this is to say: yes, I agree that all models are wrong, but some are useful. Our argument is mainly that, through various approaches, we have some understanding of plausible temperature outcomes. We should prepare for all of these outcomes if we want to be robustly prepared.
What would you do if you had half a million dollars?
  1. In some cases yes, but only when they were working on specific projects that I expected to be legible and palatable to EA funders. Are there places I should be sending people who I think are very promising, to be considered for very-low-strings personal-development / freedom-to-explore type funding?
7 · Aaron Gertler · 9mo
The Infrastructure and LTF Funds have both (I think) made grants of the "help someone develop/save money" variety, mostly for students and new academics, but also in a couple of cases for people who were trying to pick up particular skills.

I also think it's perfectly valid for people to post questions about this kind of thing on the Forum — "I'm doing work X, which I think is very valuable, but I don't see an obvious way to get it funded to the point where I'd be financially secure — any suggestions?"
What would you do if you had half a million dollars?

A thought that motivates my other comments on this thread: reviewing my GWWC donations a while ago, I realised that if I suddenly had lots of money, one of the first questions I would ask myself is "what friends and acquaintances should I fund?". To an outsider this kind of thing can look like rather non-altruistic nepotism, but from the inside it seems like betting on the opportunities that you are unusually able to see. I think it actually is the latter, at least sometimes. My impression is that for-profit investors do a lot of "nepotistic investing", but I suspect that values like altruism and impartiality and transparency (as well as constraints of charitable legal status) make EA funders reluctant to go hard on this method.

What would you do if you had half a million dollars?

I would consider starting some kind of "major achievement" prize scheme.

Roughly, the idea I have in mind is to give large no-strings-attached lump sums to individuals who have:

(a) done exceptionally valuable work at non-trivial personal cost (e.g. massive salary sacrifice)

(b) a high likelihood of continuing to do extremely valuable work.

The aims would be:

(i) to help such figures become personally "set for life" in the way that successful startup founders sometimes do.

(ii) to improve the personal incentive structure faced by people considering EA careers.

Th[...]

2 · Aaron Gertler · 10mo
On (1): Have you encouraged any of these people to apply for existing sources of funding within EA? Did any of them do so successfully?

On (3): The most prominent EA-run "major achievement prize" is the Future of Life Award [https://futureoflife.org/future-of-life-award], which has been won by people well outside of EA. That's one way to avoid bad press — and perhaps some extremely impactful people would become more interested in EA as a result of winning a prize? (Though I expect you'd want to target mid-career people, rather than people who have already done their life's work in the style of the FLA.)
What would you do if you had half a million dollars?

I would consider allocating at least $100K to trying my own version of something like Tyler Cowen's Emergent Ventures.

2 · peterhartree · 10mo
A post on this topic, discussing the Thiel Fellowship, Entrepreneur First, and other attempts: https://www.strangeloopcanon.com/p/on-medici-and-thiel
All Possible Views About Humanity's Future Are Wild

Thanks for the post.

You give a gloss definition of "wild":

we should be doing a double take at any view that we live in such a special time

Could you say a bit more on this? I can think of many different reasons one might do a double take—my impression is that you're thinking of just a few of them, but I'm not sure exactly which.

4 · Holden Karnofsky · 10mo
I'm not sure I can totally spell it out - a lot of this piece is about the raw intuition that "something is weird here." One Bayesian-ish interpretation is given in the post: "The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher." In other words, there is something "suspicious" about a view that implies that we are in an unusually important position - it's the kind of view that seems (by default) more likely to be generated by wishful thinking, ego, etc. than by dispassionate consideration of the facts.

There's also an intuition along the lines of "If we're really in such a special position, I'd think it would be remarked upon more; I'm suspicious of claims that something really important is going on that isn't generally getting much attention."

I ultimately think we should bite these bullets (that we actually are in the kind of special position that wishful thinking might falsely conclude we're in, and that there actually is something very important going on that isn't getting commensurate attention). I think some people imagine they can avoid biting these bullets by e.g. asserting long timelines to transformative AI; this piece aims to argue that doesn't work.
Podcast: Sharon Hewitt Rawlette on metaethics and utilitarianism

Thank you for this, Gus and Sharon.

This interview presented one of the most compelling cases for a hedonistic theory of value that I've heard, shifting my credence from “quite low” to “hmm, ok, maaaaybe”.

Some bits that stood out:

  1. A pluralistic conception of positive and negative experiences, i.e. experiences differ not only in intensity but also in character (so we can recognise fundamental differences between bodily pleasure, love, laughter, understanding, etc).

  2. Hedonism can solve the epistemic problem that haunts moral realism, by saying that we directly experi

[...]
1 · Gus Docker · 1y
Glad you found it useful, Peter. I've been very influenced by Hewitt's meta-ethics myself, and I highly recommend reading her PhD thesis. You can also get it in book form here: https://www.amazon.com/Feeling-Value-Grounded-Phenomenal-Consciousness/dp/1534768017/
Help me find the crux between EA/XR and Progress Studies
  1. How do you give advice?

PS (Tyler Cowen): I think about what I believe, then I think about what it's useful for people to hear, and then I say that.

EA: I think about what I believe, and then I say that. I generally trust people to respond appropriately to what I say.

7 · Max_Daniel · 1y
I think it's more like:
Help me find the crux between EA/XR and Progress Studies

So here's a list of claims, with cartoon responses representing my impression of typical PS and EA views on things (insert caveats here):

  1. Some important parts of "developed world" culture are too pessimistic. It would be very valuable to blast a message of definite optimism, viz. "The human condition can be radically improved! We have done it in the past, and we can do it again. Here are some ideas we should try..."

PS: Strongly agree. The cultural norms that support and enable progress are more fragile than you think.

EA: Agree. But, as an

[...]
Progress studies vs. longtermist EA: some differences

@ADS: I enjoyed your discussion [https://applieddivinitystudies.com/pdf/moral_progress.pdf] of (1), but I understood the conclusion to be :shrug:. Is that where you're at?

Generally, my impression is that differential technological development [https://forum.effectivealtruism.org/tag/differential-progress] is an idea that seems right in theory, but the project of figuring out how to apply it in practice seems rather... nascent. For example:

(a) Our stories about which areas we should speed up and slow down are pretty speculative, and while I'm sure we can improve them, the prospects for making them very robust seem limited. DTD does not free us from the uncomfortable position of having to "take a punt" on some extremely high stakes issues.

(b) I'm struggling to think of examples of public discussion of how "strong" a version of DTD we should aim for in practice (pointers, anyone?).

1 · AppliedDivinityStudies · 7mo
Hey, sorry for the late reply, I missed this. Yes, the upshot from that piece is "eh". I think there are some plausible XR-minded arguments in favor of economic growth, but I don't find them overly compelling.

In practice, I think the particulars matter a lot. If you were to, say, make progress on a cost-effective malaria vaccine, it's hard to argue that it'll end up bringing about superintelligence in the next couple decades. But it depends on your time scale. If you think AI is more on a 100-year time horizon, there might be more reason to be worried about growth.

Re: DTD, I think it depends much more on global coordination than EA/XR people tend to think.
Progress studies vs. longtermist EA: some differences

To your Beckstead paraphrase, I'll add Tyler's recent exchange with Joseph Walker:

Cowen: Uncertainty should not paralyse you: try to do your best, pursue maximum expected value, just avoid the moral nervousness, be a little Straussian about it. Like, here's a rule; on average it's a good rule; we're all gonna follow it. Bravo, move on to the next thing. Be a builder.

Walker: So… Get on with it?

Cowen: Yes, ultimately the nervous Nellies, they're not philosophically sophisticated, they're overindulging their own neuroticism, when you get right down to it. So

[...]
Progress studies vs. longtermist EA: some differences

I've gotten several responses on this, and find them all fairly limited. As far as I can tell, the Progress Studies community just is not reasoning very well about x-risk.

Have you pressed Tyler Cowen on this?

I'm fairly confident that he has heard ~all the arguments that the effective altruism community has heard, and that he has understood them deeply. So I default to thinking that there's an interesting disagreement here, rather than a boring "hasn't heard the arguments" or "is making a basic mistake" thing going on.

In a recent note, I sketched a coupl[...]

4 · AppliedDivinityStudies · 1y
Thanks! I think that's a good summary of possible views. FWIW I personally have some speculative pro-progress anti-xr-fixation views, but haven't been quite ready to express them publicly, and I don't think they're endorsed by other members of the Progress community.

Tyler did send me some comments acknowledging that the far future is important in EV calculations. His counterargument is more or less that this still suggests prioritizing the practical work of improving institutions, rather than agonizing over the philosophical arguments. I'm heavily paraphrasing there. He did also mention the risk of falling behind in AI development to less cautious actors. My own counterargument here is that this is a reason to either a) work very quickly on developing safe AI or b) work very hard on international cooperation. Though perhaps he would say those are both part of the Progress agenda anyway.

Ultimately, I suspect much of the disagreement comes down to there not being a real Applied Progress Studies agenda at the moment, and if one were put together, we would find it surprisingly aligned with XR aims. I won't speculate too much on what such a thing might entail, but one very low-hanging recommendation would be something like:

* Ramp up high-skilled immigration (especially from China, especially in AI, biotech, EE and physics) by expanding visa access and proactively recruiting scientists
Progress studies vs. longtermist EA: some differences

Some questions to which I suspect key figures in Effective Altruism and Progress Studies would give different answers:

a. How much of a problem is it to have a mainstream culture that is afraid of technology, or that underrates its promise?

b. How does the rate of economic growth in the West affect the probability of political catastrophe, e.g. WWIII?

c. How fragile are Enlightenment norms of open, truth-seeking debate? (E.g. Deutsch thinks something like the Enlightenment "tried to happen" several times, and that these norms may be more fragile than we think[...]

2 · Max_Daniel · 1y
Thank you, very helpful! Not directly a discussion, but Richard Ngo and Jeremy Nixon's summary of some of Peter Thiel's relevant views [https://www.lesswrong.com/posts/Xqcorq5EyJBpZcCrN/thiel-on-progress-and-stagnation] might also be interesting in this context.
Progress studies vs. longtermist EA: some differences

Bear in mind that I'm more familiar with the Effective Altruism community than I am with the Progress Studies community.

Some general impressions:

  1. Superficially, key figures in Progress Studies seem a bit less interested in moral philosophy than those in Effective Altruism. But Tyler Cowen is arguably as much a philosopher as he is an economist, and he co-authored Against The Discount Rate (1992) with Derek Parfit. Patrick Collison has read Reasons and Persons, The Precipice, and so on, and is a board member of The Long Now Foundation. Peter Thiel takes

[...]
Some quick notes on "effective altruism"

Thanks for writing this, Jonas.

For what it's worth:

  1. I share the concerns you mentioned.
  2. I personally find the name "effective altruism" somewhat cringe and off-putting. I've become used to it over the years but I still hear it and feel embarrassed every now and then.
  3. I find the label "effective altruist" several notches worse: that elicits a slight cringe reaction most of the time I encounter it.
  4. The names "Global priorities" and "Progress studies" don't trigger a cringe reaction for me.
  5. I have a couple of EA-inclined acquaintances who have told me they were pu
[...]
Supportive scepticism in practice

Jess & Michelle: thanks for this excellent post. Three remarks I'd like to add:

1. We all need support, but individuals vary considerably in the kind of support they need in order to flourish. A kind of support that works well for one person might feel patronising, frustrating or stifling to another, or cold, distant and uncaring to a third. To be effectively supportive, we must be sensitive to individual needs.

2. Being supportive is difficult, so individuals in the community should help others support them. If the support you're getting from the commu[...]

0 · Michelle_Hutchinson · 7y
Thanks Peter, great points!
EAs on RSS and Reddit!

Nice work. We'll hopefully add this to the 80,000 Hours blog sidebar during Q1.

What should an effective altruist be committed to?

I think there are two questions here:

  1. How much of my time should I allocate to altruistic endeavour?
  2. How should I use the time I’ve allocated to altruistic endeavour?

Effective altruism clearly has a lot to say about (2). It could also say some things about (1), but I don’t think it is obliged to. These look like questions that can be addressed (fairly) independently of one another.

An aside: a weakness of the unqualified phrase “do the most good” is that it blurs these two questions. If you characterise the effective altruist as someone who wants to “do[...]

Generic good advice: do intense exercise often

I strongly endorse what Rob said. Intense regular exercise is by far the best productivity and general well-being hack I've ever adopted. In my experience, once you get into it, it's the opposite of a chore.

Second-best hack (for focus): the Pomodoro Technique (use Tadam as your timer; Mac only).

Third-best hack (for reducing stress): regular mindfulness meditation (about 10 minutes / day, use Headspace to learn the basics).