Nathan Young

Project manager/Director @ Frostwork (web app agency)
17,625 karma · Joined · Working (6-15 years) · London, UK
nathanpmyoung.com

Bio

Participation
4

Builds web apps (e.g. viewpoints.xyz) and makes forecasts. Currently I have spare capacity.

How others can help me

Talking to those in forecasting to improve my forecasting question generation tool

Writing forecasting questions on EA topics.

Meeting EAs I become lifelong friends with.

How I can help others

Connecting them to other EAs.

Writing forecasting questions on Metaculus.

Talking to them about forecasting.

Sequences
1

Moving In Step With One Another

Comments
2567

Topic contributions
20

Interesting take. I don't like it. 

Perhaps because I like saying overrated/underrated.

But also because overrated/underrated is a quick way to provide information. "Forecasting is underrated by the population at large" is much easier to think of than "forecasting is probably rated 4/10 by the population at large and should be rated 6/10".

Over/underrated requires about 3 mental queries: "Is it better or worse than my ingroup thinks?" "Is it better or worse than the population at large thinks?" "Am I gonna have to be clear about what I mean?"

Scoring the current and desired status of something requires about 20 queries: "Is 4 fair?" "Is 5 fair?" "What axis am I rating on?" "Popularity?" "If I score it a 4 will people think I'm crazy?"...

Like in some sense you're right that % forecasts are more useful than "more likely/less likely" and sizes are better than "bigger/smaller", but when dealing with intangibles like status I think it's pretty costly to calculate some status number, so I do the cheaper thing.


Also would you prefer people used over/underrated less or would you prefer the people who use over/underrated spoke less? Because I would guess that some chunk of those 50ish karma are from people who don't like the vibe rather than some epistemic thing. And if that's the case, I think we should have a different discussion.

I guess I think that might come from a frustration around jargon or rationalists in general. And I'm pretty happy to try and broaden my answer from over/underrated - just as I would if someone asked me how big a star was and I said "bigger than an elephant". But it's worth noting it's a bandwidth thing and often used because giving exact sizes in status is hard. Perhaps we shouldn't have numbers and words for it, but we don't.

I dunno, I think that sounds galaxy-brained to me. I think that giving numbers is better than not giving them and that thinking carefully about the numbers is better than that. I don't really buy your second order concerns (or think they could easily go in the opposite direction)

Yeah, I think you make good points. I think that forecasts are useful on balance, and then people should investigate them. Do you think that forecasting like this will hurt the information landscape on average? 

Personally, people engaged in this kind of forecasting generally seem more capable of changing their minds. I think the AI 2027 folks would probably be pretty capable of acknowledging they were wrong, which seems like a healthy thing. Probably more so than the media and academia?

Seems like a lot of specific, quite technical criticisms.

Sure, so we agree?

(Maybe you think I'm being derogatory, but no, I'm just allowing people who scroll down to the comments to see that I think this article contains a lot of specific, quite technical criticisms. If in doubt, I say things I think are true.)

Some thoughts:

  • I agree that the Forum's speech norms are annoying. I would prefer that people weren't banned for being impolite even while making useful points.
  • I agree in a larger sense that EA can be enervating, sapping one's will for conflict with many small touches
  • I agree that having one main funder and wanting to please them seems unhelpful
  • I've always thought you are a person of courage and integrity

On the other hand:

  • I think if you are struggling to convince EAs, that is some evidence. I too am in the "it's very likely not the end of the world but still worth paying attention to" camp. You haven't convinced me.
  • Your personal tweets have felt increasingly high conflict and less epistemically careful. I think I muted you over a year ago. I guess you hate this take, but it's true.

I don't expect this to change your mind, but maybe there are reasons you aren't convincing very informed people besides us being blind to reality. I admit I'd enjoy being rich, but I'm not particularly convinced I'll go try and work for a lab. And I don't think I bend my opinions towards Coefficient, either, and have never been funded by them.

I think you're right to say that a large proportion of the public will come to agree with you. But I also expect a large proportion of the public to give talking points about water and energy use, and that Disney has a moral right to their characters for as long as copyright says they do. This doesn't seem good to me. I sense it seems fine to you.

I don't think this is an all-out war. I guess that you do. If so, we disagree. I will help to the extent I agree with you and be flat-footed and confused to the extent that I don't. I get that that's annoying. I feel some of that annoyance myself at ways I disagree with the community. But to me it feels part of being in a community. I have to convince people. And you haven't convinced me.

I feel this quite a lot:

  • The need to please OpenPhil etc
  • The sense of inness or outness based on cause area
  • The lack of comparing notes openly
  • The sense that one can't "just have friends"

And so I think Holly's advice is worth reading, because it's fine advice.

Personally I feel a bit differently. I have been hurt by EA, but I still think it's a community of people who care about doing good per $. I don't know how we get to a place that I think is more functional, but I still think it's worth trying, given the number of people and resources attached to this space. But yes, I am less emotionally involved than I once was.

Seems like a lot of specific, quite technical criticisms. I don't endorse Thorstadt's work in general (or not endorse it), but often when he cites things I find them valuable. This has enough material that it seems worth reading.

I think my main disagreement is here:

“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so” … I think the rationalist mantra of “If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics” will turn out to hurt our information landscape much more than it helps.

I weakly disagree here. I am very much in the "make up statistics and be clear about that" camp. I disagree a bit with AI 2027 in that they don't always label their forecasts with their median (which, it turns out, wasn't 2027?).

I think that it is worth having and tracking individual predictions, though I acknowledge the risk that people are going to take them too seriously. That said, after some number of forecasters I think this info does become publishable (Katja Grace's AI survey contains a lot of forecasts and is literally published).

My comments are on LessWrong (see link below) but I thought I'd give you lot a chance to comment also.

@Gavriel Kleinwaks (who works in this area) gives her recommendation. When asked whether she "backed" them:

I do! (Not in the financial sense, tbc.) But just want to flag that my endorsement is confounded. Basically, Aerolamp uses the design of the nonprofit referenced in my post, OSLUV, and most of my technical info about far-UV comes from a) Aerolamp cofounder Viv Belenky and b) OSLUV. I've been working with Viv and OSLUV for a couple of years, long before the founding of Aerolamp, and trust their information, but you should know that my professional opinion is highly correlated with theirs—1Day Sooner doesn't have the equipment to do independent testing.

I think it's the ideal outcome that a bunch of excellent researchers took a look at the state of the field and made their own product. So I'm not too worried about relying on this team's info, but you should just have that context.

Fwiw, Mox (moxsf.com), run by Austin Chen, has installed a couple of Aerolamps and they were easy to set up and are running smoothly.

This is a cool post, though I think it's kind of annoying not to be able to see the specific numbers one is assigning without reading the chart.
