Nathan Young

Project manager/Director @ Frostwork (web app agency)
15434 karma · Working (6-15 years) · London, UK



Builds web apps and makes forecasts. Currently I have spare capacity.

How others can help me

Talking to those in forecasting to improve my forecasting question generation tool

Writing forecasting questions on EA topics.

Meeting EAs I become lifelong friends with.

How I can help others

Connecting them to other EAs.

Writing forecasting questions on metaculus.

Talking to them about forecasting.


Moving In Step With One Another


Topic contributions

Interesting take. I don't like it. 

Perhaps because I like saying overrated/underrated.

But also because overrated/underrated is a quick way to provide information. "Forecasting is underrated by the population at large" is much easier to think of than "forecasting is probably rated 4/10 by the population at large and should be rated 6/10".

Over/underrated requires about three mental queries: "Is it better or worse than my ingroup thinks?" "Is it better or worse than the population at large thinks?" "Am I going to have to be clear about what I mean?"

Scoring the current and desired status of something requires about 20 queries: "Is 4 fair?" "Is 5 fair?" "What axis am I rating on?" "Popularity?" "If I score it a 4 will people think I'm crazy?"...

Like in some sense you're right that % forecasts are more useful than "more likely/less likely" and sizes are better than "bigger/smaller", but when dealing with intangibles like status I think it's pretty costly to calculate some status number, so I do the cheaper thing.


Also would you prefer people used over/underrated less or would you prefer the people who use over/underrated spoke less? Because I would guess that some chunk of those 50ish karma are from people who don't like the vibe rather than some epistemic thing. And if that's the case, I think we should have a different discussion.

I guess I think that might come from a frustration around jargon or rationalists in general. And I'm pretty happy to try and broaden my answer from over/underrated - just as I would if someone asked me how big a star was and I said "bigger than an elephant". But it's worth noting it's a bandwidth thing and often used because giving exact sizes in status is hard. Perhaps we shouldn't have numbers and words for it, but we don't.

Sometimes I try to imagine the friend of mine I would be least surprised to have very poor mental health or to be suicidal and then I see how that person is doing. 

Seems like bad behaviour from Altman (though not terribly surprising). 

I doubt I'll comment much on this publicly because I doubt I have much to add. I think there is a risk of overextension here - this seems like dumb/bad behaviour but isn't as harmful as the NDA stuff. I think it would be easy to shift from "are OpenAI being good stewards of AI?" to "we sneer whenever Altman makes a mistake". I think that would be a bad transition.

For those who don't know, Matt Bruenig is a large and well-known twitter account. It would be easy to fact check this and very likely to be found out if it was false, so it is probably true. 

Where is that image from? Is there a colour version? I might share it if there were.

I've recently been reading Thresholder book 4 by Alexander Wales which exists in a functional socialist world. I think it's easier to empathise with this case when I'm reading about a place where everyone gets enough food, clothes, furniture and housing. 

Some thoughts on this:

  1. The global economy’s current mode of allocating resources is suboptimal. (Otherwise, why would effective altruism be necessary?)

I am not sure most EAs think this. I think they think that we are bad at allocating resources to poorer people/animals. Charity is flawed because the people paying and the people and beings benefitting are different, which ruins the incentives. In some sense foundations are already a more command-style means of allocating money, but it doesn't follow that the whole economy should be one.

And some socialist economies have had some successes (human development in Kerala, economic growth in China, the USSR’s role in space technology and smallpox eradication, Cuba’s healthcare system)

This is the article I want to read, FYI. If I were more confident on these points, then I would be more in agreement overall. If you want, I'd happily start a Google doc on these points and attempt to turn it into a blog post later, but I really don't know much about it. I'd like to learn more, though!

In addition, socialist influence or pressure has played a vital part in reforms within capitalism – such as the expansion of public services, redistribution and decolonisation – which have almost certainly been positive for welfare.

This is disputed. Some claim that these things would have happened as people got wealthier. Personally I don't know. 

Firstly, EAs should be more willing to fund and conduct research into alternative economic systems, socialist ones included.

Yes in theory, but how can this be done without grift/vacuousness? I like the Charter Cities Institute trying to set up charter cities. I'd prefer a commune trying to scale itself rather than research I wouldn't trust once it was done.

For instance, there is much recent socialist literature on climate change[8], on movement-building and strategy[9], and on the economics of the future[10]. Engagement with socialist thought, especially socialist critiques of capitalism, might have woken EAs up to the dangers associated with dependence upon billionaires and the current competitive race to AI at an earlier date.

Yes I have some time for this, though it seems like the first people I'd expect to do this kind of translation work are socialist EAs, of which there are a few. I sort of don't get why they aren't most passionate about translating their learnings into EA language. I mean, I do that with forecasting and some ex-Christian learnings.

Thirdly, EAs who want to have impact through politics should regard socialists as natural allies.

This is not my experience of socialists. I agree with them on a number of things, but often I get the sense that I am not nearly good enough. I know people who won't come to my house because my housemate (not an EA) is a Tory. Generally they come across to me as people who want perfection before they'll deal with me and I doubt I'll make the vibe check.

But partly it is due to effective altruism’s proximity to capitalists.

I mean, yes, and as above I think that some skepticism about billionaires is wise. But on balance, tens of thousands or more lives have been saved due to the donations of billionaires, and many of the changes in the graphs below seem coincident with markets. I guess I think that if you don't give a good chunk of credit to the allocative efficiency of markets and their wealth creation then you will get wrong answers elsewhere, as I think many socialists do.


Poverty - Our World in Data

And that's before we get to socialist errors on climate change, which currently many seem to be making worse.


Again, I think that threaded discussion is the right way to go here. I'd prefer to have a google doc than discuss here (or if you want to, perhaps respond to a specific point and we can discuss them 1 by 1). 

I hope you're well. 

Worth noting that this is less than many would have if their assets had been locked. Though compared to many assets 18% return in 19 months isn't bad. 

I tried to find errors in this article and wasn't able to, though I didn't try that hard.

I am not sure I want to tweet it, since it seems very adversarially selected. But of similar magnitude is the issue that people quote Torres as if they are a reasonable source when they aren't.

I wish someone would write a more balanced article. I don't think Torres was being racist to Boghossian's kid, for instance. I think they were harassing Boghossian, though, and making light of jokes about the kid. And that's bad enough.

I find it hard to know what one should do over the long term about a badly behaved actor criticized by other badly behaved actors.
