Nathan Young

Product Management @ Forecasting Consultancy
14606 karma · Joined May 2019 · Working (0-5 years) · London, UK

Bio


Create prediction markets and forecasting questions on AI risk and biorisk. I also work part-time at a prediction market.

Use my connections on Twitter to raise the profile of these predictions and increase the chance that decision-makers discuss these issues.

How others can help me

Talking to those in forecasting to improve my forecasting question generation tool

Writing forecasting questions on EA topics.

Meeting EAs I become lifelong friends with.

How I can help others

Connecting them to other EAs.

Writing forecasting questions on Metaculus.

Talking to them about forecasting.

Sequences
1

Moving In Step With One Another

Interesting take. I don't like it. 

Perhaps because I like saying overrated/underrated.

But also because overrated/underrated is a quick way to provide information. "Forecasting is underrated by the population at large" is much easier to think of than "forecasting is probably rated 4/10 by the population at large and should be rated 6/10".

Over/underrated requires about 3 mental queries: "Is it better or worse than people think?", "Is it better or worse than my ingroup thinks?", "Am I gonna have to be clear about what I mean?"

Scoring the current and desired status of something requires about 20 queries: "Is 4 fair?", "Is 5 fair?", "What axis am I rating on?", "Popularity?", "If I score it a 4 will people think I'm crazy?"...

In some sense you're right that % forecasts are more useful than "more likely/less likely" and sizes are better than "bigger/smaller", but when dealing with intangibles like status I think it's pretty costly to calculate some status number, so I do the cheaper thing.


Also would you prefer people used over/underrated less or would you prefer the people who use over/underrated spoke less? Because I would guess that some chunk of those 50ish karma are from people who don't like the vibe rather than some epistemic thing. And if that's the case, I think we should have a different discussion.

I guess I think that might come from a frustration around jargon or rationalists in general. And I'm pretty happy to try and broaden my answer from over/underrated - just as I would if someone asked me how big a star was and I said "bigger than an elephant". But it's worth noting it's a bandwidth thing and often used because giving exact sizes in status is hard. Perhaps we shouldn't have numbers and words for it, but we don't.

My base rate is that people who are found guilty of crimes are probably guilty of them.

I don't believe anyone knew about Alameda's additional $8b liability to FTX via the fiat@ account until summer-fall 2022, SBF included.

I do believe they knew about it. Most people do know about such things. 

I haven't come up with a much better course of action than they took when they discovered this mistake.

Well, Alameda could have not had a balance sheet mostly composed of FTT. And FTX could have not lent Alameda the money in the first place. FTX could have admitted the mistake as soon as they found it.

The funds Alameda otherwise borrowed from FTX came entirely from the margin lending program, which was permissible under the Terms Of Service.

This feels like exactly the kind of technicality that a trial would spot.

I don't feel like getting into all this again, but a) it feels like you start from a very different prior to mine, and b) I don't think your reasoning turns up some huge amount of new information. Clients did not expect their money to be lent to a hedge fund. That breaks norms. It doesn't seem surprising to me that it was illegal too.

Surely they don't pay the lawyers. Surely after the debtors are paid they pay the shareholders?

Seems really good, though I didn't read it fully. I liked it even before I realised you'd written it. 


Minor disagreement:

In short, it’s capitalism versus humanity.

I guess I'd prefer something like "it's optimisation versus humanity" or "it's unfettered capitalism versus humanity". Capitalism is a good servant but a bad master, and I agree that if we just optimise for GDP we'll probably end up in real trouble. But capitalism as commonly understood is really good.

Also this felt a little out of tone with the rest of the article, as if you wanted some lefty credentials. Would you actually defend this point?

So I asked my friend who runs training at universities on this topic and they said that at one university it appeared that way for a while, which is moderately weaker than what I said. So I got that wrong.

But that still works as an example. There was a real world place where things were worse than here.

Some thoughts:

  • Seems this situation could have been handled worse.
    • When I was a Christian, several churches I was part of had serial sexual harassment by powerful men that was only discovered years later. This seems unlikely in EA.
    • My friends in political communities imply sexual harassment is rife
    • Cotton-Barratt could have been thrown out without any possibility of discussion. I am reliably told this is the policy of some UK universities.
    • LessWrong user Mingyuan writes usefully on how difficult due process is here.
  • This topic feels costly to discuss. It is my general view that communities need to be able to have discussions on what their norms should be. I don't think EA has really been capable of that here. 
  • When did the worst of these cases happen? If the worst cases happened 5 years ago, that seems materially different than if they happened 1 year ago. E.g., where does the majority of the 'predictable harm' fall on the timeline? Likewise, Cotton-Barratt has been de facto ostracised for months now. Seems like this should count towards the ban time.
  • To what extent is this about Owen and to what extent is this about making an example? I would softly hold that some parts of this decision are to show that even powerful people should be held accountable. I think we should be honest with ourselves if this is the case.
  • "After February 11th, 2025, Owen Cotton-Barratt will need to appeal to the boards of EV US and EV UK to participate in any overnight events or to be a member in an EV coworking space." I note there is no path for Cotton-Barratt to become a typical member of the community again. Personally I guess I want this for restorative justice. Such standards might be very high, but some notion of community forgiveness/normalisation is important to me.

Some context around the 5 domains (I think):

What are the Five Domains? The Five Domains framework is a way of thinking about animal welfare – going beyond just eliminating or minimising negative experiences to achieve a neutral state of animal welfare, and encouraging positive experiences in four functional domains, with the idea being that the output is the fifth domain, a mental domain. To determine an animal's wellbeing the following should be considered:

  • Nutrition
  • Environment
  • Health
  • Behavioural interactions
  • Mental state

For every physical experience an animal has, there may also be an effect on their mental wellbeing.

link here: https://www.rspcaqld.org.au/blog/animal-welfare/the-five-domains#:~:text=The%20Five%20Domains%20framework%20is,fifth%20domain%2C%20a%20mental%20domain.

What process does the RSPCA use to decide what their top priorities are?

Naah, I think I still disagree. I guess the median large consultancy or legal firm is much more likely to go after you for sharing stuff than the median small business, because they have the resources and organisational capital to do so, because their hiring allows them to find people who probably won't mind, and because they capture more of the downside and gain less from the upside.

I'm not endorsing this, but it's what I would expect from Rethink, OpenPhil, FTX, Manifold, 80k, Charity Entrepreneurship, Longview, CEA, Lightcone, MIRI. And looking at those orgs, it's what, Lightcone and Manifold that aren't normal-to-secretive in terms of internal information? Maybe I could be convinced to give MIRI/Longview a pass because their secrecy might be for non-institutional reasons, but "organisations become less willing to let random individuals speak their true views about internal processes as they get larger/more powerful" seems a reasonable rule of thumb, inside and outside EA.
