huw

1220 karma · Working (0–5 years) · Sydney NSW, Australia
huw.cool

Bio


I live for a high disagree-to-upvote ratio

Comments (173)

Military applications of AI are not an idle concern. AI systems are already being used to increase military capacity by generating and analysing targets faster than humans can (and in this case, seemingly without much oversight). Palantir’s own technology likely also allows police organisations to defer responsibility for racist policing to AI systems.

Sure, for the most part, Claude will probably just be used for common requests, but Anthropic have no way of guaranteeing this. You cannot enforce that guarantee by policy, especially when it’s running on Amazon hardware that you don’t control and can’t inspect. Ranking agencies by ‘cooperativeness’ should also be taken as lip service until Anthropic have a proven mechanism for doing so.

So they are revealing that, to them, AI safety doesn’t mean trying to prevent AI from doing harm, only trying to prevent it from doing unintended harm. This is a significant moment for them, and I fear what it portends for the whole industry.


If you’re inclined to defend Scott Alexander, I’d like to figure out where the crux is. So I’ll try to lay out some standards of evidence that I would need in order to update my own beliefs after reading this article.

If you believe Scott doesn’t necessarily believe in HBD, but does believe it’s worth debating/discussing, why has he declined to explicitly disown or disavow the Topher Brennan email?

If you believe Scott doesn’t believe HBD is even worth discussing, what does he mean by essentially agreeing with the truth of Beroe’s final paragraph in his dialogue on ACX?

For both: why would he review Richard Hanania’s book on his blog without once mentioning Hanania’s past and recent racism? (To pre-empt ‘he’s reviewing the book, not the author’: the review’s conclusion is entirely about determining Hanania’s motivation for writing it.)

If you believe Scott has changed his positions, why hasn’t he shouted from the rooftops that he no longer believes in HBD / debating HBD? This should come with no social penalty.

I would point to Julia Wise’s comments to Thorstad in this article as the kind of statement I would expect from Scott if he did not believe in HBD and/or in discussing HBD.


This is an awesome post, and it's a strong update in the direction of EV & CEA being much more transparent under your leadership. Very keen on hearing more from you in the future!


One other risk vector to EV stood out to me as concerning, but went somewhat unaddressed in this post. Consider:

- EV was in a financial crisis; it had banked on receiving millions from FTX over the coming years.

- If a fraudulent or otherwise problematic individual hasn't been caught by the legal system, EV's donor due-diligence tools may not catch them either.

I worry that the focus on legal risks potentially misses a counterfactual here in which a funding source is systemically disrupted. EV was not just banking on FTX staying solvent and non-fraudulent; it was also implicitly depending on cryptocurrency remaining frothy (the same can be said of EA more broadly, especially the long-term-risk cause areas). Even had FTX not been fraudulent, I still think it's likely that cryptocurrency would have collapsed over the following years. Assuming that the LTFF was receiving a proportion of FTX's funds, this could still have meant more than a 50% drop in funding from FTX (for example, Ethereum lost roughly three-fifths of its market cap between November 2021 and November 2022).
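To spell out the arithmetic behind that figure, here is a minimal sketch, assuming (purely for illustration) that FTX's donation capacity F scaled linearly with crypto prices, with Ethereum's decline standing in as a proxy for the whole market:

```latex
% Minimal sketch under the stated assumptions: donation capacity F
% scales linearly with crypto prices, and Ethereum's ~3/5 market-cap
% decline (Nov 2021 to Nov 2022) proxies the whole crypto market.
F_{\text{Nov 2022}} = \left(1 - \tfrac{3}{5}\right) F_{\text{Nov 2021}} = 0.4\, F_{\text{Nov 2021}}
% i.e. roughly a 60% drop in funding, comfortably more than 50%.
```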

You note:

Guardrails to prevent projects from running out of funding in a disorderly way and runway requirements to maintain resilience to possible future crises.

I would love to understand more about these financial controls. I can imagine that EV could probably withstand a sudden halving in funding from a major donor by reallocating funding between projects, which is probably what's alluded to here.

(It's outside the scope of this post, but I'm not so sure that the broader long-term-risk cause areas could have withstood this, and indeed, in the present scenario many organisations did not. I sort of worry about this kind of systemic risk with Anthropic, who could be hit quite hard if the current AI bubble starts winding down, even if they aren't directly responsible for it; I'm sure there are others.)

  1. Substack has a good recommendations algorithm, which will hopefully recommend people other EA relevant content (this feels complementary with the thing above, where it’s facilitating some cross-flow of users between our owned channels and substack)

This same recommendation algorithm, combined with Substack’s deliberately permissive approach to moderation, is the reason a number of prominent publications have left the platform: they don’t want to be recommended to neo-Nazis or have neo-Nazis recommended to them. Here’s Casey Newton’s full explanation, which I think is reasonable and was well received by his largely mainstream, largely popular audience. Despite Substack’s popularity in these circles, I think it is directly valuable not to have these recommendation effects, as well as indirectly valuable from an optics perspective. (Regardless, I figured you might not be aware of this controversy at all.)

In the same vein as Casey, I think that you could achieve almost all of the benefits outlined in that document with a different provider, such as Ghost.


For those who have been following this: is he serious, or is this just lip service, and he's blocking it because he was lobbied by people in the tech industry?


During my last burnout, I realised that trying to push my working hours to breaking point was making me substantially less happy (because of the constant pressure of the question 'could I be working right now?') and substantially less productive (because, without time to breathe, I was too focused on the wrong tasks). Cutting myself some slack has really, genuinely improved the volume (quantity × quality) of value I produce. I don't think you need to accept a cop-out answer like 'just be happy'; I believe the conventional wisdom on knowledge and creative work is culturally over-moralised around 'hard work' (and particularly around working hours, in an American context), and isn't optimal for productivity. It just takes some time to shake it out of your system.

I can really recommend Four Thousand Weeks by Oliver Burkeman (or his Waking Up course), or Rework by Jason Fried & DHH for more pointers in this direction that have helped me 😌

I was wondering about the conservative value of environmental conservation. I've noticed that some conservatives really seem to value nature itself (often, but not always, from a religious perspective; there's a wide range here), which I would have presumed could translate into a view of protecting animals as part of nature (rather than for the instrumental value of protecting against climate change, which is more popular on the left). Why did this value not make it? Is it just that U.S. conservatives need to protect and promote the agricultural industry, and directly opposing it won't fly?

I really do wonder to what extent the non-profit and then capped-profit structures were genuine, or just ruses intended to attract top talent that were always meant to be discarded. The more we learn about Sam, the more confusing it is that he would ever accept a structure that he couldn’t become fabulously wealthy from.

Just to check: is the 230 mg target additive to sodium, or substitutive of it? I can imagine the interventions would look different if we merely had to fortify food versus starting to replace sodium.

In general, I’m hugely in favour of EA considering this (and similar interventions, like mandating or favouring sugar substitution). Health issues that face rich countries today are likely already facing poor countries at scale, and will only become relatively worse as we solve other problems.

It seems like some of the biggest proponents of SB 1047 are Hollywood actors and writers (e.g. Mark Ruffalo); you might remember them from last year’s strike.

I think that the AI Safety movement has a big opportunity to partner with organised labour, the way the animal welfare side of EA partnered with vegans. Unions are massive organisations with a lot of weight and mainstream power; if we can find ways to work with them, it’s a big shortcut to building serious groundswell rather than going it alone.

See also Yanni’s work with voice actors in Australia. More of this!
