Ben Millwood🔸

4424 karma · Joined

Participation

  • Attended an EA Global conference
  • Attended an EAGx conference
  • Attended more than three meetings with a local EA group

Comments (504)

A DuckDuckGo search[1] for my name turns up the EA Forum as the second[2] result, so I think it's pretty easy for future employers to find what I write here and take it into consideration, even if they don't think the venue is important in itself.

[1]: A Google search would be more relevant, but I expect that to be more distorted by Google's knowledge of what I typically search for, whereas I guess DuckDuckGo will show me something more similar to what it would show other people.

[2]: weirdly also the third and fourth

Credit to AGB for (in this comment) reminding me where to find the Scott Alexander remarks that pushed me a lot in this direction:

> Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality. I don’t spend much time worrying about any of these, because I think they’ll take a few generations to reach crisis level, and I expect technology to flip the gameboard well before then. But if we ban all gameboard-flipping technologies (the only other one I know is genetic enhancement, which is even more bannable), then we do end up with bioweapon catastrophe or social collapse. I’ve said before I think there’s a ~20% chance of AI destroying the world. But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela. That doesn’t mean I have to support AI accelerationism because 20% is smaller than 50%. Short, carefully-tailored pauses could improve the chance of AI going well by a lot, without increasing the risk of social collapse too much. But it’s something on my mind.

(emphasis mine)

My original shortform tried to be measured / neutral, but I want to also say I found this passage very alarming when I first read it, and it's wildly more pessimistic than I am by default. I think if this is true, it's really important to know, but I made my shortform because if it's false, that's really important to know too. I hope we can look into it in a way that moves the needle on our best understanding.

Does this mean everything that we used to call "10x cash" we're now calling "3x cash"?

very inconsiderate of GiveDirectly to disrupt our benchmarks like this 😛

> I see a dynamic playing out here, where a user has made a falsifiable claim, I have attempted to falsify it, and you've attempted to deny that the claim is falsifiable at all.

My claim is that the org values your time at a rate that is significantly higher than the rate they pay you for it, because the cost of employment is higher than just salary, and because the employer needs to value your work above its cost for them to want to hire you. I don't see how this is unfalsifiable. Mostly you could falsify it by asking orgs how they think about the cost of staff time, though I guess some wouldn't model it as explicitly as this.

They do mean that we're forced to estimate the relevant threshold instead of having a precise number, but a precise wrong number isn't better than an imprecise (but closer to correct) number.

> Notice that we are discussing a concrete empirical data point, that represents a 600% difference, while you've given a theoretical upper bound of 100%. That leaves a 500% delta.

No: if you're comparing the cost of 10 minutes of work at salary X against 60 minutes of work compensated at rate Y, and I argue that salary X underestimates the cost of your work by a factor of 2, then your salary only needs to be more than 3 times the work trial compensation, not 5 times. The correction is multiplicative, not additive.
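Spelling out the arithmetic (a sketch using only the numbers already in this exchange, with the factor-of-2 correction taken as given for illustration): staff time is the larger part of the assessment cost exactly when

$$10 \cdot 2X > 60 \cdot Y \iff X > 3Y,$$

i.e. doubling the estimated cost of staff time halves the required salary multiple (from 6 to 3), rather than subtracting 100 percentage points from 600%.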

When it comes to concretising "how much does employee value exceed employee costs", the answer probably varies a lot from organisation to organisation. I think there are several employers in EA who believe that, past a point, paying more doesn't really get you better people. This allows their estimates of the value of staff time to exceed employee costs by enormous margins, because there's no mechanism coupling the two together. I think when these differences are very extreme we should be suspicious about whether they're really true, but as someone who has had to compare earning to give with direct work multiple times, I've frequently asked orgs "how much in donations would you need to prefer the money over hiring me?", and for difficult-to-hire roles they often name numbers dramatically larger than the salary they're offering.

This means that your argument is not going to be uniform across organisations, but I don't know why you'd expect it to be: surely you weren't saying that no organisation should ever pay for a test task, but only that organisations shouldn't pay for test tasks when doing so increases their costs of assessment to the point where they choose to assess fewer people.

My expectation is that if you asked orgs about this, they would say they already don't choose to assess fewer people based on the cost of paying them. This seems testable, and if it's true, it seems to me that it makes pretty much all of the other discussion irrelevant.

> So my salary would need to be 6* higher (per unit time) than the test task payment for this to be true.

Strictly speaking your salary is the wrong number here. At a minimum, you want to use the cost to the org of your work, which is your salary + other costs of employing you (and I've seen estimates of the other costs at 50-100% of salary). In reality, the org of course values your work more highly than the amount they pay to acquire it (otherwise... why would they acquire it at that rate) so your value per hour is higher still. Keeping in mind that the pay for work tasks generally isn't that high, it seems pretty plausible to me that the assessment cost is primarily staff time and not money.
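To make this concrete, here's a minimal sketch of the comparison (made-up numbers: the salary, overhead fraction, and test-task pay rate below are illustrative assumptions, not figures from this discussion; only the 10-minute / 60-minute example and the 50-100% overhead range come from the thread):

```python
# Compare the org's cost of staff time spent assessing a test task with the
# money paid to the candidate for completing it. Illustrative numbers only.

def full_hourly_cost(salary_per_hour: float, overhead_fraction: float) -> float:
    """The org's cost of an hour of staff time: salary plus other employment costs."""
    return salary_per_hour * (1 + overhead_fraction)

staff_salary_per_hour = 90.0   # assumed
overhead_fraction = 1.0        # top of the 50-100% range mentioned above
assessment_minutes = 10        # staff time spent assessing (example figure from this thread)
task_minutes = 60              # candidate time paid for (example figure from this thread)
task_pay_per_hour = 25.0       # assumed test-task pay rate

staff_time_cost = full_hourly_cost(staff_salary_per_hour, overhead_fraction) * assessment_minutes / 60
candidate_payment = task_pay_per_hour * task_minutes / 60

# Staff time dominates when salary > (task_minutes / assessment_minutes) / (1 + overhead)
# times the test-task pay rate, i.e. 6 / 2 = 3x in this example.
breakeven_multiple = (task_minutes / assessment_minutes) / (1 + overhead_fraction)

print(f"staff time cost of assessment: {staff_time_cost:.2f}")    # 30.00 with these numbers
print(f"payment to the candidate:      {candidate_payment:.2f}")  # 25.00 with these numbers
print(f"salary must exceed {breakeven_multiple:.0f}x the test-task pay rate for staff time "
      f"to dominate; here it is {staff_salary_per_hour / task_pay_per_hour:.1f}x")
```

Whether staff time or the payment dominates obviously depends on the numbers; the point is just that the comparison should use the full cost of staff time, not bare salary.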

For Pause AI or Stop AI to succeed, pausing / stopping needs to be a viable solution. I think some AI capabilities people who believe in existential risk may be motivated by the thought that the risk of civilisational collapse is high without AI, so it's worth taking the risk of misaligned AI to prevent that outcome.

If this really is cruxy for some people, it's possible it goes unnoticed because people take it as a background assumption and don't tend to discuss it directly, so they don't realise how much they disagree and how crucial that disagreement is.

Scope insensitivity has some empirical backing (e.g. the helping birds study) and some theorised mechanisms of action (e.g. people lacking an intuitive understanding of large numbers).

Scope oversensitivity seems possible in theory, but I can't think of any similar empirical or theoretical reasons to think it's actually happening.

To the extent that you disagree, it's not clear to me whether it's because you and I disagree on how EAs weight things like animal suffering, or whether we disagree on how it ought to be weighted. Are you intending to cast doubt on the idea that a problem that is 100x as large is (all else equal) 100x more important, or are you intending to suggest that EAs treat it as more than 100x as important?

While "My experience at the controversial Manifest 2024" (and several related posts) wasn't explicitly about policies or politicians, I think it's largely the underlying political themes that made it so heated.

I have a broad sense that AI safety thinking has evolved a bunch over the years, and I think it would be cool to have a retrospective of "here are some concrete things that used to be pretty central but that we now think are either incorrect or at least incorrectly focused".

Of course it's hard enough to get a broad overview of what everyone thinks now, let alone what they used to think but discarded.

(this is probably also useful outside of AI safety, but I think it would be most useful there)

I feel like my experience with notifications has been pretty bad recently – something like, I'll get a few notifications, go follow the link on one, and then all the others will disappear and there's no longer any way to find out what they were. Hard to confidently replicate because I can't generate notifications on demand, but that's my impression.
