ElliotJDavies

1956 karma · Copenhagen, Denmark
Posts: 5 · Comments: 231

I see a dynamic playing out here: a user has made a falsifiable claim, I have attempted to falsify it, and you've attempted to deny that the claim is falsifiable at all.

I recognise it's easy to stumble into these dynamics, but we must acknowledge that this is epistemically destructive.

Strictly speaking your salary is the wrong number here.


I don't think we should dismiss empirical data so quickly when it's brought to the table - that sets a bad precedent. 
 

other costs of employing you (and I've seen estimates of the other costs at 50-100% of salary

I can also provide empirical data on this, if that is the crux here.

Notice that we are discussing a concrete empirical data point that represents a 600% difference, while you've given a theoretical upper bound of 100%. That leaves a 500% delta unexplained.

Keeping in mind that the pay for work tasks generally isn't that high

Would you be able to provide any concrete figures here?

In reality, the org of course values your work more highly than the amount they pay to acquire it

I view pointing to opportunity cost in the abstract as essentially an appeal to ignorance.

Not to say that opportunity costs don't exist, but you haven't concretised them, and that makes it hard to find the truth here.

I could make similar appeals to ignorance in support of my argument - like the idea that the benefit of getting a better candidate is very high, because candidate performance is fat-tailed, etc. - but I believe this would be similarly epistemically destructive. If I were to make such a claim, I would attempt to concretise it.

Completed this, but it was difficult!

It takes a significant amount of time to mark a test task. But this can be addressed by simply adjusting the height of the screening bar, rather than by using credentialist and biased methods (like looking at someone's LinkedIn profile or CV).

 

My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task

This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate's 1-hour test task, so my salary would need to be 6× higher (per unit time) than the test-task payment for this to be true.
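A minimal sketch of that break-even comparison, in Python. Only the 10-minute marking time and 1-hour task length come from my example above; both hourly rates are hypothetical placeholders:

```python
# Break-even check: does marking a test task cost more than paying for it?
# Only MARKING_MINUTES and TASK_HOURS come from the example above;
# both hourly rates are hypothetical placeholders.

MARKING_MINUTES = 10       # time spent marking one test task
TASK_HOURS = 1.0           # length of the test task
assessor_hourly = 60.0     # hypothetical hourly cost of the assessor
candidate_hourly = 25.0    # hypothetical rate paid to the candidate

marking_cost = assessor_hourly * (MARKING_MINUTES / 60)
payment_cost = candidate_hourly * TASK_HOURS

# With 10 minutes of marking per 60-minute task, marking only dominates
# when the assessor's rate exceeds 6x the candidate's rate.
print(f"marking: {marking_cost:.2f} USD, payment: {payment_cost:.2f} USD")
print("marking costs more" if marking_cost > payment_cost else "payment costs more")
```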

 

I also think the justice-implications of compensating applicants are unclear (offering pay for longer tasks may make them more accessible to poorer applicants)

This is a good point. 

I'd be curious to know the marginal cost of an additional attendee - I'd put it somewhere between 5 and 30 USD, assuming they attend all sessions.

Assuming you update your availability on Swapcard, and that you would get value out of attending a conference, I suspect attending is positive EV.
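As a rough sketch of that EV claim (every figure here is a hypothetical placeholder except the 5-30 USD marginal-cost range above):

```python
# Rough EV sketch for attending; all figures are hypothetical
# placeholders except the 5-30 USD marginal-cost range above.
marginal_cost_low, marginal_cost_high = 5.0, 30.0  # USD, from the estimate above
p_valuable_meeting = 0.5    # hypothetical chance of at least one valuable meeting
value_of_meeting = 200.0    # hypothetical value of that meeting, in USD

ev_worst = p_valuable_meeting * value_of_meeting - marginal_cost_high
ev_best = p_valuable_meeting * value_of_meeting - marginal_cost_low
print(f"EV range: {ev_worst:.0f} to {ev_best:.0f} USD")  # positive under these assumptions
```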

Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you're likely to accept fewer, lower-variance candidates into the test-task stage. Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but should give all candidates that pass an anonymised screening bar the chance to complete a test task - as sketched below.
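A minimal sketch of that pipeline, assuming a single anonymised screening score; the threshold and field names are hypothetical illustrations:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    id: str
    screening_score: float  # from an anonymised screening exercise, 0-1
    is_top_candidate: bool  # e.g. flagged once the test task is marked

SCREENING_BAR = 0.7  # hypothetical threshold; adjust to control marking load

def invite_to_test_task(candidates: list[Candidate]) -> list[Candidate]:
    """Everyone above the anonymised bar gets the task - no CV/LinkedIn filter."""
    return [c for c in candidates if c.screening_score >= SCREENING_BAR]

def pay_for_task(candidate: Candidate) -> bool:
    """Orgs can still pay top candidates if that measurably cuts attrition."""
    return candidate.is_top_candidate
```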

I read Saul's comment as discussing two different events: one he was uninvited to; the other he could have attended had he so wished.

potential employers, neighbors, and others might come across it

I think saying "I am against scientific racism" is within the Overton window, and it would be extraordinarily unlikely for someone to be "cancelled" as a result of saying it. This level of risk aversion is straightforwardly deleterious for our community and wider society.

While I'm cognizant of the downsides of a centralized authority deciding what events can and cannot be promoted here, I think the need to maintain sufficient distance between EA and this sort of event outweighs those downsides.


Can I also nudge people to be more vocal when they perceive there to be a problem? I find it extremely common that, when a problem is unfolding, nobody says anything.

Even the post above was posted anonymously. I see this as part of a wider trend where people don't feel comfortable expressing their viewpoints openly, which I don't think is healthy.

Sentient AI · AI Suffering

Biological life forms experience unequal (asymmetrical) amounts of pleasure and pain. This asymmetry is important. It's why you cannot make up for starving someone for a week by giving them food for a week. 

This is true for biological life because a selection pressure was applied (evolution by natural selection). This selection pressure is necessitated by entropy: it's easier to die than it is to live. Many circumstances result in death; only a narrow band of circumstances results in life. Incidentally, this is why you spend most of your life in a temperature-controlled environment.

The crux: there's no reason to think a similar selection pressure is being applied to AI models. LLMs, if they were sentient, would be as likely to enjoy predicting the next token as to dislike it.
