DT

David T

1457 karma · Joined

Comments: 267

It's a niche property with very few potential buyers, and they probably overpaid. The UK property market has cooled a little and isn't necessarily as attractive to the sort of oligarch or hospitality company most likely to buy it. It will also have a very high annual maintenance bill due to its age; it's possible they found more work that needed doing, which hit the value, and hanging onto it would have had a non-trivial cost regardless. It's also possible they clawed some money back from selling off some parts separately (there was an apartment in a Wytham Abbey outbuilding for sale last year, though I'm not sure EA ever owned that building).

It's not particularly unusual for buyers of niche high-value property to make large losses when under a little pressure to sell, especially if they bought into the property's sentimental appeal rather than weighing its costs. The people who very confidently dismissed the idea that Wytham would be a losing venture with "don't you guys know what a capital investment is?" understood the market dynamics and operational costs less well than many of the critics.

I mean, if you want to buy up a pretty hotel with a bus service to Oxford, you can get more bedrooms for less money than Wytham was sold for, never mind what it was bought for https://www.rightmove.co.uk/properties/739022765321024#/?channel=COM_BUY  

What's labelled Asimov's Corollary here is actually Parkinson's Law.

Asimov's Corollary, which is pretty neat but completely different and not nearly as pithy, is explained here. As a fan of Clarke's laws, I'm sure Peter likes that one too.
 

-


I also haven't seen or heard anyone refer to Mandelson's law before? Although the quote is certainly ironic in the context of UK politics (Peter Mandelson is a political figure in the headlines for his third resignation in disgrace having been brought back into the fold 25 years after his last one; the struggling Prime Minister is probably sick of trying to explain his appointment to an unreceptive public by now....)

It also feels like smugglers helping regular smokers get discounts on their habit are the wrong model, since the target for the ban is young people who generally don't [yet] have a smoking addiction. Basically everyone else buys cigarettes legally in convenience stores, and teenagers already barely smoke them, with the trend steeply downwards since the turn of the century. Kids who don't have a smoking habit and increasingly aren't interested in trying are barely a demand factor for underground cigarettes, especially since they can also obtain them by asking an older person to purchase them in a regular convenience store, the same as 15-year-olds wanting to experiment with cigarettes and alcohol have done for years.

It just places more of a barrier to their getting cigarettes regularly in quantities likely to become habit-forming. And ultimately, apart from being highly addictive when people do that, tobacco doesn't have much appeal as a drug, offering minimal high and being something grandma smokes and teen idols don't, so it really doesn't seem like something that a few years down the line will result in speakeasies full of chainsmoking twentysomethings or a new sideline for dealers in cocaine.

I can understand that prediction markets trending towards sports gambling was overlooked by the sort of people who (i) were arguing about the creation of new markets for stuff that didn't already have an abundance of well-marketed gambling options and (ii) generally weren't personally interested in betting on sports.

But lack of debate at the time about preying on people with weak impulse control surprises me, given that prediction markets are by design zero-sum games (like binary options trades, which have long been dominated by boiler room outfits preying on people with weak impulse control, and are regulated equivalently to sports gambling in many jurisdictions) rather than trades between entities with different liquidity needs or risk tolerance. Even if (or especially if) some of the market participants are superforecasters whose data-driven approach makes their assessments worth paying attention to, they still need punters to win money off and most bets offered simply aren't a good hedge for anything.

Think this is a good post. But the specific example below feels like such a mischaracterization of typical debate that it might actually illustrate part of the problem.

The problem with many of the naive objections to EA is that they are not matters of opinion. A favorite objection of the uninitiated is "but how can you KNOW that doing X is better than doing Y? I don't think it's possible to know, so you can choose arbitrarily between X and Y." It is not a matter of opinion, for example, that giving $50,000 to train a single seeing eye dog is less effective at combating blindness than curing literally 1,000 people of cataracts.

Whilst you may have experienced specific counterexamples, I think relatively few objectors to EA are questioning the principle that restoring the sight of 1000 people is plausibly a preferable outcome to moderately aiding one blind person.[1] They're objecting to generalising from this to what EA actually is, which is a much bolder set of claims and priorities than "sometimes you can help more people achieve the same ends with the same amount of money".

tbh even if they are explicitly asking an EA to state something as fundamental as how they can treat helping 1000 blind people > 1 blind person as established fact, if they're someone who's ever had a conversation with an EA before there's a decent chance they're doing so to ward off the EA promptly substituting 2000 chickens or conveniently imputed probability claims into the equation! And, in general, I think people asking why EA is so convinced it's so much better at prioritising, and insisting philanthropic choices are arbitrary, aren't doing so because they don't think cost-benefit analysis can be done in any circumstances;[2] they're asking because of how many other assumptions are necessarily smuggled in to even attempt cause neutrality (or because they think the assumptions smuggled in are wrong,[3] or just not actually that neutral).

Obviously this isn't because EAs refuse to debate their assumptions at all. Indeed they love nothing more than respectful debate with people who like to state their own priors and utility ranges, and even someone questioning the validity of the whole EA utilitarian consequentialist framework might get entertained, so long as they establish themselves as suitably intellectual by explaining their own ethical framework first. But there's a strong tendency to assume that if people don't come armed with the right jargon and specific alternatives, or at least have the decency to state that their question is axiological, they must just not understand anything, or actually reject the whole edifice of science.[4] I think even when delivered without a hint of condescension, that sort of talking past people is more annoying than the bluntness.[5]

-

Similarly, I don't think advising people midway through medical school to retrain as AI researchers is merely a form of fanaticism just unrealistic enough to be annoying (though it's an excellent example of that). It's also a fundamental error based on evaluating outsider professions differently from the recommended ones: completely forgetting that if we're dismissing the impact of choosing to become a doctor by comparing them with the counterfactual of the next best applicant, as 80,000 Hours says, we should treat the impact of the alternative career the same way. Which means the impact of a couple of medical students career-changing to the even more competitive profession of AI safety should be measured by how much more likely they are to solve AI alignment than the multiple CompSci graduates who've been obsessing over it for five years applying for the same positions. Even if they somehow find a job in that field they're unlikely to tangibly change it.[6] It's not just the fanaticism producing recommendations that are annoying or too late; it's also that cargo-culting the arguments produces recommendations that are plain bad.[7]

 

  1. ^

    There are obviously also explicit arguments for preferring to help one person over 1000s far away based on axiological assumptions about duty to particular communities or individuals, but yeah, those arguments don't tend to be made implicitly in the form of a question....

  2. ^

    again, people (including EAs) sometimes have arguments against applying it to particular fields and evidence bases which are really good, but they tend to make them explicitly

  3. ^

    there are lots of epistemological questions about recommendations that have a better answer than "don't you know the difference between opinion and fact?" too: the better EA analyses are explicit about their evidence base, the extent to which it represents a knowledge claim, and their level of confidence that X actually does deliver Z more cost-effectively than Y

  4. ^

    I'm not saying bad questions don't exist, I'm saying that reasonable ones often get framed as unreasonable ones because lecturing people about how cost benefit analysis is just maths and they surely aren't arguing with maths is easier than justifying the use of "moral weights"

  5. ^

    tbh online at least I think EA's style of debate is more likely to be accused - fairly and otherwise - of annoyingly polite persistence

  6. ^

    nor will the modal AI researcher...

  7. ^

    tbf the other default EA recommendation of doing medical research instead probably isn't a bad suggestion in this instance. Or maybe they should study how to use AI to do medical research to make everyone happy ;)


I think the main benefit of applying to jobs you don't think you'll get remains the possibility you might actually get them.

If you're applying for feedback you'll usually be disappointed though. Most organizations offer little feedback due to a mix of caution, other things to do and the tendency of "honestly you were pretty OK, the other candidates just stood out as better in a mix of different ways" to be the actual answer, which isn't really actionable.

Credulous really is the right word. There is a strand of dialogue in EA circles that feels like "we called much of this many years ago", therefore "everything that transpires will mimic our thought experiments perfectly." The marketing from frontier labs is the offspring of early EA/LW ideas. The potential for confirmation bias here is astronomical.

I also think that when it comes to assessing whether they're overly trusting of the claims of frontier labs because it fits their broader views, it's probably more relevant that EAs generally believed Altman and Musk when they said they were founding OpenAI to do philanthropic research, when basically everybody else understood what they were really trying to do, than that EAs correctly called transformers being a big deal when the average computer scientist was a bit more cautious.

GPT2 was "too dangerous to release" as a marketing strategy too.

I think human responses to pain are more complex than just thresholds too. In addition to your gentle scratching, or the near-zero amount of pain registered when applauding, humans voluntarily inflict non-trivial amounts of pain upon themselves for entertainment and self-worth, and accept the inevitability of pain as part of their hobbies. Is it meaningful to argue that a certain number of tattoo artists is more objectionable than a smaller number of puppy kickers, or that selecting someone to play football for 90 minutes, which will aggravate their modest limb pain, is morally more dubious than intentionally kicking them on that limb? I don't think so.

And that's not to mention the benefits of pain, which such discussions ignore. For many millennia, one of the most feared diseases was leprosy, which manifests itself in numbness: sufferers accumulate injuries precisely because they can't feel the pain that would otherwise protect them.

When it gets to complex organisms and torture, much of it is measured as psychological responses rather than intensity of nervous sensation too, which also doesn't align well with a neat cardinal pain scale; techniques like waterboarding actually trigger objectively useful reflexes rather than inflicting neurological pain. Is there a certain amount of cumulative experience of unexpected droplets in the nasal system, aggregated across many people, that approximates a single instance of waterboarding one individual? I would say no. I'd probably also say that there isn't a certain amount of aggregated BDSM kink that's worse than one actual torture chamber...

I'd go further and say that the FTX-ish reputation "EA is where extremely wealthy Silicon Valley nerds brag about their generosity whilst mostly funnelling money to people like them and using it as an avenue for self-promotion" also attracts the wrong sort of people - before there were people complaining about FTX being a scam, there were people complaining about the perceived ease of getting funding by FTX attracting the insincere.

(Other negative EA stereotypes contribute to putting well-intentioned people off, but I'm not sure they actually attract the wrong people)

I'm not sure what you're imagining here. If you give people a trolley problem (only via text) and say on one track, there's a dog and on the other one, there's a computer program Eliza and they can chat to either, most would choose to save the dog, even if its only text output were "whoof whoof".

What I'm imagining, which I evidently didn't make clear enough, is not a trolley problem but simply trying to discern whether something else is conscious without knowing whether it has "faces, limbs, fur and a squishy body", such as reading its output [if any] over a remote computer terminal.[1] In these circumstances, not only will humans be unable to find any grounds for empathy with almost all sentient beings, but they will find plenty of grounds to empathise with or at least attribute motivation and intent to software programs.[2] So in the absence of context there's definitely a bias in mind attribution towards symbol manipulators; even trivially simple ones that merely mimic or perform arithmetic.

On the other hand, it seems like the "faces, limbs, fur and a squishy body" are actually a relatively useful heuristic, especially since adults are seldom deceived by taxidermy or cuddly toys in comparison to how easily they're impressed by "cheap parlour trick" level AI.

Are people more likely to empathise with entities with "faces, limbs, fur and a squishy body" than with disembodied entities with apparent facility at symbol manipulation? Possibly, though I think this varies,[3] but the relevant question is: are people more likely to misplace their empathy in imputing basic consciousness to other mammals, or in imputing heightened consciousness to anything that can beat them at chess?

The level of misattribution matters too. Our sense of empathy anthropomorphises dogs by overestimating their grasp of language and underestimating the extent to which they are motivated by smell, and anthropomorphises irritating repetitive hardcoded chatbots by assigning meaning and motivation which simply doesn't exist.

Most non-dualists would say consciousness is a feature of information processing (functionalists, illusionists, non-reductive materialists) or something as fundamental as physics (Russellian monism, pan(proto)psychism)...The phrase "rooted in [biochemical processes]" is the least controversial but it still connotes something most might not endorse - i.e. that biology and chemistry is the correct category or level of description

The "rooted in biochemical processes" is the bit I'm aiming for here; I am not aware of a non-dualist theory which roots human cognition in something other than the biochemical processes of the body (I don't think the biochemical processes of the body themselves particularly care whether philosophers of mind label them as the consequence of evolutionary imperatives, physics, function, or illusion).[4] Perhaps I can only be fully confident of my own consciousness, but its relationship with my biochemistry and physiology does at least come with a bunch of hypotheses originally tested on similar organisms (albeit many of those hypotheses I'd rather not test...).

 

  1. ^

    Like Turing's eponymous test, only not explicitly a test. Eliza might convince a human not primed to look for evidence that it's just a shoddy computer program that its outputs represent a stream of conscious thought; nobody's going to try to follow the trains of thought in a dog's typing...

  2. ^

    including those [near]-universally agreed to be too simple to have any sort of motivation, intent or consciousness

  3. ^

    No shortage of people who have developed feelings for ChatGPT, and I bet most of them eat cute farm animals

  4. ^

    or indeed if between 93% and 99.9% of them live lifestyles too chaste or avant garde to acknowledge the possibility that changes in their hormone balance and neurological state associated with [the prospect of] sex might be linked to evolutionary imperatives to reproduce ;-)

  5. ^

    I can experimentally verify claims made about how certain changes to my biochemistry or physiology would affect my consciousness, though in most cases I'd rather not :)  
