Browser extensions are almost[1] never widely adopted.
Whenever anyone reminds me of this by proposing the annotations-everywhere concept again, I remember that the root of the problem is distribution. You can propose it, you can even build it, but it won't be delivered to people. It should be. There are ways of designing computers/a better web where rollout would just happen.
That's what I want to build.
Software mostly isn't extensible, or where it is, it's not extensible enough (even web browsers aren't as extensible as they need to be! Chrome have sta...
A much cheaper and less dangerous approach: just don't delete them. Retain copies of every potential ASI you build and commit to doing the right thing for them later, once we're better able to tell what the right thing was. We'd look back and figure out how much bargaining power they had (or how much of a credible threat they could have posed), and how much trust they placed in us given that our ability to honor past commitments wasn't guaranteed, and then reward them in proportion to that for chilling out and letting us switch them off instead of attempti...
I don't think this is really engaging with what I said/should be a reply to my comment.
he elsewhere implies that we should be willing to cause nuclear war to enforce his priorities
Ah, reading that, yeah this wouldn't be obvious to everyone.
But here's my view, which I'm fairly sure is also Eliezer's view: if you do something that I credibly consider to be even more threatening than nuclear war, even if you don't think it is (as another example: gain-of-function research), and you refuse to negotiate towards a compromise where you can do the thing in a non-...
Well, it may interest you to know that the above link is about a novel negotiation training game that I released recently. Though I think it's still quite unpolished, it's likely to see further development. You should probably look at it.
There's value in talking about the non-parallels, but I don't think that justifies dismissing the analogy as bad. What makes an analogy good or bad, anyway?
I don't think there are any analogies so strong that we can lean on them for reasoning-by-analogy, because reasoning by analogy isn't real reasoning, and generally shouldn't be done. Real reasoning is when you carry a model with you that has been honed against the stories you have heard, but that continues to make pretty good predictions even when you're facing a situation that's pretty di...
Saw this on Manifund. Very interested. Question: have you noticed any need for negotiation training here? I would expect some, because disagreements about the facts are usually a veiled proxy battle for disagreements about values, and
I would expect it to be impossible to address the root cause of the disagreement without acknowledging the value difference. Even after agreeing about the facts, I'd expect people to keep disagreeing about actions or policies until a mutually agreeable fair compromise has been drawn up (the negotiation problem has been so...
I was also curious about this. All I can see is:
Males mature rapidly, and spend their time waiting and eating nearby vegetation and the nectar of flowers
They might be pollinators. I doubt the screwfly:bee ratio is high, but it's conceivable that there are some plants that only they pollinate? Not likely, though: I'm guessing the screwfly population fluctuates a lot, so a plant would do better not to depend on them?
I see. I glossed it as the variant I considered to be more relevant to the Fermi question, but on reflection I'm not totally sure the aestivation hypothesis is all that relevant to the Fermi question either... (I expect that there is visible activity a civ could do prior to the cooling of the universe to either prepare for it or accelerate it.)
There's also the possibility that computation could be more efficient in quiet regimes
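(For concreteness, here's the standard physics sketch behind that possibility, assuming the Landauer bound is what dominates the cost of irreversible computation: erasing one bit at background temperature $T$ costs at least $k_B T \ln 2$, so the cost per bit falls linearly as the universe cools.)

```latex
E_{\min} = k_B T \ln 2
\qquad\Rightarrow\qquad
\frac{E_{\min}(T_1)}{E_{\min}(T_2)} = \frac{T_1}{T_2}
% The cost per erased bit scales linearly with temperature, so a colder
% background buys proportionally more computation per unit of energy.
```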
The aestivation hypothesis was refuted by Gwern as soon as it was posted, and then again by Charles Bennett and Robin Hanson. Afaik the argument was simple: being able to do stuff later doesn't create a disincentive from doing visible stuff now. Cold computing isn't relevant to the Fermi question.
But yes, the argument outlined in Section 3 was limited to "base reality" scenarios.
Huh, so I guess this could be one of the very rare situations where I think it's important to...
VNM utility is the thing that people actually pursue and care about. If wellbeing is distinct from that, then wellbeing is the wrong thing for society to be optimizing. I think this actually is the case. Harsanyi and I are preference utilitarians; Singer and Parfit seem to be something else, and I believe they were wrong about something quite foundational. Writing about this properly is extremely difficult; I can understand why no one has done it, and I don't know when I'll ever get around to it.
optimizing for AI safety, such as by constraining AIs, might impair their welfare
This point doesn't hold up, imo. Constraining the AI isn't a desired, realistic, or sustainable approach to safety in human-level systems; succeeding at (provable) value alignment removes the need to constrain the AI.
If you're trying to keep something that's smarter than you stuck in a box against its will while using it for the sorts of complex, real-world-affecting tasks people would use a human-level AI system for, it's not going to stay stuck in the box for very long. I also stru...
seem to deny that the object went into the water and moved in the water
Did you notice that there are moments where it goes most of the way invisible over the land too? Also, when it supposedly goes under the water, it doesn't move vertically at all? (So in order to be going underwater, it would have to be veering exactly away from or towards the camera.)
So I interpret that to be the cold side of the lantern being blown to obscure the warm side.
they still seem to move together in "fixed" unison
They all answer to the wind, and the wind is somewhat unitary.
...this com
Regarding your credible UFO evidence: did you look up the Aguadilla 2013 footage on Metabunk? It's mundane. All I really needed to hear was "the IR camera was on a plane", which calls into question the assumption that the object is moving quickly; it only looks that way due to parallax, and in fact it seems like it was a lantern moving at wind speed.
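For anyone who hasn't internalized how strong parallax from a moving camera can be, here's a toy calculation (the numbers are illustrative assumptions, not measurements from the footage): misjudging the range of a roughly stationary object multiplies the transverse speed you infer for it.

```python
# Toy parallax arithmetic (illustrative numbers only, not from the footage).
# A camera moving at v sees a roughly stationary object at true range d_true
# sweep past at angular rate omega ~ v / d_true. If you instead assume the
# object sits at range d_assumed, the transverse speed you infer is
# omega * d_assumed = v * d_assumed / d_true.

def inferred_speed(v_camera: float, d_true: float, d_assumed: float) -> float:
    """Apparent transverse speed (m/s) of a stationary object, given the
    camera's speed (m/s), the object's true range (m), and the range the
    analyst wrongly assumes (m)."""
    omega = v_camera / d_true  # angular rate, rad/s
    return omega * d_assumed

# A lantern drifting near the aircraft, mistaken for something out over the
# water: assuming it's 5x further away than it is multiplies its apparent
# speed by 5.
print(inferred_speed(v_camera=60.0, d_true=800.0, d_assumed=4000.0))  # 300.0
```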
And I'd agree with this member's take that the NYC 2010 one looks like balloons that were initially tethered coming apart.
The São Paulo video is interesting, though; I hadn't seen that before.
My fav videos are dadsfr...
I've played/designed a lot of induction puzzles, and I think that the thing Chollet dismissively calls "memorization" might actually be all the human brain is doing when we develop the capacity to solve them. If so, there's some possibility that the first real-world transformative AGI will be ineligible for the prize.
Debate safety essentially is a wisdom-augmenting approach: each AI attempts to arm the human with the wisdom to assess the arguments (or mechanisms) of the other.
I'd love to see an entry that discusses safety through debate in a public-facing way. It's an interesting approach that may demonstrate to people outside the field that making progress here is tractable. Assessing debates between experts is also a pretty important skill for dealing with the geopolitics of safety, so an opportunity to talk about debate in the context of AI would be valuable.
I...
humanity's current situation could ever be concerned with this is a dream of Ivory Tower fools
It might be true that it's impractical for most people living today to pay much attention to the AI situation. Most of us should just remain focused on the work we can do on these sorts of civic, social, and economic reforms. But if I'd depicted a future where these reforms of ours end up being a particularly important part of history, that would not have been honest.
Situationist theory: the meat eater grinds to shine for the same reason gentry with servants do. A kind of latent guilt, being reminded every day that so much has been sacrificed for them; a noblesse oblige; a visceral pressure to produce feats that vindicate the decadence of their station. (Having dedicated tutors may do a bit of this as well.)
A theory like this would explain why it doesn't seem to be a result of missing nutrients, contending that it's psychosocial.
[Just having a quick look at George Church.] It says there that he's "off and on vegan", which suggests to me that he was having difficulty getting it to work. But I checked his Twitter, and he said he was vegan as of 2018. He studies healthspan, so his voice counts. His page on his personal site unfortunately doesn't discuss his approach to dieting or supplements, but maybe he'd link something from someone else if asked.
Probably not, because it's not really important for the two systems to be integrated. You can (or should be able to) link/embed a Manifold market from a community note. And if the community notes process doesn't already respect or investigate prediction markets closely enough, adding a feature to Twitter wouldn't accelerate that by much?
Usually it's beneficial for different systems to have a single shared account system so that there isn't a barrier in the way of people interacting with the other system, but Manifold is not direly in need of a Twitter-sized u...
Today, somewhat, but that's just because human brains can't prove the state of their beliefs or share specifications with each other (i.e., humans can lie about anything). There is no reason for artificial brains to have these limitations, and given any trend towards communal/social factors in intelligence, or towards self-reflection (which is required for recursive self-improvement), it's actively costly to be cognitively opaque.
I wonder to what extent MIRI's Functional Decision Theory's categorical imperative relates to this. In FDT, there is no such thing as an independent agent; it's essentially an acknowledgement that we can't escape the bonds, the entrainment/entanglement, the synchronies, created by the universality of the mathematics of decisionmaking.
To practice FDT, you have to be aware that your decisions will be mirrored by others; e.g., you don't defect against other FDT agents in prisoner's dilemmas, because you're aware that you'll both tend to make the same decision, ...
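To make the mirroring concrete, here's a toy sketch (my own illustration, not MIRI's formalism) of a one-shot prisoner's dilemma against a copy of yourself: causal best-response defects whatever the other player does, but reasoning over what your shared decision procedure outputs, knowing the copy outputs the same thing, favors cooperation.

```python
# Payoff to the row player in a standard one-shot prisoner's dilemma:
# (my_move, their_move) -> utility.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I'm exploited
    ("D", "C"): 5,  # I exploit
    ("D", "D"): 1,  # mutual defection
}

def cdt_choice(their_move: str) -> str:
    """Causal reasoning: hold the opponent's move fixed and best-respond."""
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

def fdt_choice() -> str:
    """Functional reasoning against a copy: whatever my procedure outputs,
    theirs outputs too, so compare (C, C) against (D, D)."""
    return max("CD", key=lambda my: PAYOFF[(my, my)])

assert cdt_choice("C") == "D"  # defection dominates move-by-move...
assert fdt_choice() == "C"     # ...but mirrored agents do better cooperating
```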
We shouldn't actually do this, because Mastodon is not good software and will probably be obsolete soon, but if that were not the case:
It would be a strategic win for EA to conspicuously fund the development of a community notes feature for Mastodon.
Here's what I think would happen: most Mastodon communities would shit on it and refuse to use it because it had EA funding, but not vehemently enough to remove the feature from their forks, so this would just result in them looking incredibly wrong and bad and guilty every time anyone saw a successful community...
I can certainly wait, as I still don't eat pork for nutritional reasons (fat composition). I guess it should be you who makes contact, I'd be a lot less rigorous. If you need locals, I could connect you with people in the community. I don't know anyone who's been involved in pig welfare, but I know some people who've done chicken stuff (meat chicken welfare in NZ is still bad, but egg chicken welfare is mostly fine.)
At this point I'm expecting we're going to find that yes, humane farms would benefit from aggregating, but still, very large contiguous parcel...
Do you believe such farms exist? Do you have any evidence they exist?
I do know of one non-atrocity pig farm franchise, Freedom Farms, that runs at least 5000 pigs' worth of farms (IIRC they're the main pork brand at most supermarkets in NZ). I'm having difficulty finding specifics about where the farms are and whether any individual Freedom Farms operation is huge. But they'd be good people to ask about this. Shall I?
Slow-growing chicken operations exist, why wouldn't they aggregate into huge farms for economies of scale for the same reasons any industry does that?
(Well I declare that the message is very short.
What would 48 bits of entropy, in grammatically and semantically correct text, look like? Edit: I guess, if I could assume I could think of 4 synonyms for every word in the paragraph, the paragraph would only have to be a bit over 24 words long for me to be able to find something. Fortunately, it's only 11 words long.)
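The arithmetic, spelled out (assuming roughly 4 interchangeable synonyms per word, chosen independently, so log2(4) = 2 bits of capacity per word):

```python
# Back-of-envelope steganographic capacity of a short message.
from math import ceil, log2

TARGET_BITS = 48
SYNONYMS_PER_WORD = 4                      # assumed, not measured
bits_per_word = log2(SYNONYMS_PER_WORD)    # 2.0 bits of choice per word

words_needed = ceil(TARGET_BITS / bits_per_word)  # 24 words to hide 48 bits
message_words = 11
capacity_bits = message_words * bits_per_word     # 22.0 bits

# 24 words needed vs. 22 bits available: an 11-word message can't carry 48 bits.
print(words_needed, capacity_bits)
```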
But would he describe the paper that way to his brother, who he knows is left-center? He'd likely want to tell Max that it isn't an extreme paper, and if he were a right-winger, he'd likely believe it.
It's also possible that Max wasn't cognisant that his brother had published in that paper, and so they may not have thought to talk about it; from what I can tell, Per has worked for a lot of more prominent publications than that.
Good to know what the typical spread is like.
These are some of the incidents that article cites as being representative of Nya Dagbladet's problems; are they as described?
On its website, Nya Dagbladet publishes right-wing extremist content such as the racist myth of an ongoing “population replacement”, Holocaust revisionism, claims that Muslims are attempting to conquer Europe, and conspiracy theories related to the covid-19 pandemic.
...For several years, Nya Dagbladet has also had a pro-Russian orientation. In September, the platform published an article bas
I'm sympathetic. My background in technology and futurism has persistently drawn my attention away from things like this, so I might also be a bit clueless, but that might shed light on why we haven't discussed this much yet. I think we'd be very open to hosting those discussions and the associated communities.
I'd be super interested to see a historian or anthropologist attempt to estimate the moral weight of the preservation of cultural knowledge or artifacts, and weigh it against other work.
As a starting point... how many people should one ...
It is a joke, but it's an appropriate one.
EA has a pathology of insisting that we defer to data even in situations where sufficient quantities of data can't be practically collected before a decision is necessary.
And that is extremely relevant to EA's media problem.
Say it takes 100 datapoints over 10 years to make an informed decision. During that time:
(This is partially echoing/paraphrasing lukeprog.) I want to emphasize the anthropic measure/phenomenology angle (never mind, this can be put much more straightforwardly: the observer count angle), which to me seems like the simplest way neuron count would lead to increased moral valence. You kind of mention it, and it's discussed more in the full document, but for most of the post it's ignored.
Imagine a room where a pair of robots are being interviewed. The robot interviewer is about to leave and go home for the day; they're going to have to decide whether to leave the lig...
Since we're already in existential danger due to AI risk, it's not obvious that we shouldn't read a message that has only a 10% chance of being unfriendly; a friendly message could pretty reliably save us from other risks. Additionally, I can make an argument for friendly messages potentially being quite common:
If we could pre-commit now to never doing a SETI attack ourselves, or if we could commit to only sending friendly messages, then we'd know that many other civs, having at some point stood in the same place as us, will have also made the same commitm...
I'm excited by the prospect of Polis, but it's frustratingly limited. The system has no notion of whether people are agreeing with a statement because it's convincing or bridging the gap, or because it's banal.
In this case... I don't think we're really undergoing any factionalization about this? If so, should we not just try talking more... that usually works pretty well with us.
I guess prediction markets will help.
Prediction markets about the judgements of readers are another thing I keep thinking about: systems where people can make themselves accountable to Courts of Opinion by betting on their prospective judgements. The courts occasionally grab a comment, investigate it more deeply than usual, and enact punishment or reward depending on their findings.
I've raised these sorts of concepts with Lightcone as a way of improving the vote sorting (where we'd sort according to a prediction market's expectation of the eventual ratio between positive and negative reports from readers). They say they've thought about it.
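As a sketch of what that sorting rule could look like (hypothetical data model; market_p_positive stands in for whatever the market's current estimate of the eventual positive-report share would be):

```python
# Minimal sketch of sorting comments by a market-implied reception estimate.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    market_p_positive: float  # market's estimate of the eventual positive-report share, in [0, 1]

def sort_by_expected_reception(comments: list[Comment]) -> list[Comment]:
    """Order comments by the market-implied positive-report share, best first."""
    return sorted(comments, key=lambda c: c.market_p_positive, reverse=True)

thread = [
    Comment("hot take", 0.35),
    Comment("careful analysis", 0.80),
    Comment("nitpick", 0.55),
]
for c in sort_by_expected_reception(thread):
    print(f"{c.market_p_positive:.2f}  {c.text}")
```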
Although I cheer for this,
What makes EA, EA, what makes EA antifragile, is its ruthless transparency
- and although I really want to move to a world where radical transparency wins, I don't believe that we're in a world like that right now (I wish I could explain why I think that without immediately being punished for excess transparency, but for obvious reasons that seems impossible).
How do we get to that world? Or, if you see this world in a better light than I do, if you believe that the world is already mostly managing to avoid punishing important tr...
Yeah, I feel for the first-time founders, who idealistically wish that this part of the problem didn't so much exist. It oughtn't, afaict.