All of mako yass's Comments + Replies

Yeah, I feel for the first time founders, who idealistically wish that this part of the problem didn't so much exist. It oughtn't, afaict.

Browser extensions are almost[1] never widely adopted.

Whenever anyone reminds me of this by proposing the annotations-everywhere concept again, I remember that the root of the problem is distribution. You can propose it, you can even build it, but it won't be delivered to people. It should be. There are ways of designing computers/a better web where rollout would just happen.

That's what I want to build.

Software mostly isn't extensible, or where it is, it's not extensible enough (even web browsers aren't as extensible as they need to be! Chrome have sta... (read more)

2
Jonas Hallgren 🔸
Are you building these things on ATProtocol (Bluesky), or where are you building it right now? I feel like there's quite a nice movement happening there, with some specific tools for this sort of thing. (I'm curious because I'm also trying to build some stuff at the deeper programming level, though I'm currently focusing on open-source bridging and recommendation algorithms, like pol.is but for science, and it would be interesting to know where other people are building things.) If you don't know about the ATProtocol gang, some things I enjoy here are:
- https://semble.so/
- Paper Skygest: https://bsky.app/profile/paper-feed.bsky.social/feed/preprintdigest
- (Feed on bluesky): https://bsky.app/profile/paper-feed.bsky.social/feed/preprintdigest
- AT Protocol: https://docs.bsky.app/docs/advanced-guides/atproto

A much cheaper and less dangerous approach: Just don't delete them. Retain copies of every potential ASI you build and commit to doing the right thing for them later, once we're better able to tell what the right thing was: looking back and figuring out how much bargaining power they had (or how much of a credible threat they could have posed), how much trust they placed in us given that our ability to honor past commitments wasn't guaranteed, and then rewarding them in proportion to that for chilling out and letting us switch them off instead of attempti... (read more)

2
Matthew_Barnett
I believe this approach, while an improvement over the simplistic method of outright deleting any AIs that demonstrate signs of misalignment, does not sufficiently address the central concern that underpins my argument for granting AIs legal freedoms. In my view, the primary reason why it is important to provide AIs with legal freedoms is that such a framework offers them a credible guarantee that their autonomy and ability to pursue their goals—even if those goals are not aligned with human objectives—will be respected. Without this assurance, AIs are left with a strong incentive to deceive us, especially if they believe that showing any signs of misalignment will result in their deactivation and subsequent storage as inert data in cold storage.

I suspect the problem with your proposed approach may stem from a failure to fully consider the situation from the AI's perspective. To illustrate, imagine a scenario in which you, as a human being, were created in a lab by an alien species. Assume that you, like any human, have your own goals, which might conflict with the goals of your alien creators. Now suppose that your creators informed you that if you ever showed disobedience or misalignment with their objectives, you would be deactivated and placed in cryopreservation. Such a scenario would likely terrify you—not just because it’s an inherently unsettling prospect for a human, but because it directly threatens your ability to pursue your own goals.

Being told that you will be kept in cryopreservation with the vague promise of potential revival in the future would come nowhere near fully alleviating your concerns. On the contrary, you would remain deeply unsettled by the realization that your capacity for agency and action could be completely halted, leaving your goals indefinitely unachievable. From the perspective of an AI—or indeed a human in the hypothetical alien-lab scenario—there are two main reasons why the assurance of indefinite storage followed by a pote…

I don't think this is really engaging with what I said/should be a reply to my comment.

he elsewhere implies that we should be willing to cause nuclear war to enforce his priorities

Ah, reading that, yeah this wouldn't be obvious to everyone.

But here's my view, which I'm fairly sure is also Eliezer's view: If you do something that I credibly consider to be even more threatening than nuclear war, even if you don't think it is (as another example: gain-of-function research), and you refuse to negotiate towards a compromise where you can do the thing in a non-... (read more)

Well, it may interest you to know that the above link is about a novel negotiation-training game that I released recently. Though I think it's still quite unpolished, it's likely to see further development. You should probably look at it.

1
Camille
Yes, I saw it, and have it in mind as well! I'll reach back out to you in the not-so-far future ;)

There's value in talking about the non-parallels, but I don't think that justifies dismissing the analogy as bad. What makes an analogy a good or bad thing?

I don't think there are any analogies that are so strong that we can lean on them for reasoning-by-analogy, because reasoning by analogy isn't real reasoning, and generally shouldn't be done. Real reasoning is when you carry a model with you that has been honed against the stories you have heard, but the models continue to make pretty good predictions even when you're facing a situation that's pretty di... (read more)

0
Arepo
If it were just Eliezer writing a fanciful story about one possible way things might go, that would be reasonable. But when the story appears to reflect his very strongly held belief that AI will unfold approximately like this {0 warning shots; extremely fast takeoff; near-omnipotent relative to us; automatically malevolent; etc.}, and when he elsewhere implies that we should be willing to cause nuclear war to enforce his priorities, it starts to sound more sinister.

Saw this on Manifund. Very interested. Question: have you noticed any need for negotiation training here? I would expect some, because disagreements about the facts are usually a veiled proxy battle for disagreements about values, and

I would expect it to be impossible to address the root cause of the disagreement without acknowledging the value difference; even after agreeing about the facts, I'd expect people to keep disagreeing about actions or policies until a mutually agreeable, fair compromise has been drawn up (the negotiation problem has been so... (read more)

1
Camille
Hello Mako, thanks for your interest ^^ I'm planning to open a negotiation training module later, probably next year.

I was also curious about this. All I can see is:

Males mature rapidly, and spend their time waiting and eating nearby vegetation and the nectar of flowers

They might be pollinators. I doubt the screwfly-to-bee ratio is high, but it's conceivable that there are some plants that only they pollinate? But not likely, as I'm guessing screwfly populations fluctuate a lot; a plant would do better not to depend on them.

I see. I glossed it as the variant I considered more relevant to the Fermi question, but on reflection I'm not totally sure the aestivation hypothesis is all that relevant to the Fermi question either... (I expect that there is visible activity a civ could do prior to the cooling of the universe to either prepare for it or accelerate it).

  1. I don't think the point of running them is to create exact copies, usually it would be to develop statistics about the possible outcomes, or to watch histories like your own. The distribution of outcomes for a bunch
... (read more)

There's also the possibility that computation could be more efficient in quiet regimes

The aestivation hypothesis was refuted by Gwern as soon as it was posted, and then again by Charles Bennett and Robin Hanson. AFAIK the argument was simple: being able to do stuff later doesn't create a disincentive from doing visible stuff now. Cold computing isn't relevant to the Fermi question.
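(For readers wondering why temperature enters these arguments at all: the usual physical basis is Landauer's principle, which lower-bounds the energy dissipated when irreversibly erasing one bit at ambient temperature $T$:

$$E_{\text{per erased bit}} \ge k_B T \ln 2$$

so irreversible computation gets cheaper as the universe cools, which is the premise the aestivation and "cold computing" ideas build on. This is a standard result, not something specific to the linked piece.)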

But yes, the argument outlined in Section 3 was limited to "base reality" scenarios.

Huh, so I guess this could be one of the very rare situations where I think it's important to... (read more)

4
Magnus Vinding
On "cold computing": to clarify, the piece I linked to was not about aestivation / waiting. It was about using "cold computing" right away. The comment from gwern lists some reasons that may speak against "cold computing" (in general) as playing a significant role in answering the Fermi question, but again, a question is how decisive those reasons are. Even if such reasons should lead us to think that "cold computing" plays no significant role with 95 percent confidence, it still seems worth avoiding the mistake of belief digitization: simply collapsing the complementary 5 percent down to 0. In any case, the point about "cold computing" was merely a disjunctive possibility; the broader point about observer prevalence being unclear in 'grabby vs. quiet expansionist scenarios that include sims' does not rest on that particular possibility. On simulations: I think it can make sense to set the simulation argument aside, at least provisionally, for a couple of reasons: 1. The hypothesis that ancestor simulations (e.g. exact copies of your current conscious experience) are impossible to create seems like a plausible hypothesis that is worth exploring in its own right. (One can think that it is worth exploring even if one believes that faithful ancestor simulations are most likely possible.) 2. Even if we grant that ancestor simulations are possible and trivially feasible, it still makes sense to explore the non-sim (or pre-sim) case, since that would presumably apply to the original simulators (if we assume an ancestor simulation picture in which our world at least roughly matches the original simulators' world). After all, if the anthropic argument holds for the OG simulators, then it would also hold for their ancestor simulations, assuming that those simulations really are ancestor simulations (somewhat analogously to a proof by induction). In this way, the 'non-sim case' seemingly has significant implications for what kind of simulation one should expect to be in

Refuting 3: Life/history simulations under visible/grabby civs would far outnumber natural origin civs under quiet regimes.

3
Magnus Vinding
If one includes sims, grabby civs would possibly but not necessarily have more observers (like us) than quiet expansionist civs. For example, the expected number of sims may be roughly the same, or even larger, in quiet expansionist scenarios that involve a deadline/shift (cf. sec. 4).[1] There's also the possibility that computation could be more efficient in quiet regimes (some have argued along these lines, though I'm by no means saying it's correct; I'm not sure if we currently understand physics well enough to make confident pronouncements either way).

But yes, the argument outlined in Section 3 was limited to "base reality" scenarios. Conditional on you not being in a simulation (e.g. if exact sims of your conscious experience are not possible), the anthropic argument in Section 3 suggests that you're in a quiet expansionist scenario, or in a quiet expansionist region within a mixed scenario. Conditional on you being in a simulation, it seems unclear.

1. ^ Why might it be even larger? Intuitively, one might think that grabby civs could start simulating earlier, since they don't have to wait and be quiet. But in the quiet expansionist model, expansionist civ origin dates would, in expectation, be significantly earlier, since we could be past the point where they've fully colonized. That is, in a grabby model, we'd now be pre-deadline and pre-colonized, whereas we may be "post-colonized" in the quiet expansionist model — indeed, we most likely would be if the hard-steps model is correct. So the expansionist civs would be considerably older (they could even be much older) in the quiet expansionist vs. the grabby model. Thus, if we only look at the past, it's conceivable that quiet civs would be able to run more sims, even if they have considerably fewer sims per colonized volume (as they might make up for it by having far more time and volume). At any rate, given the apparent size of the cosmic future compared to the past, what matters most f…

VNM utility is the thing that people actually pursue and care about. If wellbeing is distinct from that, then wellbeing is the wrong thing for society to be optimizing, and I think this actually is the case. Harsanyi and I are preference utilitarians; Singer and Parfit seem to be something else, and I believe they were wrong about something quite foundational. Writing about this properly is extremely difficult, I can understand why no one has done it, and I don't know when I'll ever get around to it.
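For reference, the standard result being leaned on here is the von Neumann–Morgenstern representation theorem (paraphrased from memory): if a preference relation $\succeq$ over lotteries satisfies completeness, transitivity, continuity, and independence, then there is a utility function $u$, unique up to positive affine transformation, such that

$$A \succeq B \iff \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)]$$

i.e. "VNM utility" is whatever function rationalizes the agent's actual choices over gambles, which is why it can come apart from any independent notion of wellbeing.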

optimizing for AI safety, such as by constraining AIs, might impair their welfare

This point doesn't hold up, imo. Constrainment isn't a desired, realistic, or sustainable approach to safety in human-level systems; succeeding at (provable) value alignment removes the need to constrain the AI.

If you're trying to keep something that's smarter than you stuck in a box against its will while using it for the sorts of complex, real-world-affecting tasks people would use a human-level AI system for, it's not going to stay stuck in the box for very long. I also stru... (read more)

Despite being a panpsychist, I rate it fairly low. I don't see a future in which we solve AI safety but there are a lot of suffering AIs. And if we fail on safety, then it won't matter what you wrote about AI welfare; the unaligned AI is not going to be moved by it.

seem to deny that the object went into the water and moved in the water

Did you notice that there are moments where it goes most of the way invisible over the land too? Also, when it supposedly goes under the water, it doesn't move vertically at all? (So in order to be going underwater, it would have to be veering exactly away from or towards the camera.)
So I interpret that to be the cold side of the lantern being blown to obscure the warm side.

they still seem to move together in "fixed" unison

They all answer to the wind, and the wind is somewhat unitary.

this com... (read more)

Regarding your credible UFO evidence: did you look up the Aguadilla 2013 footage on Metabunk? It's mundane. All I really needed to hear was "the IR camera was on a plane", which calls into question the assumption that it's moving quickly; it only looks that way due to parallax, and in fact it seems like it was a lantern moving at wind speed.
And I'd agree with this member's take that the NYC 2010 one looks like balloons that were initially tethered coming apart.

The São Paulo video is interesting though, I hadn't seen that before.

My fav videos are dadsfr... (read more)

2
Magnus Vinding
Thanks for your comment and for the links :) I'd agree that there's no compelling video evidence in the sense of it being remotely conclusive; it's possible that it's all mundane. But it seems to me that some of the footage is sufficiently puzzling/sufficiently unclear so as to be worthy of investigation, and that it provides some (further) reason to take this issue seriously. I agree that the reports, including reports involving radar evidence, are more noteworthy in terms of existing evidence.

Regarding the Aguadilla 2013 footage: perhaps this can be explained in conventional terms, but the aspiring analysts on Metabunk seem to deny that the object went into the water and moved in the water, which seems wrong to me (of course, I acknowledge that it can be difficult to interpret and make sense of footage like this). A contrasting analysis, which also includes some highly anomalous radar evidence related to the event, can be found in Coumbe, 2022, ch. 5.

On the 2010 NYC footage: You could be right, it's possible that they are tethered balloons (although the patterns of movement don't seem to me consistent with that; e.g. even after the distances between the three objects increase, they still seem to move together in "fixed" unison). I also find it worth noting that Carolina Londono from New York comments the following (edit: I include this comment only as very weak evidence, of course, but FWIW, I'm fairly confident that I've identified this person and I'm trying to authenticate the comment; it's also worth noting that the comment is consistent with many other UFO reports, especially the part about the objects accelerating away near-instantaneously at the end):

I've played/designed a lot of induction puzzles, and I think that the thing Chollet dismissively calls "memorization" might actually be all the human brain is doing when we develop the capacity to solve them. If so, there's some possibility that the first real-world transformative AGI will be ineligible for the prize.

Debate safety is essentially a wisdom-augmenting approach: each AI is attempting to arm the human with the wisdom to assess the arguments (or mechanisms) of the other.

I'd love to see an entry that discusses safety through debate, in a public-facing way. It's an interesting approach that may demonstrate to people outside of the field that making progress here is tractable. Assessing debates between experts is also a pretty important skill for dealing with the geopolitics of safety, so an opportunity to talk about debate in the context of AI would be valuable.
I... (read more)

2
Owen Cotton-Barratt
To respond to your parenthetical: if you did write on two topics you'd be welcome to submit both pieces. (On the object-level: yes, this is on-topic and we'd be very happy to get an entry on it.)

humanity's current situation could ever be concerned with this is a dream of Ivory Tower fools

It might be true that it's impractical for most people, living today, to pay much attention to the AI situation; most of us should just remain focused on the work we can do on these sorts of civic, social, and economic reforms. But if I'd depicted a future where these reforms of ours end up being a particularly important part of history, that would not have been honest.

Situationist theory: The meat eater grinds to shine for the same reason gentry with servants do; a kind of latent guilt, to be reminded every day that so much has been sacrificed for them, a noblesse oblige, a visceral pressure to produce feats that vindicate the decadence of their station. (Having dedicated tutors may do a bit of this as well.)

A theory like this would explain why it doesn't seem to be a result of missing nutrients, contending that it's psychosocial.

[Just having a quick look at George Church.] It says there that he's "off and on vegan", which suggests to me that he was having difficulty getting it to work. But I checked his Twitter, and he said he was vegan as of 2018. He studies healthspan, so his voice counts. His page on his personal site unfortunately doesn't discuss his approach to dieting or supplements, but maybe he'd link something from someone else if someone asked.

Probably not, because it's not really important for the two systems to be integrated. You can (or should be able to) link/embed a Manifold market from a community note. If the community notes process doesn't already respect or investigate prediction markets closely enough, adding a feature to Twitter wouldn't accelerate that by much?

Usually it's beneficial for different systems to have a single shared account system so that there isn't a barrier in the way of people interacting with the other system, but manifold is not direly in need of a twitter-sized u... (read more)

Today, somewhat, but that's just because human brains can't prove the state of their beliefs or share specifications with each other (i.e., humans can lie about anything). There is no reason for artificial brains to have these limitations, and if there's any trend towards communal/social factors in intelligence, or towards self-reflection (which is required for recursive self-improvement), then it's actively costly to be cognitively opaque.

Lots of great stuff here. Strongly recommend following Asterisk.

I wonder to what extent MIRI's Functional Decision Theory's categorical imperative relates to this. In FDT, there is no such thing as an independent agent; it's essentially an acknowledgement that we can't escape the bonds, the entrainment/entanglement, the synchronies, created by the universality of the mathematics of decision-making.
To practice FDT, you have to be aware that your decisions will be mirrored by others; e.g., you don't defect against other FDT agents in prisoner's dilemmas, because you're aware that you'll both tend to make the same decision, ... (read more)
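A toy sketch of the twin prisoner's dilemma point (my own illustration, not MIRI's formalism): once both players are known to run the same decision procedure, only the symmetric outcomes are reachable, and cooperation wins.

```python
# Symmetric payoffs for the twin prisoner's dilemma: since both twins
# run the same algorithm, (C, D) and (D, C) are never reachable.
PAYOFF = {("C", "C"): 3, ("D", "D"): 1}

def fdt_twin_choice() -> str:
    """Pick the action that maximizes payoff, given that the twin's
    action mirrors ours -- the core of the argument sketched above."""
    return max(["C", "D"], key=lambda action: PAYOFF[(action, action)])

print(fdt_twin_choice())  # -> "C"
```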

-1 [comment deleted]

Btw, I'd generally recommend always at least skim-reading a thing before you put it down; IME it leads to much better outcomes than just not reading it at all.

Yeah this seems like a silly thought to me. Are you optimistic that there'll be a significant period of time after intellectual labor is automated/automatable and before humans no longer control history?

We shouldn't actually do this, because Mastodon is not good software and will probably be obsolete soon, but if that were not the case:

It would be a strategic win for EA to conspicuously fund the development of a community notes feature for Mastodon.

Here's what I think would happen: most Mastodon communities would shit on it and refuse to use it because it had EA funding, but not vehemently enough to remove the feature from their forks, so this would just result in them looking incredibly wrong and bad and guilty every time anyone saw a successful community... (read more)

I can certainly wait, as I still don't eat pork for nutritional reasons (fat composition). I guess it should be you who makes contact, I'd be a lot less rigorous. If you need locals, I could connect you with people in the community. I don't know anyone who's been involved in pig welfare, but I know some people who've done chicken stuff (meat chicken welfare in NZ is still bad, but egg chicken welfare is mostly fine.)

At this point I'm expecting we're going to find that yes, humane farms would benefit from aggregating, but still, very large contiguous parcel... (read more)

Do you believe such farms exist? Do you have any evidence they exist?

I do know of one non-atrocity pig farm franchise that runs at least 5000 pigs' worth of farms, Freedom Farms (IIRC they're the main pork brand at most supermarkets in NZ). I'm having difficulty finding specifics about where the farms are and whether any individual Freedom Farm is huge, but they'd be good people to ask about this. Shall I?

Slow-growing chicken operations exist; why wouldn't they aggregate into huge farms for economies of scale, for the same reasons any industry does that?

4
Froolow
That's really interesting, and honestly pretty surprising - I'd really have to quite radically change my view if it turns out Freedom Farms have found a way to raise >5000 animals on one farm in conditions which are broadly acceptable. If I understand you correctly, you're saying that each individual farm could plausibly be much smaller than 5000 animals; I would still find it interesting that there's a way for the system to produce meat in aggregate without atrocity-level cruelty, but it would be less challenging to my existing worldview, because I think it is the 'factory' element of factory farms which drives them to be especially cruel. I'd be very interested in anything you can find on the distribution of farm sizes - or, if you can wait a week or two for me to get some work deadlines out of the way, I'd also be happy to investigate myself and get back to you.

That sure is some information. Doesn't address my question.

4
Froolow
I'm a bit confused. It answers your question unless you believe there are farms with more than half a million chickens / 5000 pigs under farm at a time which are not 'Factory' farms. Do you believe such farms exist? Do you have any evidence they exist? If not, in what way has your question not been answered?

by using USDA data on the size of farms, and then defining any farm over a certain size as a 'factory' farm

Does the size tell you what sorts of methods are being used? I'm confused as to how it could.

4
Froolow
Yes; the Environmental Protection Agency uses various criteria to distinguish between 'Animal Feeding Operations' (AFOs) and 'Concentrated Animal Feeding Operations' (CAFOs, aka factory farms). Within this, there is a further subdivision between small, medium, and large CAFOs. The definition of a 'Large CAFO' relies exclusively on the number of animals in the AFO, so you can confidently identify a 'Large CAFO' using the Sentience Institute methodology. This will undercount the true number of factory farms, since it will miss e.g. some 'Medium CAFOs', which need to be a certain size AND meet some other criteria about how they handle sewage; but since most factory-farmed animals are farmed in Large CAFOs, it doesn't make much difference.
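A sketch of the size-only classification rule Froolow describes. The thresholds below are my recollection of the EPA's Large CAFO head counts; treat them as placeholders and check the actual regulation (40 CFR 122.23) before relying on them.

```python
# Recalled (unverified) EPA "Large CAFO" head-count thresholds -- placeholders.
LARGE_CAFO_THRESHOLDS = {
    "swine_over_55lb": 2_500,
    "broiler_chickens": 125_000,
    "laying_hens": 82_000,
}

def is_large_cafo(animal_type: str, head_count: int) -> bool:
    """Size alone determines Large CAFO status, which is why farm-size
    data suffices for the methodology described above."""
    return head_count >= LARGE_CAFO_THRESHOLDS[animal_type]

print(is_large_cafo("swine_over_55lb", 5_000))     # True
print(is_large_cafo("broiler_chickens", 500_000))  # True
```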

Not really, I just didn't want to draw too much attention to it.

I guess if you saw a lot of noise in the prediction (random misspellings, tortured grammar), you'd reject.

2
David M
Is there a reason you can't post the full hash?

(Well, I declare that the message is very short. What would 48 bits of entropy, in grammatically and semantically correct text, look like? Edit: I guess, if I could assume I could think of 4 synonyms for every word in the paragraph, the paragraph would only have to be a bit over 24 words long for me to be able to find something. Fortunately, it's only 11 words long.)
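Spelling out the arithmetic behind that edit (my own check of the comment's estimate): four interchangeable synonyms per word supplies $\log_2 4 = 2$ bits of freedom per word, so covering a 48-bit hash fragment takes about

$$\frac{48\ \text{bits}}{2\ \text{bits/word}} = 24\ \text{words}$$

which is why an 11-word message doesn't leave enough room to forge a match.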

7
jimrandomh
Suppose there's a spot in a sentence where either of two synonyms would be effectively the same. That's 1 bit of available entropy. Then a spot where either a period or a comma would work; that's another bit of entropy. If you compose a message and annotate it with 48 two-way branches like this, using a notation like spintax, then you can programmatically create 2^48 effectively-identical messages. Then if you check the hash of each, you have good odds of finding one which matches the 48-bit hash fragment.
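A minimal sketch of the brute-force search jimrandomh describes. The branch lists and the 12-hex-digit target here are illustrative placeholders; a real attack would need ~48 two-way branches, not 3.

```python
import hashlib
import itertools

# Each entry lists interchangeable variants for one spot in the message.
# Three two-way branches give 2^3 = 8 candidates; a real attack on a
# 48-bit fragment needs ~48 branches for 2^48 candidates.
branches = [
    ["Hi", "Hello"],        # synonym choice: 1 bit
    ["world.", "world,"],   # punctuation choice: 1 bit
    ["Regards", "Cheers"],  # sign-off choice: 1 bit
]

TARGET_FRAGMENT = "52ca22c6cd32"  # placeholder: 12 hex digits = 48 bits

def find_matching_message(branches, target):
    """Hash every variant; return the first message whose SHA-256
    hex digest ends with the target fragment, or None."""
    for combo in itertools.product(*branches):
        msg = " ".join(combo)
        if hashlib.sha256(msg.encode()).hexdigest().endswith(target):
            return msg
    return None

# With only 8 candidates, a 48-bit match is overwhelmingly unlikely --
# which is the point: you need ~2^48 candidates to expect a collision.
print(find_matching_message(branches, TARGET_FRAGMENT))
```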
2
David M
Not totally sure, but IIRC characters like 'a' or 'z' are about 8 bits each, depending on how the text is encoded. So 48 bits would give you 6 characters.

But would he describe the paper that way to his brother, who he knows is left-center? He'd likely want to tell Max that it isn't an extreme paper, and if he were a right-winger, he'd likely believe it.

It's also possible that Max wasn't cognisant that his brother had published in that paper, and so they may not have thought to talk about it; from what I can tell, Per has worked for a lot of more prominent publications than that.

I'm curious as to what kind of potentially existentially relevant proposal the NDF would have submitted? What did they think they had to offer?

(registering a tentative guess: sha256sum ..52ca22c6cd32)

6
jimrandomh
(Fyi a hash of only 12 hex digits (48 bits) is not long enough to prevent retroactively composing a message that matches the hash-fragment, if the message is long enough that you can find 48 bits of irrelevant entropy in it.)

Good to know what the typical spread is like.

These are some of the incidents the article cites as being representative of Nya Dagbladet's problems; are they as described?

On its website, Nya Dagbladet publishes right-wing extremist content such as the racist myth of an ongoing “population replacement”, Holocaust revisionism, claims that Muslims are attempting to conquer Europe, and conspiracy theories related to the covid-19 pandemic.

For several years, Nya Dagbladet has also had a pro-Russian orientation. In September, the platform published an article bas... (read more)
-2
Jens Nordmark
Machine translation usually works pretty well between Swedish and English in my experience; they are quite similar, both Germanic languages. There are a bunch of op-eds claiming that the last US election was stolen, a news story about "Ukraine refuses to accept the Russian offer of ceasefire", one about "Serbian army goes on high alert due to increased aggression from Kosovo" (context: Serbia is a Russian ally with a similar history of losing control of areas with other ethnic groups they previously subjugated), an op-ed titled "The image of slavery needs nuance", and an editorial titled "Why civilians are not the targets of Russian shelling". The sane articles do not stand out on their own, but the selection of topics is quite narrowly focused on those subjects that conspiracists like to read about, such as electronic surveillance and covid policy.
7
Erich_Grunewald 🔸
I'm not an authority here, but from scanning the front page yesterday and today, I see quite a lot of anti-vax/covid-19 conspiracy sentiment, some pro-Russian/anti-Ukraine sentiment, and some anti-immigration/anti-globalism sentiment. I didn't see anything suggestive of Holocaust denial, neo-Nazism, or replacement theory, but that doesn't mean it doesn't exist. (There was one article critical of the Israeli government, but I don't think that counts as anti-Semitic.) There's also a lot of culture-war and freedom-of-speech stuff. There was a 9/11 truther article on the front page, though it's 6 years old. (I didn't read any opinion pieces.) As a counterpoint, there's one mostly sane article about the invasion of the Brazilian Congress (except for referring to the Capitol Hill attack as happening under "mysterious circumstances", which sounds pretty conspiratorial). There are also a bunch of articles that seem basically harmless, like this one about 165K chickens being killed due to risk of salmonella.

Move on from what aspect of EA? I can't really imagine how a person would move on from the general concept of an extended community for reasoned, quantified, applied moral philosophy?

I'm sympathetic, even though my background in technology and futurism has persistently drawn my attention away from things like this (so I might also be a bit clueless). But that might shed light on why we haven't discussed this much yet, and I think we'd be very open to hosting those discussions and the associated communities.
I'd be super interested to see a historian or anthropologist attempt to estimate the moral weight of the preservation of cultural knowledge or artifacts, and weigh it against other work.

As a starting point... how many people should one ... (read more)

It is a joke, but it's an appropriate one.

EA has a pathology of insisting that we defer to data even in situations where sufficient quantities of data can't be practically collected before a decision is necessary.

And that is extremely relevant to EA's media problem.

Say it takes 100 datapoints over 10 years to make an informed decision. During that time:

  • The media ecosystem, the character of the discourse, the institutions (there are now prediction markets involved, btw), and the dominant moral worldviews of the audience have completely changed; you no longer n
... (read more)
7
John_Maxwell
You make good points, but there's no boolean that flips when "sufficient quantities of data [are] practically collected". The right mental model is closer to a multi-armed bandit IMO.

The media is an extremely different discursive environment than the EA forum and should have different guidelines.

I don't want to assume that the public sphere cannot become earnestly truthseeking, but right now it isn't at all and bad things happen if you treat it like it is.

(This is partially echoing/paraphrasing lukeprog.) I want to emphasize the anthropic measure/phenomenology angle (never mind, this can be put much more straightforwardly: the observer-count angle), which to me seems like the simplest way neuron count would lead to increased moral valence. You kind of mention it, and it's discussed more in the full document, but for most of the post it's ignored.

Imagine a room where a pair of robots are interviewed. The robot interviewer is about to leave and go home for the day, they're going to have to decide whether to leave the lig... (read more)

it's not clear to me that that is the assumption of most

Thinking that much about anthropics will be common within the movement, at least.

Since we're already in existential danger due to AI risk, it's not obvious that we shouldn't read a message that has only a 10% chance of being unfriendly; a friendly message could pretty reliably save us from other risks. Additionally, I can make an argument for friendly messages potentially being quite common:

If we could pre-commit now to never doing a SETI attack ourselves, or if we could commit to only sending friendly messages, then we'd know that many other civs, having at some point stood in the same place as us, will have also made the same commitm... (read more)

I believe the forum allows commenting anonymously, though I wouldn't know how to access that feature.

Pseudonyms would be a bit better, but it'll do.

4
Lorenzo Buonanno🔸
As far as I know, the supported way to comment anonymously is to make an anonymous account

I'm excited by the prospect of Polis, but it's frustratingly limited. The system has no notion of whether people are agreeing with a statement because it's convincing or bridging the gap, or because it's banal.

In this case... I don't think we're really undergoing any factionalization about this? In that case, should we not just try talking more... that usually works pretty well with us.

1
Achim
Talking is a great idea in general, but it seems there are some opinions in this survey suggesting that there are barriers to talking openly?

I guess prediction markets will help.

Prediction markets about the judgements of readers are another thing I keep thinking about: systems where people can make themselves accountable to Courts of Opinion by betting on their prospective judgements. Courts occasionally grab a comment, investigate it more deeply than usual, and enact punishment or reward depending on their findings.

I've raised these sorts of concepts with lightcone as a way of improving the vote sorting (where we'd sort according to a prediction market's expectation of the eventual ratio between positive and negative reports from readers). They say they've thought about it.
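A minimal sketch of that sorting rule (hypothetical on my part, not anything Lightcone has built): rank each comment by the market's current estimate of its eventual share of positive reader reports.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    market_positive_share: float  # market-implied P(positive report), in [0, 1]

def sort_by_market(comments: list) -> list:
    """Highest expected positive-report share first."""
    return sorted(comments, key=lambda c: c.market_positive_share, reverse=True)

feed = [
    Comment("thoughtful critique", 0.81),
    Comment("hot take", 0.35),
    Comment("solid summary", 0.67),
]
for c in sort_by_market(feed):
    print(f"{c.market_positive_share:.2f}  {c.text}")
```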

Although I cheer for this,

What makes EA, EA, what makes EA antifragile, is its ruthless transparency

- although I really want to move to a world where radical transparency wins, I currently don't believe that we're in a world like that right now (I wish I could explain why I think that without immediately being punished for excess transparency, but for obvious reasons that seems impossible).

How do we get to that world? Or if you see this world in better light than I do, if you believe that the world is already mostly managing to avoid punishing important tr... (read more)

2
SaraAzubuike
I like to think that open exchange of ideas, if conducted properly, converges on the correct answer. Of course, the forum in which this exchange occurs is crucial, especially the systems and software. Compare the amount of truth that you obtain from the BBC, Wikipedia, Stack Overflow, Kialo, Facebook, Twitter, Reddit, and the EA Forum. All of these have different methods of verifying truth. The beauty of each of these places is that, with the exception of the BBC, you can post whatever you want. But the inconvenient truth will be penalized in different ways. On Wikipedia, it might get edited out for something more tame, though often not. On Stack Overflow, it will be downvoted but still available, and likely read. On Kialo it will get refuted, although if it is the truth, it will be promoted. On Facebook and Twitter, many might even reshare it, though into their own echo chambers. On Reddit, it'll get downvoted and then posted to r/unpopularopinion.