MakoYass

Bio

Longtermist writer, principled interactive system designer. https://aboutmako.makopool.com

Consider browsing my LessWrong profile for interesting frontier (fringe) stuff: https://www.lesswrong.com/users/makoyass

Comments

Some non-EAs worry about EA's effect on mental health

There's a really good point there; I'll restate it: people act as if the difficult problems in front of them are the reason for their low moods. Having misidentified the source of the low mood, they try to fix it by pouring themselves into working on the issue, but this often just won't work.

I think this is resting on a common myth about human psychology. There actually doesn't need to be any relationship between the difficulty of the problems in front of us and our emotional affect or energy levels; it's a non sequitur. No matter what problem is in front of you, there's always something you can do, some next step to take (if you don't know what the next step is, then the next step is figuring out the next step!), and if you're walking forward as well as you can, you should be able to take satisfaction in that. If you can't, it's a health thing.

On the Vulnerable World Hypothesis

I've been thinking about transparent societies (democratic surveillance) for a while. I'm still concerned about effects on free thought: cultures living under radical transparency might develop a global preference-falsification monoculture, a situation where everyone in the open world lies about what kind of world they want due to a repressive false consensus, crushing innovation, healthy criticism of mainstream ideas, and so on. But that concern is decreasing as I go; I think it's going to turn out to be completely defeatable.

This will be approximate (I hope to do a full post about it eventually), but a way of summing up my current view is...

  • Radical transparency is already steadily happening because it is incredibly useful (this surprises me too): celebrity, Twitter, disclosure movements, open-source intelligence.
  • Weird people will always exist, you will always have to look at them, and no amount of social pressure will make them go away; some of them are critical specialists whom we need and love. Most of the thinkers, doers, and processes of dialog that I actually admire and respect are weird in a way that is resilient to the anti-weird, anti-free-thought effects we were worried about, and on most days I'm not really afraid of those effects at all.

People will start to exalt a new virtue of brazenness once they see that free thought is a hard dependency for original work. Everyone I know (including you) already sees that it is; even transparency's best critics are stridently admitting it. On the other side: the people who stop exploring when they're being watched will also, very visibly, stop being able to produce any original thoughts at all. Communities of othering and repression of small differences will quickly become so insane and ineffective that they will alienate everyone who ever believed in them; even their own members will start to notice (this is already happening under the radical transparency of Twitter, which, note, interestingly, was completely voluntary and mostly unremarked upon). And the people of brazenness will very visibly continue producing things, so I expect brazenness to become fashionable.
Transparency will harm experimental work momentarily, if at all, before the great gardener sees in this new light that the pitiful things they've been treading on all this time were young flowers, and learns to be more careful with rough and burgeoning things. Then western culture will adapt to transparency, and we will fear it no more.

But the largest obstacle is that the technologies for fair transparency still don't quite exist yet: consistent, reliable, convenient, and trustworthy recording systems, and methods for preventing harassment mobs (DDoS protection, better spam prevention). I've found that the solutions to these issues (hardware, protocols, distributed storage, webs of trust) are not very complicated, though, and I think they'll arrive without much deliberate effort.

The next largest obstacle is mass simultaneous adoption, which you rightly single out in the discussion of global democratic agreement. A transparent society is not interested in going halfway: building a panopticon, or building a transparent state that will simply crumble in the face of an opaque one. I'm not confident that a global order will manage to get over the hump.

I have some pretty big objections to some of the things you said on this, though. Mainly: the advantages of universal transparency for the majority of signatories are actually great:

  • Even just on the margin: note that celebrity is a kind of radical transparency, and note that the best practitioners tend to want to publish their work, because the esteem of releasing it outweighs whatever competitive advantage their company might have won by not releasing it.
  • It would allow their field to progress faster as a result of more sharing, and of course it means that they can progress more safely. You assert that you consider it unlikely that you'll live to see a catastrophe; I think that's uninformed. Longtermist arguments work even if the chance is small and far off, but the chance actually isn't small or far off: Ajeya Cotra found that biological anchors for intelligence set a conservative median estimate for the arrival of AGI around 2050, and Ajeya's personal median estimate is now 2040. Regardless, (afaict) most decisionmakers have kids and will care what happens to their grandkids.

It's still going to be difficult to get every state that could harbor strong AI work to sign up for the unprecedented levels of reporting and oversight required to limit proliferation. I'm not hopeful that those talks will work out. I'll become hopeful if we reach a point where the leaders in the field safely demonstrate the presence of danger beyond reasonable doubt (Demonstration of Cataclysmic Trajectory). At that point, it might be possible.

What should CEEALAR be called?

"Athena" fails the reversal test, but the reversal test isn't always applicable when the historical continuity is actually part of the thing's appeal! Some things work that way!

What should CEEALAR be called?

In New Zealand, "hospitality" is a field of craft encompassing tourism, hotels, and food services, so I'd feel like this name should be reserved for cause areas in that domain. Is that not the case in the rest of the world?

Open Thread #36

That was probably the most load-bearing thought in my web-of-trust-based social network project. The lack of specificity about what endorsements mean is the reason Twitter doesn't work (though it would if it allowed and encouraged having a lot more alts), and I believe that once you've distinguished the kinds of trust, you'll have a very different, much more useful kind of thing.
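As a minimal sketch of what "distinguishing the kinds of trust" might look like in practice (everything here, names and trust kinds included, is a hypothetical illustration, not a description of any existing system):

```python
from dataclasses import dataclass
from enum import Enum

class TrustKind(Enum):
    # Hypothetical trust kinds; the real taxonomy is an open design question.
    HONESTY = "honesty"        # "I believe this person reports truthfully"
    COMPETENCE = "competence"  # "I trust this person's domain judgment"
    MODERATION = "moderation"  # "I delegate spam/abuse filtering to them"

@dataclass(frozen=True)
class Endorsement:
    endorser: str
    endorsee: str
    kind: TrustKind

def trusted_set(endorsements: list[Endorsement], root: str, kind: TrustKind) -> set[str]:
    """Everyone reachable from `root` through endorsements of one specific
    kind, so that (say) moderation-trust never leaks into competence-trust."""
    trusted, frontier = {root}, [root]
    while frontier:
        node = frontier.pop()
        for e in endorsements:
            if e.endorser == node and e.kind == kind and e.endorsee not in trusted:
                trusted.add(e.endorsee)
                frontier.append(e.endorsee)
    return trusted

web = [
    Endorsement("you", "alice", TrustKind.COMPETENCE),
    Endorsement("alice", "bob", TrustKind.COMPETENCE),
    Endorsement("you", "bob", TrustKind.MODERATION),
]
print(trusted_set(web, "you", TrustKind.COMPETENCE))  # {'you', 'alice', 'bob'}
print(trusted_set(web, "you", TrustKind.HONESTY))     # {'you'}
```

The point of the typed edges is that each query only follows one kind of trust, so an undifferentiated "endorsement" can't be misread as vouching for everything at once.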

Impact markets may incentivize predictably net-negative projects

I think there's an argument for the thing you were saying, though. Something like: if one marketplace forbids most foundational AI public works, then another marketplace will pop up with a different negative-externality estimation process, and it won't go away; most charities and government funders still aren't EA and don't care about undiscounted expected utility, so there's a very real risk that that marketplace would become the largest one.

I guess there might not be many people who are charitably inclined, who could understand, believe in, and adopt impact markets, but who also don't believe in tail risks. There are lots of people who do one of those things, but I'm not sure there are any who do all three.

Impact markets may incentivize predictably net-negative projects

There might be a market for that sort of ultimately valueless token now (or there was several months ago? I haven't been following the NFT stuff), but I'm not sure there will be for long.

Impact markets may incentivize predictably net-negative projects

Crypto's inability to take debts or enact substantial punishments beyond slashing stakes is a huge limitation, and I would like it if we didn't have to swallow that (i.e., if we could just operate in the real world, with non-anonymous impact traders who can be held accountable for more assets than they'd be willing to lock in a contract).

Given enough of that, we could implement this by just having an impact cert that's implicated in a catastrophe turn into debt/punishment. That would make the disincentive a lot more proportional to the scale of the potential negative externalities, and it would let the market figure out how big that risk is for itself, which is pretty much the point of an impact market.
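A minimal sketch of that settlement rule, assuming non-anonymous cert holders and some external process that assesses harm after the fact (all names and numbers here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ImpactCert:
    project: str
    holder: str        # a real-world identity, so debts are enforceable
    face_value: float  # what the holder paid on the impact market

def settle(cert: ImpactCert, implicated: bool, assessed_harm: float) -> float:
    """The holder's final position: positive = retained value, negative =
    debt owed. Unlike slashing a pre-locked stake, the liability scales
    with the assessed harm and can exceed the cert's face value."""
    return -assessed_harm if implicated else cert.face_value

# A $10k cert in a project later assessed to have caused $1M of harm leaves
# its holder owing $1M, not merely forfeiting the $10k stake.
print(settle(ImpactCert("some-agi-tooling-project", "alice", 10_000.0),
             implicated=True, assessed_harm=1_000_000.0))  # -1000000.0
```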

Though, on reflection, I'm not sure I would want to let the market decide that. The problem with markets is that they give us a max function: they're made of auctions, whoever pays most decides the price, and the views of everyone else aren't taken into account at all. Markets, in a sense, subject us to the decisions of the people with the most extreme beliefs. Eventually the ones who are extreme and wrong go bankrupt and disappear, but I don't find this very reassuring with rare catastrophic risks, which no market participant can have prior experience of. It's making me think of the unilateralist's curse.
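To make the max-function point concrete, a toy illustration with made-up numbers:

```python
# Three traders price the same cert; two of them heavily discount a tail
# risk, one doesn't. In an auction the clearing price is just the maximum
# bid, so the cautious estimates have no effect on it at all.
bids = {
    "cautious_trader": 1_000,    # prices in the tail risk
    "median_trader": 5_000,
    "extreme_optimist": 50_000,  # ignores the tail risk entirely
}
clearing_price = max(bids.values())
print(clearing_price)  # 50000
```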
So, yeah, maybe we shouldn't use market processes to price risk of negative externalities.

Impact markets may incentivize predictably net-negative projects

I should mention that the Good Exchange/impact certs people have discussed this quite a bit. I raised concerns about this issue early on here. Shortly after, I posted the question would (myopic) general public good producers significantly accelerate the development of AGI? to LessWrong.

My current thoughts are similar to harsimony's: it's probably possible to get the potential negative externalities of a job to factor into the price of the impact cert by having certs take on negative value (turn into liabilities/debts) if the negative outcomes eventuate.
We don't know exactly how to implement that well yet, though.

Impact markets may incentivize predictably net-negative projects

Traders would adopt a competitor without negative-externality mechanisms, but charities wouldn't, so there would be no end buyers there. I wouldn't expect that kind of vicious, amoral competitive pressure between platforms to play out.
