Introduction


Those of us who are willing to face issues squarely, as they are, have something distinctive in our psychological makeup. It is not common or trivial to feel that Effective Altruism is self-evidently worth pursuing... and it is even less trivial to actually engage in it. Shouldn't we use every pertinent incentive we can possibly imagine?

We should not rely only on 'spontaneous' threat awareness or noble ethical motivations; we should also offer very practical advantages, along with recreational, educational, and social ones. To me this is both achievable and necessary if we really want to give it our all.
 

Channeling impact: the dominant and the unforeseen 

 

Within a culture, those levers are exceptionally good candidates. We have to be careful about what to use, how, and when, and of course not to fall into any kind of zealotry. But we can thoughtfully upgrade our methods; if we are serious about our situation, not using every tool we have is a mistake, right?

Or do we somehow not grant these tools enough legitimacy?

Neglecting suitable tools looks like an existential error.

Incentives, functionality, originality... From online hubs and platform structures to offline initiatives, there is room to be much more creative, bolder, and more pioneering.

EA doesn't research game theory

EA doesn't grow your money

EA doesn't innovate in design or ergonomics

 

Why haven't we done that yet? 

How can we properly ask and answer this question?

 

Framework
 

- Explore communication and form to improve the circulation of information,
- Unconventional configurations facilitating the onboarding of and encounters between newly interested people,

- Smarter interactions with open-source properties,

- Horizontal peer-to-peer rewards...

 

These are some of the aspects of the subject that haven't been extensively cultivated yet (and not only in the context of EA).

 

We can design plans where people won't have to change; they will just do something fun, and it will change them. It can even be fun and rewarding while also being educational.

 

The notion of games will come up along the way at various times and in various forms; this is not to say "we shouldn't be serious". I am deadly serious: playfulness was selected by natural evolution as the most efficient tool for learning, and it is also connected to our capacity to engage deeply and purposefully in tasks.

 

As introduced earlier, I'm not just talking about aesthetics, art, and such things; I'm also talking about our social structures: how to build platforms and software that enhance our capacity to connect, act, and plan, and how to accelerate information propagation and impact conversion.

What are we not thinking?
 

One response to this could be to develop a hub for researchers, thinkers, and creators that is built like a matching app. You set up your matching requests based on any number of keywords (with synonyms and semantically related terms automatically linked to them) in order to be put directly in contact with someone thinking about what you are thinking about.

You could also post a specific request, ask for a certain set of skills, publish a gig, give and get rewards, promote an idea, highlight an issue, index, map, and blueprint our Zeitgeist.

If you want, you can start a sub-hub on a specific subject, group-match with more than one person, start a chat, and so on.
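
As a purely illustrative sketch of the matching core (everything here is hypothetical: the synonym table, member names, and threshold are invented, and a real system would use proper semantic embeddings rather than a hand-written dictionary), it could look roughly like this in Python:

```python
# Hypothetical sketch of keyword matching with crude "semantic" expansion.

SYNONYMS = {  # stand-in for real synonym / semantic-link data
    "alignment": {"ai safety", "ai alignment"},
    "game": {"play", "gamification"},
}

def expand(keywords: set[str]) -> set[str]:
    """Add synonyms / semantically linked terms to a keyword set."""
    expanded = set(keywords)
    for kw in keywords:
        expanded |= SYNONYMS.get(kw, set())
    return expanded

def similarity(a: set[str], b: set[str]) -> float:
    """Jaccard overlap between two expanded keyword sets (0 = disjoint, 1 = identical)."""
    ea, eb = expand(a), expand(b)
    return len(ea & eb) / len(ea | eb) if ea | eb else 0.0

def best_matches(me: set[str], others: dict[str, set[str]], threshold: float = 0.3):
    """Return members whose interests overlap with mine above a threshold."""
    scored = ((name, similarity(me, kws)) for name, kws in others.items())
    return sorted((pair for pair in scored if pair[1] >= threshold),
                  key=lambda pair: pair[1], reverse=True)

print(best_matches({"alignment", "game"},
                   {"alice": {"ai safety", "gamification"}, "bob": {"carbon credits"}}))
# -> [('alice', 0.333...)]
```

The point is only that 'semantic isomorphism' can start as something as crude as synonym expansion plus set overlap, and be swapped for better models later.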

Match your ideas

Match your words

Match to do shit

 

Have purpose

Have incentives

Have a road
 

[Fiverr x GitHub x Wikipedia x Reddit x Discord x Telegram x Stack Overflow] all at once.

Why not?
 

We could scale that up into a wiki_rosetta-stone_modular-toolkit. You enter a request, get results, choose a solution, and the solutions that get used are rewarded.

So basically you would enter a question or click on a general concept; propositions come out, a list of different variants of that concept pops up, and various ways to realize it appear. You click on what you want, branching further and further towards a specific, finite prototype. All the way down, you see whether and what others have already done that is similar to your path.
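
A minimal sketch of that branching structure, assuming nothing beyond a toy tree (all node names and prototypes below are invented for illustration):

```python
# Illustrative sketch: a concept tree you can drill down,
# collecting prototypes other people have already attached along the path.

from dataclasses import dataclass, field

@dataclass
class ConceptNode:
    name: str
    children: dict[str, "ConceptNode"] = field(default_factory=dict)
    prototypes: list[str] = field(default_factory=list)  # existing solutions at this node

def drill_down(root: ConceptNode, path: list[str]) -> tuple[ConceptNode, list[str]]:
    """Follow a sequence of choices from a general concept towards a specific one,
    returning the node reached and every prototype seen along the way."""
    node, seen = root, list(root.prototypes)
    for choice in path:
        node = node.children[choice]
        seen.extend(node.prototypes)
    return node, seen

# toy data: "outreach" -> "interactive tutorial" -> "alignment sandbox"
root = ConceptNode("outreach", children={
    "interactive tutorial": ConceptNode("interactive tutorial",
        children={"alignment sandbox": ConceptNode("alignment sandbox",
                                                   prototypes=["proto-sandbox-v0"])},
        prototypes=["tutorial-template"]),
})

node, seen = drill_down(root, ["interactive tutorial", "alignment sandbox"])
print(node.name, seen)  # alignment sandbox ['tutorial-template', 'proto-sandbox-v0']
```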

 

Again, if there is no solution yet, you can post a request and reward any contributor.
 

I think all this could also be used very wrongfully, and that it's only a question of time before something analogous appears. So my take is that we had better do it well and fast, optimizing it for alignment purposes and for coordination towards rationality. We can also use these sorts of mechanisms to build interactive tutorials showing why alignment is insanely critical, hard, and dangerous. We would prioritize existential threats, but it can be done with any subject. These interactive 'sandboxes' are great learning material when combined with design, narration, and thought experiments.

 

How to Play


Still, is something like playfulness or aesthetics really pertinent?

Here is a really good critique of the "EA needs a better aesthetic/vibe" remark:

https://twitter.com/Jess_Riedel/status/1532827913996931074

 

One interesting point of the argument:
"Is this just a semantic distinction? If you already interpreted the vibe critique as just a critique of the particular non-profit CEA, or of the particular humans who post on the EA forum, then yes. But not if the critique made you think something was wrong with EA the framework."
 

The value of art isn't a closed question; it's just that EA as a discipline is not supposed to work on it.

Or is it?

Is there no science nor philosophy studying art/aesthetics?

What about game theory? The science of awe?
The mechanics and powers of cultural shifts?

Since we can do meta-studies on what to study (and that's really the kind of dynamic we already have), we can also study how to use the tools we know work well, refine the old ones, and find new ones. It would even open up a lot of job opportunities, with solid economic returns.

 

The very meaning of Effective Altruism is to be effective about threats; isn't the lack of awareness and engagement a core issue? It might very well be the mother of all issues, since awareness and engagement are all we can work with: we just systematize them into specific contexts.


How does awareness arise without black swans and brutal events?

What happens when nation-level powers are panicking?
Not just states: lobbies, GAFAM, private entities...

Is it like a 'recession of rationality'?

Aren't 'non-coordination' and human 'misalignment' with rationality a major root of extinction risks?

 

The global population needs to be brought into this connection to rationality, in ways that are deep and urgent. And not just the global population: experts too.


"Risk from AI and other x-risk issues should be considered global health issues. It’s weird to me that we separate those out. I get it from the standpoint of organizational advantages and funding purposes, but a big part of public health is preparing for future risk. Yet I, someone who has studied public health for years, wasn’t aware of some of the bigger x-risks (besides pandemics) until maybe a year and a half ago."

https://twitter.com/h4yl3ylynn/status/1533119685838786560

 

I think there are many signs that we have to increase our reach, our clarity, and our transversality.


AGI + human hazard

 

Well, well.

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

 

It might be that alignment has to be done on humans too.

 

See? That's why gaming is so serious.


You don't see it yet?
When the question is how to be effective, attracting people and making global organization more fluid is absolutely impactful... Gaming is engagement.
Let's pause for a second.

 

A mental note, picked among others:

Do we appreciate that a misaligned AGI could kill not only all humans but all life on Earth as well?

How could one credibly argue that it is not a possibility?

I would call that a crime against imagination.

 

Why is it seemingly so hard to conscientiously panic about that?


We built our civilization on war, colonization, persecution, sexism, slavery; we have systematically produced errors and horrors since... since always, and now we have to face this. And the other x-risks?

 

Wait, during a climate crisis?

So we are an intrinsic perpetuation of the human-error system. Odds are that this is a Great Filter.

How are you doin'?

Personally, I can't sleep at night.

Am I a little too tense? This 'Great Filter' may just be a perceptual filter.

 

Obviously (dramatically), there are wonders in this world, but beauty doesn't erase destruction, and it surely does not erase History.
The origin of AI misalignment is human misalignment.


Maybe life is that fragile. The complexity of our universe made it possible for life and sentience to emerge, but the tremendous amount of technical intricacy, adjustment, and contingency required may leave us with only an approximation of life and sentience.

You. Me. Any other.

We need loopholes, and we can do better than natural selection.

(We sort of do that every day in the lab: outperforming 'natural' selection.)
 

That's why we should hack our thirst for ease.

Use comfort against comfort. Playfulness against distraction.
 

We are an 'interface' dealing with interface problems. Evolution has not prioritized enhancing our capacity to see reality as it is. Hence the need to hack our appetites if we want to be effectively altruistic. We need to speedrun the course of our own nature, against time.
 

Coordination

 

Related to these subjects: the question of mental health is very delicate. When dealing with the pivotal, mishandled threats EA is willing to work on, it isn't abnormal to feel down and revolted, so I don't think mental health is a truly pertinent metric of our success:
https://twitter.com/elliot_olds/status/1532851073400152064

Nonetheless, that doesn't mean it isn't concerning or that we shouldn't do something about it, especially for newcomers and for the most engaged actors in our community. Another element connected to this is the sadness of sacrificing causes like "saving children" for "AI alignment".

-> I don't think it's either/or


There are synergetic win/win/win strategies that we can try to implement in our actions.

Like energy or food supplies, our relatives (crucially, children) are intrinsic factors in our stability, survival, and well-being. To be able to face global catastrophes or mitigate increasing risks of war, those fundamentals have to be taken into account. As you can read on the 80,000 Hours website, specialization is powerful; we can rationalize and systematize positive impact while including a more diverse range of specialists.

 

It wouldn't weaken the impact of other areas; I think it's the opposite: building a powerful ecosystem in which every substructure harvests networks and resilience in synergy with the others is a coherent path forward, and it would also diversify our capacity to build returns on investment.
 

A proposal about start-up investments has been posted:

https://forum.effectivealtruism.org/posts/6GPEhPhC4byfvxdri/snowball-fund-a-low-cost-low-risk-and-high-upside-experiment#A_potential_solution__The_Snowball_Fund

 

Our future will certainly be turbulent; in such a context, investment doesn't only mean expecting returns, it means reinforcing our capacity to handle the vital challenges ahead.

 

We can maximize precisely those elements by correlating investments with the EA vision, raising funds backed by reserve assets decorrelated from the market, and so on. These things are intimately related; it's all part of the same paradigm.

 

I've seen an excellent example of this: a blockchain-based, decentralized project that finances CO2 absorption projects certified by environmental organizations.

I won't expand on this particular enterprise; it's hard to identify the most solid teams, and they are not the only ones to use carbon credits and the like, but the point is that such solutions are conceivable. Blockchain can make them more 'trustless' (trust code, not humans) and decentralized, but I want to stress that this doesn't imply an imperative to create a crypto token; we could just use the technology.

 

One fear is that money slowly corrupts a movement. That's why we can make sure to clearly write down our intentions, our structure, and our issues, and build our fund with those values structurally integrated into its mechanisms.
 

Internal design choices, a lot of transparency, optimized debate, votes, etc.

 

Even when carefully curating the start-ups we invest in, funding can still be decentralized.

You could choose a set of pools and parameters to distribute money together with people aligned with your ideals and ethics, selecting a certain type of investment, particular projects, a preferred ratio of reserve assets for insurance, etc.
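
To make the idea concrete, here is a minimal, hypothetical sketch of such a parameterized distribution; the pool names, weights, and reserve ratio are invented, and a real fund would obviously involve far more than an arithmetic split:

```python
# Purely illustrative: split a contribution across investment pools by weight,
# keeping a chosen fraction in reserve assets as an insurance buffer.

def allocate(amount: float, pools: dict[str, float], reserve_ratio: float) -> dict[str, float]:
    """Split `amount` across pools by weight, after setting aside a reserve buffer."""
    if not 0 <= reserve_ratio < 1:
        raise ValueError("reserve_ratio must be in [0, 1)")
    reserve = amount * reserve_ratio
    investable = amount - reserve
    total_weight = sum(pools.values())
    allocation = {name: investable * w / total_weight for name, w in pools.items()}
    allocation["reserve"] = reserve
    return allocation

# e.g. 60% climate projects, 40% alignment tooling, 25% kept as a decorrelated reserve
print(allocate(1000.0, {"co2-absorption": 0.6, "alignment-tools": 0.4}, reserve_ratio=0.25))
# {'co2-absorption': 450.0, 'alignment-tools': 300.0, 'reserve': 250.0}
```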
 

We can finance and grow our dexterity in bridging art with science, upgrade our capacity to correlate emotions with rationality through game-theoretic processes, and use in-game potential and data as a testing ground for research in any domain of interest.
 

Overall, identifying ways to educate our analytical intuitions and to grasp crucial concepts like exponentiality, modularity, synergy, complexity, and connectivity.
 

Last scale/Apoptosis AI:


A framework hypothesis: alignment might be a question of 'apoptosis'.

Let's imagine I have a friend, a very good one, who one day came to me and said, quote:


"I do have lots of ideas, but no way to be sure about any of them. I'm just confident that what I know makes me feel that it is the right direction. In the same way as a teen, writing all my thoughts down, I figured a concept approximation that I later discovered was related to Bayes theorem, and then Cognitive-Theoretic Model of the Universe (CTMU) of Christopher Langan, and then the Ruliad of Sephen Wolfram; I can make the history of my thoughts in my head, see all the narrations, steps, articulations, and how it's now leading me towards that. Apoptosis AI hypothesis. 

But I have no PhD, no money, no audience. I feel all this is pertinent, and I have even met smart people and specialists intrigued by this (or by another of my ideas). Yet nothing more than mild approval happens; it's as if I have no substance, and that makes me want to die.

Why can't anybody either come along with me and work on it, or convince me with articulated, systematic arguments that what I'm saying is inconsequential?"

I wouldn't want this friend to die, and I wouldn't want these ideas to receive no proper attention, because such hypotheses/proposals concern matters as crucial as existential threats, and they should have been able to make it up to the 'agora'. Although, I don't know? This 'hypothesis' may actually be so inept that we don't have to think much about it...

 

The Apoptosis AI concept is to implement the meaning of loss for each bit, each datum, each piece of information. Each has non-equivalent qualities that can't be replaced; every new bit is a new phenomenon tied to (sometimes only slightly) different potentials, structures, momentums, and causalities... So each process is apoptosis, while uncontrolled processes are a kind of cancer, eventually leading to extinction. The more complex and rare a phenomenon/system is, the more non-equivalent, and thus precious, it is. As all systems are intertwined, reality has to operate apoptosis for things to unfold optimally.


Non-equivalence is precious

-> unique data = a self-sufficient* external source of information (+ energy savings)

*In the sense of containing, at all times, unique properties and progress

Each destruction annihilates endemic potentials


It means we have to define newness, complexity, and uniqueness, which can be formalized through sets and preordered sets (prosets). It's not perfect, it's not easy; it's a hypothesis, and we can improve it, right?
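
As one possible toy illustration (not the hypothesis itself), 'newness' relative to an existing corpus can be approximated with a compression-based proxy; the corpus and example strings below are invented, and compression is only a crude stand-in for the structural formalizations the text points to (prosets, and the Assembly Index mentioned just below):

```python
# Illustrative only: how much new information an item adds on top of a corpus,
# using compressed size as a rough proxy for structural novelty.

import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

def novelty(item: str, corpus: list[str]) -> float:
    """Extra compressed size the item adds on top of the corpus
    (near 0 = fully redundant, near 1 = almost entirely new information)."""
    corpus_bytes = "\n".join(corpus).encode()
    item_bytes = item.encode()
    added = compressed_size(corpus_bytes + b"\n" + item_bytes) - compressed_size(corpus_bytes)
    return max(0.0, min(1.0, added / max(1, compressed_size(item_bytes))))

corpus = ["playfulness helps learning", "alignment is a coordination problem"]
print(novelty("playfulness helps learning", corpus))           # low: already present
print(novelty("apoptosis as a loss-aware update rule", corpus))  # higher: new content
```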

 

The Assembly Index from the recently published Assembly Theory could provide useful support:

https://t.co/qGPMcMNYXW
 

Anyway, this 'friend' has other weird thoughts, about sentience and obscure subjects like that.

Then again, not necessarily pertinent though?
 

If a potentially crucial idea hasn't surfaced enough to be purposefully contradicted, while seeming intriguing to more than zero experts, then that is already a problem.


So how do we change that? How do we bring ideas up from the bottom so that they reach a proper audience and proper debate?

We could regroup, classify, vote on, and answer each idea, categorize it within associations of concepts, and try to formalize the evaluation of originality and pertinence... In this vector-space puzzle, wouldn't it be possible to automate the generation of new ideas, using this interactive wikipedia-cartography-omniforum of concepts/issues/solutions/trials/errors to its fullest?
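
As a hedged sketch of the 'regroup and classify' step, assuming each idea already has a vector representation (the toy vectors and threshold below are invented; a real system would compute embeddings from text):

```python
# Greedy grouping of idea vectors by cosine similarity: one naive way to
# "regroup and classify" ideas that live in a shared vector space.

import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def group_ideas(ideas: dict[str, list[float]], threshold: float = 0.8) -> list[list[str]]:
    """An idea joins the first group whose seed idea it resembles enough."""
    groups: list[list[str]] = []
    for name, vec in ideas.items():
        for group in groups:
            if cosine(vec, ideas[group[0]]) >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

toy = {
    "gamified onboarding": [0.9, 0.1, 0.0],
    "playful tutorials":   [0.85, 0.2, 0.05],
    "carbon-credit fund":  [0.05, 0.1, 0.95],
}
print(group_ideas(toy))  # [['gamified onboarding', 'playful tutorials'], ['carbon-credit fund']]
```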

We have to triangulate our ignorance.

Should we combine this with a sort of... Matching app?


Let's take it relentlessly seriously.
Let's index ideas and 'seeds'.
Let people promote them.

While having enough fun that they don't die.


 



 

Note:


Misalignment is an existential risk that exists before any perfectly generalized 'AGI'. Any sufficiently capable algorithm can be a major threat. I'd say that a sentient AGI is much less likely to kill us all than a 'somewhat-AGI' + human hazard.
 

Although it's a continuum, when do qualia appear? Past human level, something exponentially smarter and more sentient than us would have to find an extremely good reason to kill the bags of data we are... I'm more concerned about humans' irrational reactions to artificial consciousness. The bad news is that AI + human hazard is already a sizeable risk right about... hmm... now. AI is by design a mechanical threat as it is, because of the logic of exponential feedback loops.


This troubles me because I'm seeing a lot of concern and debate around "sentience" and the semantics of "true generalization" (which are essential topics). But a 'stupid' feedback loop can be functional enough to produce chain reactions leading to, at the very least, civilizational collapse.

Those levels of potency should arrive much sooner than whatever we will all agree to call an AGI.

Even if it turns out that we underestimate our capacity to handle AGI and x-risks, committing ourselves to cooperation while diversifying our perspectives and expertise is significant.
 

We're trying to act upon trying to be less wrong.
 

In this dialogue, "optimists" and "pessimists" may each conclude that their contradictors are running variations of Russell's teapot (which reinforces our need to index each argument clearly).

Has our future ever been so much of a coin flip?

 

"While it is technically possible for an AGI to divert its intended parameters, what would drive it to the point of our absolute obliteration despite our considerations?"
 

Even in a favorable world, I still put my coin on some random human toying with strong AI. That's my bet: Hanlon's and Ockham's razors playing hand in hand with power. As with homemade pandemics, with little creativity the simple AI + DIY bioweapon (accidental or not) is just bound to happen. What about purposeful misalignment?

 

Billions of humans with the most accessible and mighty leverage in humankind's history.

We have to outsmart our fortune.

Smarter doesn't necessarily mean injecting "more money"; it means being structurally divergent.

The strategy is to be faster than everybody else.


 


 

PS:

 

I'm biased.

Disclaimer (an egoistic need to express something):

 

I'm nobody. 

I grew up without the internet, with nearly no technology.

I grew up in my head thinking about thinking.

And there is not a single hour when I'm not drowning in the urge to find out how to align humans with rationality.
 

I've been working on this since I was 14 years old.

(It's been 11 years now)

And I'm not able to meaningfully understand why I'm singularly struck by all this

When this must mean I'm deeply wrong

Somewhat

Somewhere

If it is that I don't make sense to you
