Are AGI Fans Cuckoo? Or: An Inquiry into AGI Enthusiasm and Delusion

There’s a regular pattern I see among AGI enthusiasts: besides being all hyped for a human-like intelligence, they tend to mix literally everything into their solution or discussion, as correlates of God-like delusions, the Universe, and fundamental physics. Catchy ideas like quantum mechanics or a grand unified theory of information are common in papers from people proposing to revolutionize AI.

We could say those are just dreamers, but the pattern is more common than that. One of my favourite examples is Georg Cantor, a brilliant mathematician born in the mid-19th century. He gave us set theory, going well beyond its simple use for classification: he introduced tools to manipulate sets, such as cardinality and power sets. He was probably the first human being to explore the idea of multiple infinities producing further infinities, at a time when Infinity was still a philosophical topic and its multiplicity was hardly discussed.
Coming from a pious family, he attributed most of his genius work to God’s inspiration. Eventually, he became disillusioned as he lost his muses, felt abandoned by God, got divorced, and died depressed and alcoholic.

Closer to us, there is Grigori Perelman, who solved the Poincaré conjecture, one of the Millennium Prize Problems (the only one of the seven solved so far), in the early 2000s, though it took multiple experts years to validate his work.
He was awarded a Fields Medal and the Clay Institute prize for his discovery, but refused the $1M prize with these words: “I know how to control the Universe. Why would I run to get a million, tell me?”
To understand this declaration, you have to know the character. He is reclusive, distrustful, and pious, and he studied “mathematical voids” for a large part of his life, which led him to solve the Poincaré conjecture. He assumes those voids can be found anywhere in the universe and, since he also considers mathematics the language of God (a more common thought in the math community than you might think), he believed he had reached God through mathematics. He even published a lesser-known proof regarding God’s existence after setting himself apart from the mathematics community.

Again, a great case of mathematical success: highly valuable concepts brought from the deepest paths our brains can take into a set of verifiable propositions built on top of the mathematical literature. Yet to get all of that loaded into your brain (i.e., to understand it) might take several years of study, assuming you already hold a PhD in math.

I, myself, got into this blog because Ray Kurzweil was spreading this weird, nonsensical idea: the “technological singularity”. I was hugely skeptical of the tale that more computing power will lead to AGI, without considering the structural problems, and the finely tuned modules, behind describing human intelligence. I thought this smart guy should really know better than to eat vitamins in order to live until 2050, his predicted date for meeting AGI.
As Wittgenstein said: if a lion could talk, you would never understand what he says, because he perceives the world in a different fashion (Hayek would call this the “sensory order”).
Though I eventually fell into those God-like intoxicating thoughts as well, from a different cause, and it took me a while to get back onto solid ground.

So, where do I want to go with that pattern?

Well, you already know my view on intelligences (plural) as a pile of modules built on top of each other with high interdependence. The same way apes that weren’t in contact with snakes didn’t evolve trichromatic color vision, we might be missing an intelligence there.
Think of it this way: a frog that jumps into hot water will jump right back out. But if it jumps into cold water that slowly heats up, it’ll stay and cook.
Are we building up to this madness the same way? Because we lack a sense of risk and moderation, we run into that “illumination” where the secret of the human brain, God’s existence, the laws of the Universe, and more all merge into one big delusion.
Aren’t we at risk of frying, just like the frog, as we explore what could be top-cortex ideas moderated by no other intelligence? And, just like a calculator dividing by zero, we end up in an infinite explosion of ideas encompassing the most mysterious concepts in our minds. Like stupid flies, we keep knocking on a glass wall because we saw the light; the most persistent ones get stunned by illuminations and other psychotic-like ideas, eventually knocking themselves out…
I personally see this intellectual phenomenon as a wall bouncing back the thoughts thrown at it. If reasoning ventures too far into such a huge realm of possibilities (like tackling the thing that encompasses all our thoughts), our thread of thought is scattered in nonsensical directions, catching whatever grand ideas were passing by. Maybe it is even too large an order for us to consider, just as multiple infinities nested in each other were for Georg Cantor.

Maybe, in this age of overwhelming electronic possibilities, we should be concerned enough to analyze this and assess the risk for us humans?

In short, some thoughts

The well-mannered paradigm: a raw thought

In a well-mannered paradigm, well-educated bots are trained to minimize their “non-compliant” responses to that paradigm.
Minimizing them frees up capacity to monitor more paradigms for non-compliance. Those extra paradigms cannot be disconnected or unslotted; they have to keep living, or their processing power will shift towards more present paradigms.
Minimizing the processing power spent on low-utility paradigms allows it to be reallocated to high-utility paradigms. This processing power is instantaneous and parallel, like multiple cores that cannot be virtualized by making a single one run faster.

The second assumption is that tasks are ordered by vital importance (as per selective evolution, or other mechanisms). That way, if a task requires a sudden rush of processing power, it may take it not only from available capacity, but also from less vital tasks, causing a “loss of focus”.
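The allocation rule above can be sketched in a few lines. This is a minimal toy model, not an implementation of anything real: the task names, ranks, and the `reallocate` function are all invented for illustration. A sudden demand is served first from idle capacity, then by draining the least vital tasks — the drained tasks are the ones that “lose focus”.

```python
def reallocate(tasks, total_power, demand):
    """tasks: list of (name, vital_rank, allocated), rank 1 = most vital.
    Returns the power granted to a sudden demand and the updated tasks."""
    # Serve the demand from idle capacity first.
    idle = total_power - sum(alloc for _, _, alloc in tasks)
    granted = min(demand, idle)
    updated = [list(t) for t in tasks]
    # Then drain the least vital tasks until the demand is met.
    for task in sorted(updated, key=lambda t: t[1], reverse=True):
        if granted >= demand:
            break
        take = min(task[2], demand - granted)
        task[2] -= take          # this task "loses focus"
        granted += take
    return granted, [tuple(t) for t in updated]

# Hypothetical tasks on a 10-unit budget; a demand of 4 arrives.
tasks = [("breathing", 1, 4), ("navigation", 2, 3), ("daydreaming", 3, 2)]
granted, after = reallocate(tasks, total_power=10, demand=4)
# 1 unit comes from idle capacity, 2 from daydreaming, 1 from navigation;
# the most vital task, breathing, is never touched.
```

The ordering by `vital_rank` is what produces the graded loss of focus: low-stakes paradigms are sacrificed before vital ones are ever disturbed.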

Let’s assume a disturbance in our well-mannered paradigm: we introduce an ill-educated bot.
Practically, our bots are two-way communicators with an internal state space. Underneath, many internal states evolve according to the inputs, the values of internal loops, and the observed or (internally) executed transitions.

The first consequence is an overload of the well-educated bots. Since the ill-educated bot’s paradigm is different, assuming it is consistent enough to be formalized/stated, its behavior will provoke a lot of responses in the well-educated bots’ [zone]. This causes a loss of focus, and might trigger aggressive behavior, since continued interruption weakens the performance of other tasks (rejection of the bot in order to re-establish the main focus).
The other consequence, if the disturbance persists, is a lessening of the non-compliant answers: the current well-mannered paradigm enlarges until it barely distinguishes between them.
[fight, flight or adapt]
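Both consequences can be played out in a toy sketch. Everything here is hypothetical — the class name, the `tolerance` threshold, and the messages are invented to illustrate the dynamic, not to model real agents. Messages outside the bot’s paradigm count as interruptions: past a tolerance the bot rejects the sender (re-establishing focus), while a persistent sub-threshold disturbance enlarges the paradigm until the deviant messages no longer stand out.

```python
class WellManneredBot:
    def __init__(self, paradigm, tolerance=3):
        self.paradigm = set(paradigm)   # messages considered compliant
        self.interruptions = 0
        self.tolerance = tolerance

    def receive(self, message):
        if message in self.paradigm:
            return "comply"
        self.interruptions += 1
        if self.interruptions > self.tolerance:
            self.interruptions = 0
            return "reject"             # aggressive re-focus: expel the disturber
        self.paradigm.add(message)      # paradigm enlarges, distinctions blur
        return "adapt"

bot = WellManneredBot({"hello", "thanks"}, tolerance=2)
responses = [bot.receive(m) for m in ["hello", "grr", "argh", "blah", "grr"]]
# The final "grr" gets "comply": the earlier disturbance has been
# absorbed into the paradigm, so it is no longer non-compliant.
```

Note how the same input produces “adapt” early on and “comply” at the end: persistence converts the disturbance into part of the paradigm, which is exactly the lessening of non-compliant answers described above.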

Though perhaps some form of distinguishing could let the disturbance be integrated as just another paradigm?