Uncategorized

From Cynical to Reasonable

I started that journey five and a half years ago, I believe? Or was it six?

Early on, I was caught by the IBM Watson showcase, and, as I was looking for a moonshot project, the conclusion was easy: “I’ll be the one making the next significant step in A.I.”

Well, I quickly figured out those deep neural nets were just statistical classifiers. I started to look for a deeper truth: starting from neurobiology, then moving erratically between math, philosophy, psychology and back to neurology… Really, it was just wandering around in search of a deeper truth that I could add to the AI stack, with or without deep learners.

But math is a selective creation built on deeper truths; neurobiology is just description; psychology is built on shifting sand, making it hard to distinguish speculation from truth (e.g. the Stanford prison experiment) or to trust its metrics; neurology doesn’t know what to measure or what to account for, and is still trying to find the most suitable tools; and philosophy has been solved by Ludwig Wittgenstein in a Gödel’s-incompleteness fashion.

Well, here are my conclusions: I have no grand truth to seek; there’s no big answer or great principle. Everything is but a bunch of aggregated adaptable functions, with a quasi-deterministic architecture that gives everyone a close enough similarity that they can develop language and express ideas on common assumptions.

Within this paradigm, we can make an intelligence that understands humans. The real challenge of AGI is to copy an unknown architecture from a large variety of samples and a large space of possibilities.

Without this paradigm, we can make plenty of new intelligences, but with no hope of communicating efficiently with them and, therefore, of bending their behavior in a highly useful manner.

 

As I’ve seen so many great modern discoveries made while trying to explain the human brain (logic gates, truth tables, automata, the Turing machine, video game UIs, etc.), I still took the journey of trying, myself, to produce something great while rationalizing the mind into principles matching my intellectual capacities (a.k.a. chasing the dragon).

I’ve produced many interesting reflections, but no great invention. I still try to see the peak of this problem, but I obviously cannot grasp it.
That’s because the solution to this problem is encoded in a higher language, as complex, incompressible and diverse as it can be: this language is the structure and the dynamics of the brain itself.

We’re still at least one evolution away from grasping that problem: so far, no one can register a full brain architecture as a projection in each functional part of their own brain, and guarantee that those projections are sufficient to describe a brain.

So, yeah, I don’t really feel like solving that problem. Even helped by tools like cloud quantum computers, I’m not the one who will ever deliver the solution to AGI, and that has never been the point of this blog.
But being a cynical person looking for some nice fish while hunting the mythical one is really no longer how I feel I should spend my time.

It finally clicked together, after so many posts talking about a development platform: I won’t make AGI, but I can make the tools to empower great thinkers with ready-to-use simulations; to compare and advertise the best results and the people behind those results; to integrate widely with today’s electronics; to provide a gamified experience of what AGI development should be, with its large space of possibilities and a sense of wonder; to create connection and a community mindset around the AGI question; and, even if we fail to make it, to have something to pass on to the next generation and keep the passion burning.

 

So there it is… I’m gonna put together this blog, and mostly a lot of personal notes, to try to produce this AGI platform. It is at a draft stage, so it’s still gonna take a long time before a first release.
After all, the Wright brothers didn’t make it because they were the most brilliant engineers, but because they had a huge fan and strings to quickly test their ideas.

Math, Thoughts

Growing Relations, Agents, and Intelligence

By analyzing a behavior, from its inputs to its outputs, using metrics, we can write down a timeline mapping of that behavior.

Making a timeline mapping is the dumbest way to analyze a behavior; it’s also the first step.
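As a side note, here’s a minimal sketch, in Python, of what recording such a timeline mapping could look like, assuming we can poll the behavior at a fixed period; observe_input and observe_output are hypothetical probes, not anything defined above:

    import time

    def record_timeline(observe_input, observe_output, duration_s=10.0, period_s=0.1):
        """The dumbest analysis possible: log (timestamp, input, output) triples."""
        timeline = []
        t0 = time.time()
        while time.time() - t0 < duration_s:
            timeline.append((time.time() - t0, observe_input(), observe_output()))
            time.sleep(period_s)
        return timeline

    # timeline = record_timeline(read_sensor, read_actuator)  # hypothetical probes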

Refining a relation

Through mathematics, and even beyond it, there exist many ways to compress a timeline mapping of inputs against outputs. This can start with a reduction of time: as a parameter, as a Bayesian causation, as a stream for statistical analysis, etc.

Maybe your observed behavior is independent of time? Then, is it deterministic or non-deterministic? Is it symmetric? Is it bijective? Is it linear?

I mean, really, the space of classifications of relations between inputs and outputs is quite large, and it can become as abstract or specialized as the mathematician’s imagination goes. When you first dig into those kinds of domains, diversity exists… ad nauseam!
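Some of these questions can even be checked mechanically against a recorded timeline. A rough sketch, assuming the logged inputs and outputs are discrete and hashable:

    def is_deterministic(timeline):
        """Each observed input always maps to the same output (ignoring time)."""
        seen = {}
        for _, x, y in timeline:
            if x in seen and seen[x] != y:
                return False
            seen[x] = y
        return True

    def is_injective(timeline):
        """Distinct inputs never share an output (necessary for bijectivity)."""
        origin = {}
        for _, x, y in timeline:
            if y in origin and origin[y] != x:
                return False
            origin[y] = x
        return True

    def is_symmetric(timeline):
        """Whenever (x, y) was observed, (y, x) was observed too."""
        pairs = {(x, y) for _, x, y in timeline}
        return all((y, x) in pairs for x, y in pairs)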

The compressed relation

So that’s cool: we already have a huge library of what functions can be.
In practice, though, some methods can make parameterized functions give good approximations; those play on the function parameters (gradient descent, Ziegler-Nichols, …) and on the encoding space of the I/O (Fourier transform, Z transform, …).
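To make the first family concrete, here’s a toy gradient descent fitting the recorded timeline with a linear parameterized function y ≈ a·x + b; the learning rate and epoch count are arbitrary, and this only works if the relation is roughly linear and numeric:

    def fit_linear(timeline, lr=0.01, epochs=500):
        """Gradient descent on the parameters of y ≈ a*x + b."""
        a, b = 0.0, 0.0
        for _ in range(epochs):
            for _, x, y in timeline:
                err = (a * x + b) - y   # prediction error on this sample
                a -= lr * err * x       # gradient of the squared error w.r.t. a (factor 2 folded into lr)
                b -= lr * err           # gradient of the squared error w.r.t. b
        return a, b

    # a, b = fit_linear(timeline)   # the compressed relation, as two parameters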

Outside the scope of perfect-precision mathematics, real-world technologies are designed with reasonable approximations over metrics. Expressing a behavior with a well-compressed relation is always a success, but it never means we can conclude our relation perfectly fits the observed behavior. Even if precision is good enough, we can never consider that the behavior will always be predicted by the system.

But we do technology, so we’ll make approximations that suit our assumptions, to avoid being stuck on a philosophical idea of perfection.
It means we have to care about both the metrics and the fit of the applied relation, up to a reasonable limit.
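One way to make “up to a reasonable limit” concrete is to accept a compressed relation once it predicts most of the log within a tolerance; both thresholds below are arbitrary choices:

    def fits_well_enough(timeline, predict, tolerance=0.05, coverage=0.95):
        """Accept the relation if it predicts enough samples within tolerance."""
        hits = sum(1 for _, x, y in timeline if abs(predict(x) - y) <= tolerance)
        return hits / max(len(timeline), 1) >= coverage

    # fits_well_enough(timeline, lambda x: a * x + b)   # using the fit from above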

That way, there’s no need to know everything, but approximation comes with a trade-off: specialization. So most of us assume it’s fair to consider that there exist some generic rules leading to intelligent behavior: rules linking general, high-level, abstract points of view (e.g. fairness, love, round, blue, accent, etc.) to specific, low-level, detailed points of view (e.g. logic circuits, pattern recognition, graph representations, design patterns, periodicity, causality assumptions, arbitrary mappings, etc.).

But is there really any simple set of rules?

There comes the Concept

From good-enough approximations of relations, based on highly customized parameters and well-described spaces of inputs and outputs, we could theoretically mimic any behavior, or at least brute-force it. There’s no objective test (meaning one expressing measurable expectations) that we cannot put metrics on and figure out an approximate mapping for.

The issue remains: it’s single-task AI, not AGI.
But if this mimicking machine is capable of developing and selecting multiple ways to produce high-level outputs from its low-level inputs, those can start competing with each other, and a context orchestrator will have to apply a selective intelligence to draw a conclusion out of the myriad of processes that respond, or don’t.

That’s the huge work of AGI: generating a proper focus to get the information relevant to the context, and switching actions and goals as you switch contexts.
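A very rough sketch of that orchestration, with every name hypothetical: each process answers a stimulus with a response and a confidence (or stays silent), and the orchestrator keeps the answer most relevant to the current context:

    def orchestrate(processes, stimulus, context):
        """Select one conclusion among the processes that chose to respond.

        processes: list of (name, callable) pairs, each callable returning
                   (response, confidence) or None.
        context:   dict mapping process names to a relevance weight.
        """
        best, best_score = None, float("-inf")
        for name, process in processes:
            answer = process(stimulus)
            if answer is None:               # this process has nothing to say here
                continue
            response, confidence = answer
            score = confidence * context.get(name, 0.0)
            if score > best_score:
                best, best_score = response, score
        return best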

In the opposite direction, the high-level concept should lead to low-level actions. So the high-level idea should be convertible into a low-level sequence of commands for the motors.

This dense mapping leading to a high-level instance, an integration of low-level sensor inputs and a derivation to low-level motor outputs, is called the concept. It contains its input and output domains, its high-level states and its experience of its own processing: local relations (caused by memories), a distribution (if statistics are required), priors (if Bayes is required), a latent space (for generation and operations), etc.
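As a data structure, a concept could then carry something like the fields listed above; this is only a sketch and every field name here is my own:

    from dataclasses import dataclass, field
    from typing import Any, Callable, Dict, List

    @dataclass
    class Concept:
        """Ties a high-level instance to its low-level I/O (sketch only)."""
        name: str
        input_domain: List[str]        # low-level sensors it integrates
        output_domain: List[str]       # low-level motors it can drive
        states: List[Any] = field(default_factory=list)               # high-level states
        relations: Dict[str, Callable] = field(default_factory=dict)  # local relations (memories)
        priors: Dict[Any, float] = field(default_factory=dict)        # if Bayes is required
        latent: List[float] = field(default_factory=list)             # latent space, if generative

        def integrate(self, sensors: Dict[str, Any]) -> Any:
            """Low-level inputs -> high-level instance (left unimplemented)."""
            raise NotImplementedError

        def derive(self, state: Any) -> Dict[str, Any]:
            """High-level state -> low-level motor commands (left unimplemented)."""
            raise NotImplementedError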

The sigma machines

So a concept is this specific entanglement that allows us to grasp a general idea from its details: a character keeps its meaning despite the font, ambient light, inclination, style, color, etc. It stays the same concept.

And, as a concept corresponds to multiple I/O configurations, there might be multiple ways to implement a high-level concept over a low-level I/O.
For instance, I want to match the concept of “blue”, but I don’t have a specific I/O that fills that need. Though, after some observation, I notice I have a low-level input responding to the concept of “dark blue”, and another one responding to the concept of “light blue”.

The simplest Turing machine I can think of is just an OR operation on those two inputs to produce a good approximation of “blue” detection.
If my inputs were less convenient, though, that problem could become much harder to solve, and a conditional mix of relations could be used instead. Each of these solutions that produces the expected response, under approximation, is a sigma machine.
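Written down, with dark_blue and light_blue standing for those two hypothetical low-level detectors, that sigma machine is just:

    def blue_sigma_machine(dark_blue: bool, light_blue: bool) -> bool:
        """Approximate the "blue" concept as an OR of two narrower detectors."""
        return dark_blue or light_blue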

Of course, these sigma machines can vary in precision, resolution, relevance, border errors, domain distribution, bandwidth, etc. Therefore, getting the contextual information right matters for using the correct sigma machine for the job. But if one can be proven the most efficient at encompassing a concept, it is the simplest Turing machine of the concept.

Could there be a simplest Turing Machine of all the Concepts?

And that’s where I wanted to arrive: if a concept can describe a general view of anything, could there be an unknown concept of all the concepts?

Because, if such a thing exists, one of its sigma machines could lead us to get all the concepts right from the beginning. Some sort of all-knowing algorithm that should lead us to the god-like singularity, right?

Well, to get to the singularity, we need at least the concept encompassing all the concepts: a set of rules applying to all those concepts’ existence. So could we plot a space of concepts against something higher than I/O? (A sort of logic applied to each concept.)

I thought I could easily prove there’s no such thing as the concept that encompasses the space of concepts, but I cannot prove or disprove this at this point… I guess I will have to come back to that topic later.

Though, really, if you have suggestions, please leave them in the comments to open the debate; I’d be curious to see if others think they can prove this.

 

And, as a bonus, here’s the reasoning that inspired this article:

The conceptual space is the space containing all the concepts. Now suppose there exists a concept that is not part of it; if the conceptual space encountered that element, it would start by wondering whether this element is a concept. As per the premise, it’s already identified as a concept, so it’s already part of the conceptual space. Therefore, the conceptual space encompasses all the concepts.