
Growing Relations, Agents, and Intelligence

By analyzing a behavior, from its inputs to its outputs, using metrics, we can write down a timeline mapping of that behavior.

Making a timeline mapping is the dumbest way to analyze a behavior; it is also the first step.
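To make this concrete, here is a minimal sketch (Python, with invented sensor readings) of what such a timeline mapping could be: nothing more than an ordered list of timestamped input and output metrics.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    t: float        # timestamp of the measurement
    inputs: dict    # metrics measured on the behavior's inputs
    outputs: dict   # metrics measured on the behavior's outputs

# A timeline mapping is just an ordered list of such records.
timeline = [
    Observation(t=0.0, inputs={"light": 0.2}, outputs={"pupil_radius": 3.9}),
    Observation(t=0.1, inputs={"light": 0.8}, outputs={"pupil_radius": 2.1}),
    Observation(t=0.2, inputs={"light": 0.5}, outputs={"pupil_radius": 2.8}),
]
```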

Refining a relation

Through mathematics, and still beyond it, there exist many ways to compress a timeline mapping of inputs against outputs. This can start with a reduction of time itself: treating it as a parameter, as a Bayesian causal link, as a stream for statistical analysis, etc.

Maybe your observed behavior is independent of time? Then, is it deterministic or non-deterministic? Is it symmetrical? Is it bijective? Is it linear?
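As an illustration, here is how a couple of those properties, determinism and injectivity, could be tested directly on a recorded list of (input, output) pairs; the helpers below are hypothetical, not part of any standard library.

```python
# Assumed representation: a relation observed as (input, output) pairs,
# e.g. pairs = [(0, 1), (1, 3), (0, 1)].

def is_deterministic(pairs):
    """Each input is always observed with the same output."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

def is_injective(pairs):
    """No two distinct inputs are observed with the same output."""
    inverse = {}
    for x, y in pairs:
        if y in inverse and inverse[y] != x:
            return False
        inverse[y] = x
    return True

# A finite observed relation looks bijective (onto its own set of outputs)
# when it is both deterministic and injective.
```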

I mean, really, the space of classifications of relations between inputs and outputs is quite large, and it can become as abstract or as specialized as the mathematician’s imagination goes. When you first dig into those kinds of domains, diversity exists… ad nauseam!

The compressed relation

So that’s cool, we already have a huge library of what functions can be.
Though, in practice, some methods can make parameterized functions give good approximations; those play on the function parameters (gradient descent, Ziegler-Nichols, …) and on the encoding space of the I/O (Fourier transform, Z transform, …).
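As a toy illustration of the parameter-tuning side, here is a plain gradient descent fitting a parameterized relation y ≈ a·x + b to a handful of invented observations; the data and learning rate are arbitrary.

```python
# Toy gradient descent on a parameterized relation y ≈ a*x + b.
data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]

a, b, lr = 0.0, 0.0, 0.02
for _ in range(2000):
    # Gradients of the mean squared error with respect to a and b.
    grad_a = sum(2 * (a * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (a * x + b - y) for x, y in data) / len(data)
    a -= lr * grad_a
    b -= lr * grad_b

print(f"fitted relation: y ≈ {a:.2f}*x + {b:.2f}")  # roughly y ≈ 1.9*x + 1.1
```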

Outside the scope of perfect-precision mathematics, real-world technologies are designed with reasonable approximations over metrics. Expressing a behavior with a well-compressed relation is always a success, but it never means we can conclude that our relation perfectly fits the observed behavior. Even if the precision is good enough, we can never consider that the behavior will always be predicted by the system.

But we do technology, so we’ll make approximations that suit our assumptions, to avoid being stuck chasing a philosophical idea of perfection.
It means we have to care about both the metrics and the fit of the applied relation, up to a reasonable limit.

That way, there’s no need to know everything, but approximation comes with a trade-off: specialization. So most of us assume it’s fair to consider that there exist some generic rules leading to intelligent behavior: rules linking general, abstract, high-level points of view (e.g. fairness, love, round, blue, accent, etc.) to specific, detailed, low-level points of view (e.g. logic circuits, pattern recognition, graph representations, design patterns, periodicity, causality assumptions, arbitrary mappings, etc.).

But is there really any simple set of rules?

There comes the Concept

From good-enough approximations of relations, based on highly customized parameters and well-described spaces of inputs and outputs, we could theoretically mimic any behavior, or at least brute-force it. There is no objective test, meaning one that expresses measurable expectations, that we cannot put metrics on and figure out an approximate mapping for.

The issue remains: this is single-task AI, not AGI.
But if this mimicking machine is capable of developing and selecting multiple ways to produce high-level outputs from its low-level inputs, those can start competing with each other, and a context orchestrator will have to apply a selective intelligence to draw a conclusion out of the myriad of processes that respond, or don’t.

That’s the huge work of AGI: generating a proper focus to gather the information related to the context, and switching actions and goals as the context switches.
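A very rough sketch of what such an orchestrator could look like, assuming each specialized process answers with a (response, confidence) pair or stays silent; every name below is hypothetical.

```python
# Hypothetical sketch: specialized processes respond (or not) to a stimulus,
# and a context orchestrator selects among the competing answers.
def orchestrate(processes, stimulus, context):
    candidates = []
    for process in processes:
        answer = process(stimulus)          # may return None if it has nothing to say
        if answer is not None:
            response, confidence = answer
            # Weight each answer by how relevant its process is in the current context.
            relevance = confidence * context.get(process.__name__, 0.0)
            candidates.append((relevance, response))
    if not candidates:
        return None                         # no process responded at all
    return max(candidates, key=lambda c: c[0])[1]
```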

In the opposite direction, the high-level concept should lead to low-level actions. So the high-level idea should be convertible into a low-level sequence of commands for the motors.

This dense mapping leading to a high-level instance, integrating low-level sensor inputs and deriving low-level motor outputs, is called the concept. It contains its input and output domains, its high-level states, and the experience accumulated over its processing: local relations (coming from memories), distributions (if statistics are required), priors (if Bayes is required), a latent space (for generation and operations), etc.
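To fix ideas, such a concept could be sketched as a plain container; the field names below are my reading of the list above, not a definitive structure.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    # Domains the concept integrates from and derives to.
    input_domain: set = field(default_factory=set)      # low-level sensor inputs
    output_domain: set = field(default_factory=set)     # low-level motor outputs
    # High-level states the concept can take.
    states: set = field(default_factory=set)
    # Experience accumulated over its processing.
    local_relations: dict = field(default_factory=dict)  # relations built from memories
    distribution: dict = field(default_factory=dict)     # statistics, if required
    priors: dict = field(default_factory=dict)            # Bayesian priors, if required
    latent: list = field(default_factory=list)             # latent space for generation and operations
```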

The sigma machines

So a concept is this specific entanglement that allows us to grasp a general idea from its details: a character keeps its meaning despite the font, ambient light, inclination, style, color, etc. It stays the same concept.

And, as a concept corresponds to multiple I/O configurations, there might be multiple ways to implement a high-level concept onto a low-level I/O.
For instance, say I want to match the concept of “blue”, but I don’t have a specific I/O filling that need. Though, after observation, I notice I have one low-level input responding to the concept of “dark blue”, and another one responding to the concept of “light blue”.

The simplest Turing machine I can think of is just an OR operation on those two inputs to produce a good approximation of “blue” detection.
If my inputs were less convenient, though, the problem could have become much harder to solve, and a conditional mix of relations could have been used. Each of these solutions, which can produce the expected response under approximation, is a sigma machine.
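Here is a minimal sketch of that first sigma machine; the two low-level detectors and their thresholds are invented for the example.

```python
# Two low-level inputs that already respond to narrower concepts.
def detects_dark_blue(pixel):   # pixel = (r, g, b), values in [0, 1]
    r, g, b = pixel
    return 0.5 < b < 0.7 and b > r and b > g

def detects_light_blue(pixel):
    r, g, b = pixel
    return b >= 0.7 and b > r and b > g

# The simplest sigma machine for "blue": an OR over the two existing detectors.
def detects_blue(pixel):
    return detects_dark_blue(pixel) or detects_light_blue(pixel)

print(detects_blue((0.1, 0.2, 0.9)))  # True
print(detects_blue((0.9, 0.2, 0.1)))  # False
```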

Of course, sigma machines can vary in precision, resolution, relevancy, border errors, domain distribution, bandwidth, etc. Therefore, getting the contextual information right is important for picking the correct sigma machine for the job. But if one can be proven the most efficient at encompassing a concept, it is the simplest Turing machine of that concept.

Could there be a simplest Turing Machine of all the Concepts?

And that’s where I wanted to arrive: if a concept can describe a general view of anything, could there be an unknown concept of all the concepts?

Because, if such a thing exists, one of its sigma machines could lead us to get all the concepts right from the beginning. Some sort of all-knowing algorithm that should lead us to the god-like singularity, right?

Well, to get to the singularity, we need at least the concept encompassing all the concepts: a set of rules applying to the existence of all those concepts. So could we plot a space of concepts with respect to something higher than I/O (a sort of logic applied to each concept)?

I thought I could easily prove there’s no such thing as a concept that encompasses the space of concepts, but I can neither prove nor disprove it at this point… I guess I will have to come back to that topic later.

Though, really, if you have suggestions, please leave them in the comments to open the debate; I’d be curious to see whether others think they can prove it.

 

And, as a bonus, here’s the sentence that inspired this article:

The conceptual space is the space containing all the concepts. Now, suppose there exists a concept that is not part of it; if the conceptual space encountered that element, it would start by wondering whether this element is a concept. As per the premise, it is already identified as a concept, so it is already part of the conceptual space. Therefore, the conceptual space encompasses all the concepts.
