SERF

Initial status #2

Last time, we ended up with the nice concept of splitting Semantic and Experience from the data into self-consistent micro-units, but got blocked by the growing complexity of those units. Stuck on this problem, I had to approach it from another angle.

For the AGI folks and others: let’s take driving, for instance; we all naturally act as regulators in a closed-loop system. The lanes are marked in a certain way and we’re expected to stay between the lines; the visual inputs confirm the driving outputs. To act this way, we need to specialize parts of our bodies for a given scope of functionalities, or behaviors, but we also need to do the same for the visual inputs, as we keep a vigilant watch on the road, which we usually call Focus.

To accomplish this, the data flows and data structures are modeled as a closed-loop system. I therefore introduced a distinction between data logic and processing logic; only the first is in the scope of this approach, while the second is externalized to scripts, services or system calls.

Considering how to model a closed-loop regulated system with simple units, I made them modular and bidirectional. How could they be used to represent the data? Well, there are some MetaData (like the encoding, or a bit of semantics) and the corresponding Content (bits that make sense given a decoding pattern), and the data needs to be moved from a low level of understanding (the unit is mostly content) to a high level of understanding (the unit is mostly metadata), like a digestion process for information.

This way, we expect to achieve a continuous transformation from experience (almost only content) to semantics (almost only metadata). Instead of focusing on a dual classification, we want simple units that achieve the state shift by building up the dataflow transformation, and we get as many end points as we have units, formalizing this idea in a simple and modular architecture.
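
To make the digestion image concrete, here is a minimal sketch of what that shift could look like along a pipeline; the field contents are entirely made up, but each step moves information out of the raw content and into the metadata.

```python
# Hypothetical stages of the "digestion": early units are mostly content
# (experience), late units are mostly metadata (semantics).
steps = [
    {"metadata": {},                                     "content": b"\xff\xd8raw sensor bytes"},
    {"metadata": {"encoding": "ascii"},                  "content": b"RGB(255,0,0)"},
    {"metadata": {"encoding": "ascii", "type": "color"}, "content": b"255,0,0"},
    {"metadata": {"type": "color", "name": "red"},       "content": b""},
]

for i, unit in enumerate(steps):
    print(f"step {i}: {len(unit['metadata'])} metadata fields, "
          f"{len(unit['content'])} content bytes")
```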

In an autoencoder way of thinking, to keep bidirectionality and consistent conversion, we have to think of the error as the difference between what goes into our data buffer unit and what comes back out of the output unit. An externalized logic ensures the conversion, but data consistency requires other blocks, such as one for the Error calculation and one acting as the Controller of the system. As this is a process which requires data orchestration (basically clock binding), a controller for each of the two units is also required, and these need to exchange information with the process controller.
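
As a hedged sketch of that loop (the names below are my own placeholders, and the "externalized" conversions are stubbed by plain functions standing in for scripts, services or system calls), the data logic lives in the unit while the processing logic stays outside:

```python
from dataclasses import dataclass, field


@dataclass
class Unit:
    """Data logic only: a metadata part and a content part."""
    metadata: dict = field(default_factory=dict)
    content: bytes = b""


def externalized_encode(unit: Unit) -> Unit:
    """Stub for the externalized processing logic (forward conversion)."""
    return Unit(metadata={**unit.metadata, "encoded": True},
                content=unit.content[::-1])  # placeholder transformation


def externalized_decode(unit: Unit) -> Unit:
    """Stub for the inverse conversion (backward direction)."""
    meta = {k: v for k, v in unit.metadata.items() if k != "encoded"}
    return Unit(metadata=meta, content=unit.content[::-1])


def error_block(buffer_unit: Unit, roundtrip: Unit) -> int:
    """Error calculation: what went in versus what comes back out."""
    return sum(a != b for a, b in zip(buffer_unit.content, roundtrip.content))


class ProcessController:
    """Binds both unit controllers to a common clock tick."""

    def tick(self, buffer_unit: Unit) -> int:
        output_unit = externalized_encode(buffer_unit)  # buffer -> output
        roundtrip = externalized_decode(output_unit)    # output -> buffer
        return error_block(buffer_unit, roundtrip)      # consistency check


print(ProcessController().tick(Unit({"encoding": "utf-8"}, b"hello")))  # -> 0
```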

Did we just shift the complexity? At this point, the system is made only of units, abstractions of externalized process calls, and data logic attributed to units. We have an orchestrator based on a process representation, and encapsulated data fields; but we can apply logic to those fields, so as to complexify the behavior of our units.

For instance, we can define the DataBuffer with a MetaData;XML field and a Content;SystemFormat field, and expect the DataOutput unit to be defined by MetaData;FirstOrderLogic, BusinessRules;ImperativeProgram and Content;CustomFormat fields. We orchestrate, as we know what we are calling, and we have a contract on what we pass and what we expect.
Also, assuming we have backpressure, the controllers can be considered independent.
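
One possible reading of that contract, as a sketch: each unit declares its fields as Name;Format pairs, and the orchestrator refuses to wire a call whose declared fields do not match what the caller expects. The check_contract helper is hypothetical.

```python
DATA_BUFFER = {"MetaData": "XML", "Content": "SystemFormat"}
DATA_OUTPUT = {"MetaData": "FirstOrderLogic",
               "BusinessRules": "ImperativeProgram",
               "Content": "CustomFormat"}


def check_contract(provided: dict, expected: dict) -> None:
    """The contract on what we pass and what we expect back."""
    missing = expected.keys() - provided.keys()
    if missing:
        raise ValueError(f"fields not provided: {missing}")
    for name, fmt in expected.items():
        if provided[name] != fmt:
            raise ValueError(f"{name}: got {provided[name]}, expected {fmt}")


# We orchestrate because we know what we are calling:
check_contract(DATA_OUTPUT, {"MetaData": "FirstOrderLogic"})  # passes silently
```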

What do we have here? A Unit that encapsulates different DataLogics, which are orchestrated and encoded for an externalized process and, finally, decoded and dispatched to other units.

In short, we’ve got a unit which produces, at event time, an instance of the dataflows it receives, constrained by data logic attributes that contain their own logic rules; a map then connects those fields’ output flows, through a transition logic (weight, threshold, event, …), to something else. This something else is either another unit’s attribute or a process logic which uses an externalized, or system-defined, conversion process from a unit A to a unit B, enforced by the contract.
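
Very schematically, the map with its transition logic could be read as follows; the weight/threshold gating and the sinks (another unit’s attribute, or a call into a conversion process) are assumptions about one possible implementation.

```python
from typing import Callable

# (weight, threshold, sink): the sink is either another unit's attribute
# setter or a call into an externalized conversion process.
Transition = tuple[float, float, Callable[[float], None]]


def dispatch(field_value: float, transitions: list[Transition]) -> None:
    """Route one field's output flow through the transition map."""
    for weight, threshold, sink in transitions:
        if field_value * weight >= threshold:  # the transition logic fires
            sink(field_value)


unit_b_inbox: list[float] = []
dispatch(0.8, [(1.0, 0.5, unit_b_inbox.append),  # fires: 0.8 >= 0.5
               (1.0, 0.9, print)])               # silent: 0.8 <  0.9
print(unit_b_inbox)  # [0.8]
```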

As I was wondering how far this could be abstracted while still being implementable, many ideas emerged. How far could the enforcement of fields be taken? Could we use AI to go as far as matching face structure? How could the logic rules be implemented efficiently, knowing everything that can be expressed in the various abstractions of mathematical logic languages? Should I go for some metagrammar, like Backus-Naur Form, to define data logics? Or some state machine process?
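
If the metagrammar route were taken, a declaration like MetaData;XML could be pinned down by a few BNF rules; the grammar below is only an assumption about what such declarations might look like, checked here with an equivalent regular expression.

```python
import re

GRAMMAR = """
<declaration> ::= <field-name> ";" <format>
<field-name>  ::= <word>
<format>      ::= <word>
<word>        ::= <letter> | <letter> <word>
"""

DECLARATION = re.compile(r"^[A-Za-z]+;[A-Za-z]+$")  # regex form of GRAMMAR

for spec in ("MetaData;XML", "Content;SystemFormat", "not a declaration"):
    print(spec, "->", bool(DECLARATION.match(spec)))
```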

Well, this was still confusing to me, so I used those building blocks to try something else. I wanted to see if the weights could be used to model a general Bayesian/frequentist treatment of a situation, using the relative clock times of the units to simulate the weights.

We have all those units, each producing an image at its own clock period, or event-based, and we need to find which value is the most probable one.
For instance, take defining the name of a color. Each unit outputs a given value (a color), and a central operator, winner-takes-all, ensures only one value is allowed to pass. If the threshold is count-sensitive, we can think of it this way: during two clock periods of the target unit’s buffer, a period which should then be much larger than that of the cluster of other units, different images will be generated. The image type with the biggest count at the end of that clock period is, through the WTA mechanism, the one that will be transferred to the input buffer of the target unit.

The weights will be handled by an externalized logic treatment that increases or decreases the clock period according to the importance of the unit. Then we have the part common to the Bayesian and frequentist approaches: the threshold mechanism, applied after a logic has weighted the values.
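
A minimal simulation of this clock-period weighting, with made-up colors and periods: each source unit fires once per period, the target buffer collects over a much longer period, and the winner-takes-all keeps the most frequent image. A shorter period means more emissions, hence more weight.

```python
from collections import Counter

# (value, clock_period): an externalized logic would tune these periods
# to reflect the relative importance of each unit.
units = [("red", 2), ("green", 3), ("blue", 5)]

TARGET_PERIOD = 30  # the target buffer's period, much larger than the others

buffer = Counter()
for t in range(TARGET_PERIOD):
    for value, period in units:
        if t % period == 0:      # this unit fires on its own clock
            buffer[value] += 1

winner, count = buffer.most_common(1)[0]  # winner-takes-all
print(buffer)  # Counter({'red': 15, 'green': 10, 'blue': 6})
print(winner)  # 'red': transferred to the target unit's input buffer
```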

Though we see the limits of the model toolbox here: the result does not fully match expectations, we’ve lost the pattern from the beginning by refusing complexity, and we get strange behaviors, such as units weighted by an externalized process that outputs the clock period.

That gives us a new kind of toolbox where units have a varying clock for both the in-buffer and the out-buffer, which would allow a gain in the image to be modulated.

We also have those new objects called “Connectors”, which abstract the business logic but have both an input and an output, while our units now have an input and/or an output (according to clock references) and abstract the data logic.
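
As a rough interface sketch of this revised toolbox (hypothetical names and fields): a Unit abstracts the data logic and may expose an input and/or output buffer, each on its own clock, while a Connector abstracts the business logic and always has both ends.

```python
class Unit:
    """Data logic holder. The in and out buffers may run on different
    clocks; the ratio of the two periods acts as a gain on the image flow."""

    def __init__(self, in_period=None, out_period=None):
        self.in_period = in_period    # None: this unit has no input buffer
        self.out_period = out_period  # None: this unit has no output buffer


class Connector:
    """Business logic abstraction: always has both an input and an output."""

    def transfer(self, image):
        # Stand-in for the externalized business logic between two units.
        return image
```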

Defining a rich enough toolbox while keeping to a single unit of abstraction seems, from the perspective of our initial goals, to lead nowhere much. Maybe already existing patterns would be a better gateway to summarize our approach, our goals and the expected results?
