SERF

Initial status #1

It’s been a while since I started searching for an approach that would split the formal and the informal inside a machine (a rough sketch of both sides follows the list):

  • What could be caught by Semantic: what we agree to define strictly between ourselves (made of a pronunciation, a written form, a lexical field and a certain pattern of interpretation that multiple individuals can agree upon)
  • What could be caught by Experience: things that can’t be expressed, or only partially, with common nouns and quantifiers, usually because others lack the shared experience needed to agree on a semantic

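To make the split concrete, here is a minimal sketch of the two sides; the class names and fields (SemanticEntry, ExperienceTrace, and everything inside them) are illustrative assumptions of mine, not a fixed design:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class SemanticEntry:
    """The formal side: what several individuals can strictly agree upon."""
    pronunciation: str                  # spoken form
    written_form: str                   # written form
    lexical_field: set[str]             # related words
    interpretation_pattern: str         # the shared pattern of interpretation


@dataclass
class ExperienceTrace:
    """The informal side: content that resists common nouns and quantifiers."""
    instant: float                                             # when it was lived
    raw_content: Any                                           # sensor data, feelings, ...
    partial_labels: list[str] = field(default_factory=list)   # best-effort words only
```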

I started with a structured layer approach, naming it RAES for RAES Are Evolving System, and tried to consider Semantic and Experience apart: as levels of abstraction having different purposes, but assuming Semantic is built atop Experience, out of outbound agreements.

For the AGI guys: I am expressing the idea of an event memory bounded by the current interpretation capacity, itself built on top of the richness of Experience.
An example: as a kid, I always identified some bed sheets as a face with a huge heart-shaped nose. Finding them again as a late teenager, I could still see the face, until my vision started to adjust to a couple of swans kissing each other while their necks formed the heart shape.
The details of the abstract representation weren’t yet in my conscious Experience, so I couldn’t understand that the eyes weren’t those of the pillow face but of two swans.
As my interpretation was wrong, I could have been considered in a “confused” state of mind.

So, OK, it should be represented by a system in which everything grows on its own but is built on top of another layer. Thinking about the different functionalities a layer could take care of, it started to seem more and more confusing, as this doesn’t define much.

Enclosing Experience between input and Semantic this way looks more like a closed-loop system representation, on which specialized controllers could be developed that either match the semantic interpretation to the input, or the other way around.
A logic-based approach realizing a similar controller for Semantic, between the user and Experience, was also foreseen, as a way to match reality with expectation at a higher level of abstraction.
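A toy version of the first controller might look like this; the error measure, the gain and the tolerance are placeholders of my own, just to show the loop closing in one direction:

```python
TOLERANCE = 0.05  # arbitrary mismatch threshold for this sketch


def closed_loop_step(input_signal: float, interpretation: float,
                     gain: float = 0.1) -> float:
    """One iteration of Experience enclosed between input and Semantic:
    pull the semantic interpretation toward the observed input.
    (The symmetric controller would instead act on the world to pull
    the input toward the interpretation.)"""
    error = interpretation - input_signal       # expectation minus reality
    if abs(error) > TOLERANCE:
        interpretation -= gain * error          # nudge interpretation toward input
    return interpretation
```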

Having a high-level representation (with semantic meaning, like XML-encoded data), a low-level representation (with set ordering, like fuzzy clusters) and the relation between both of them could have been seen as a consistent first approach to the split I wanted to achieve.
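Put into code, under my own assumptions about what the two representations and their relation would hold (the “cat” grounding below is purely hypothetical):

```python
from dataclasses import dataclass


@dataclass
class HighLevel:
    """Semantic side: an XML-like tagged description."""
    tag: str                          # e.g. "cat"
    attributes: dict[str, str]        # e.g. {"legs": "4"}


@dataclass
class LowLevel:
    """Experience side: fuzzy cluster memberships over raw input."""
    memberships: dict[int, float]     # cluster id -> degree in [0, 1]


# The relation between both: which clusters ground which tag, and how strongly.
relation: dict[str, dict[int, float]] = {
    "cat": {3: 0.9, 7: 0.4},          # hypothetical grounding of "cat"
}
```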

So I wondered how the “naming” capacity was acquired, as a fundamental way to tell apart self-consistent subsets of data, and I tried multiple relational approaches. From this layered approach, I started with Semantic as the final input interpretation: the place where the name should be stored and matched with multiple experiences, though the interpretation of what a “cat” is should be unique in the system, without more context given.

From there, I assumed that for a unique semantic interpretation of experience, there should also be a unique experience, based on a unique instant, as each situation sits in the context of a past knowledge that keeps renewing itself for the observer.
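As a sketch of that double uniqueness; the two indexes and the bind helper are hypothetical names of mine, the point being only that both mappings refuse duplicates:

```python
semantic_index: dict[str, int] = {}       # name -> interpretation id ("cat" is unique)
experience_index: dict[float, int] = {}   # instant -> experience id (one per instant)


def bind(name: str, instant: float,
         interpretation_id: int, experience_id: int) -> None:
    """Attach a unique semantic interpretation to a unique experience
    at a given instant; duplicates are refused so uniqueness is
    enforced rather than assumed."""
    if name in semantic_index or instant in experience_index:
        raise ValueError("name or instant already bound")
    semantic_index[name] = interpretation_id
    experience_index[instant] = experience_id
```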
After giving a superset to each layer, some adjustment iterations of the model were in order.

Maybe it really needed to consider Semantic and Experience as different “things” living apart but able to communicate in both directions. Though I would consider that they don’t work on the same kind of data, so they are really disconnected units that are matched according to the instant.

For the AGI guys: the representation from the sensors and the representation from the mind are reasonably mismatched. Like assuming the cat from a picture of its tail, expecting more violent behavior from an individual due to past experience, building the image of the invisible man from what moves around in the movies, knowing under which pot the ball is,…

Therefore I considered the metadata and the data as, respectively, the input objects of the Semantic layer and of the Experience layer. The Semantic layer would also contain Meaning objects, which would encode and represent the MetaData, as well as Representation objects doing the same in the Experience layer for Data objects.
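In code, this intermediate object model would look roughly like the following (the four class names come from the text above; the fields are assumptions of mine):

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class MetaData:
    """Input object of the Semantic layer."""
    payload: Any


@dataclass
class Meaning:
    """Semantic-layer object encoding and representing a MetaData."""
    source: MetaData
    encoding: str


@dataclass
class Data:
    """Input object of the Experience layer."""
    payload: Any


@dataclass
class Representation:
    """Experience-layer object doing for Data what Meaning does for MetaData."""
    source: Data
    encoding: str
```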

But considering Data as a self-consistent object wasn’t satisfying, as data content has both a semantic and an experiential level of reading. I had to realign on the data as the center of both layers.

At this moment, I started to shift the representation a bit. Semantic and Experience were global layers, made of sublayers, themselves made of objects following a given logic. I wasn’t sure at this point what a concrete use case for this could be, but my focus was still on a way to properly split the yellow from the white in my egg.
The logic is more of a core approach: data is the most important element of the unit stack; it is generated by crossing Meaning and Representation, but those should each be encapsulated within another data layer expressing them in their own encoding language, called “MetaDescription” for Meaning and “MetaData” for Representation. Both directions go from the inside to the outside, and reciprocally.
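A sketch of this realigned unit stack; the “crossing” operation here is a trivial pairing, standing in for whatever the real generation step would be:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class UnitStack:
    """Data at the center, Meaning and Representation around it, each
    wrapped in its own encoding layer (MetaDescription around Meaning,
    MetaData around Representation)."""
    meta_description: str   # Meaning expressed in its own encoding language
    meaning: Any
    data: Any               # the central element, generated by the crossing
    representation: Any
    meta_data: str          # Representation expressed in its own encoding language

    @classmethod
    def cross(cls, meaning: Any, representation: Any) -> "UnitStack":
        """Generate the central data by crossing Meaning and Representation
        (a trivial pairing here; the real operation is left open)."""
        return cls(meta_description=repr(meaning),
                   meaning=meaning,
                   data=(meaning, representation),
                   representation=representation,
                   meta_data=repr(representation))
```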


But now that I was reaching something consistent for the stack, the question got raised: how the hell to use that stack?!

Multiple approaches to interacting with it were thought of, to the point it became confusing.

At this point, I was considering the unit stack as one of those cortical columns that I still have a residual interest in after reading On Intelligence and the Numenta white papers. So I thought of it as a hierarchized bidirectional data flow, with the top loop being another layer, called Programming, which would feed back things like logic rules, loss functions, inference conclusions,… based on what the premises of the unit stack are.

The Programming layer is defined as “How the system should behave knowing this”, the Semantic layer as “What the system interprets it as”, the Experience layer as “What the system compares it to” and the last sensor/motor IO layer as “What the system really sees”.
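Those four layers and the two directions of the flow, sketched with placeholders (every function body below is a hypothetical stand-in; only the layer order and the feedback loop come from the description above):

```python
# Placeholder layer functions for this sketch (all hypothetical):
def experience_compare(x):   return {"input": x, "closest_memory": None}
def semantic_interpret(c):   return {"label": "unknown", "evidence": c}
def programming_decide(i):   return {"rule": "explore", "based_on": i}
def semantic_expect(fb):     return {"expected_label": fb["rule"]}
def experience_prime(e):     return {"prior": e}
def motor_act(p):            print("acting with prior:", p)


def upward(sensor_reading):
    """Bottom-up: what the system really sees -> what it compares it to
    -> what it interprets it as -> how it should behave knowing this."""
    compared = experience_compare(sensor_reading)
    interpreted = semantic_interpret(compared)
    return programming_decide(interpreted)   # logic rules, loss, inference,...


def downward(feedback):
    """Top-down: the Programming layer feeds expectations back through the
    Semantic and Experience layers, down to the motor IO."""
    expectation = semantic_expect(feedback)
    prior = experience_prime(expectation)
    motor_act(prior)
```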

That was satisfying up to the moment I came back to my previous expectations and figured out that, well, there would have to be plenty of developers of all kinds on this: UX, data, programmers, logicians,… as I had also put on the diagram that the Semantic layer could be accessed from a graphical query language, or that the Programming layer should be externalized code logic.

It started to be a bloated stack; basically the unit was now a large, complex stack. I got confused about how to rationalize this and moved back a bit toward a more modular approach, trying to break it up and to come up with small units that could be based on a P2P network, with the information encoded cloud-server-like across all those small units, which I compared to threads running in the background.
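A minimal sketch of such small units as background threads relaying messages to their peers; the TTL-bounded flooding is my own addition, just so the relay terminates:

```python
import queue
import threading


class SmallUnit(threading.Thread):
    """A minimal peer: runs in the background and relays what it receives
    to its neighbours, so the information spreads across the network
    rather than living in one big stack."""

    def __init__(self, name: str, inbox: queue.Queue,
                 neighbours: list[queue.Queue]):
        super().__init__(daemon=True)
        self.name = name
        self.inbox = inbox                # this unit's own mailbox
        self.neighbours = neighbours      # mailboxes of connected peers

    def run(self):
        while True:
            item = self.inbox.get()
            if item is None:              # shutdown signal
                break
            ttl, msg = item
            if ttl > 0:                   # bounded flooding so relays terminate
                for n in self.neighbours:
                    n.put((ttl - 1, f"{self.name}>{msg}"))
```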

But even if I split this into more generalized small units that interact with each other, how would I replicate the functionality of my higher-order pattern, the relation between Experience and Semantic?
