Brief, SERF

Dead Man’s Switch

Hi everyone, long time no see! It's time to debrief a bit about what I've been up to, and about the other blog threads I left on pause for a while.

First, I'm really glad to announce that I made good use of that second lockdown by putting together everything Serf-related (even TreeAlgebra, MetaMesa, LabAware, BrainFarm, openBrainTer, …). This means there'll be new material for MetaMesa, but I'm not sure how to release it yet.

But the real point is to realize a synthesis that won't be easily lost.
Life is stupidly short; it was never designed for people to accumulate knowledge and work on long-term projects. That's a commodity of modern society and long lifespans. The truth is, a virus or a car could take your life the next day, and every long-term progression would be lost with it.

Since I live with the fear of dying at any time, I made this ambitious project (with this blog as a corollary) so that my last breath carries the satisfaction of having brought something worthwhile and meaningful with it. As the risk of death, or of being unable to complete this project, is real, I decided to build some sort of fail-safe: a 2.7 GB PDF containing 1,604 pages of raw notes, messily organized and badly formatted (the pleasure of working with LaTeX from so many sources).

This document is a mix of French and English; it's not sorted by theme and could be really hard to understand, as I still need to produce a cleaner version (but that will come in the long run). Still, I have some hope that publishing it will prevent these notes from being fully lost if anything happens; there would be a chance for someone to take over and extract the most interesting bits out of it.
It will be published here 20 years from now, unless I decide to unschedule it; but even if I get to make a clean version, I doubt I'll ever remove those notes from publication.
This constitutes my "dead man's switch".

What about the other threads?

It seems I'm gaining more regular readers these days. Meaning that if I keep posting "in a rush" before going to sleep, then reworking the post the next day because one of my sentences isn't intelligible or the cover picture is missing, the readers who get it "fresh" will also get a draftier version of the post. (It has even happened that I almost completely rewrote a post the next day.)

But this blog is also a realm of raw thoughts, where I try to keep my pleasure of "open brainstorming" instead of "closed reworking".
I mean, I'm starting to feel compelled to be more parsimonious and rigorous in this exercise, to better show what I'm capable of; but that would reduce my pleasure and my productivity, far from the initial lean philosophy of "build it first, fix it later".

The thread on LYFE, for instance, has 3 drafts pending and I can't figure out how to get back to it. I'll probably need to better understand the Gray-Scott model and give some thought to why removing metabolism, which leaves only one loop, the homeostatic one, instead of two opposite feedback loops, is an acceptable model. (In signal processing you can do a lot of weird stuff, but biological models have many constraints that prevent you from playing with scales of values.)

As I said above, the MetaMesa topic evolved really well these last months and I'd be glad to publish something about it, but it's also part of the synthesis and clean-up work on my raw notes PDF.
Let's see how this goes; moving forward with this document, I'll probably have old and new topics to publish about in the coming months or years.

Well… That’s it for now! If you want to see more about that dead man’s switch, let’s meet here in 2040! (don’t forget to program the reminder in your robot butler or your talking self-flying car, obviously)

SERF, Thoughts

Your Glasses Aren't Prepared!

For a long time, I played around with the concepts of syntax and semantics, trying to make them practical and clearer to me.

Both are approaches to studying a message, but syntax is more about structure while semantics is more about meaning.
Syntax is easier to study as it is built on linguistic conventions, such as Chomsky's grammatical approach. Syntax is the part of a message we can all agree upon, as the rules are (but haven't always been) properly stated.

Semantics is what is meant by the message, so its domain is the thinking experience of a human mind. As we cannot model this, we cannot model semantics (or at least, I used to believe so). Therefore, semantics is what's left of the message once the syntax is withdrawn from it.

Except that there is no clear boundary between syntax and semantics. Self-referential sentences and onomatopoeia are examples of cases where you cannot make a clean cut.
Given this inability, it didn't seem scientifically reliable to use this paradigm, and I was therefore looking for something more objective.

I decided to use an approach that was much easier to integrate with digital systems while providing a dichotomy that better fit the problem. Adopting what's widely accepted (like the syntax rules of a language more or less mastered by a group of people) means joining a convention to ease communication. And it's really handy to have something like an origin or reference point (here, a commonly agreed syntax) to explain something relative to those conventions (in order to address a certain public with specific words and images). But outside of human conventions, we can rarely rely on a reference point in the common cases processed throughout a human life.

Actually, besides very well-narrowed cases such as distinguishing a rectangle from a triangle in a picture, most interpretation problems we encounter don't have any reference that could anchor a comparative description.

Take the case of the sandpile vs. grains of sand problem.
How many grains of sand must you add to get a sandpile, and how many must you withdraw to be left with just some grains of sand?

Then I guess you also need to scale the idea of how large a sandpile is to your expectations.
No referential is universally agreed upon here, although we can form a fuzzy idea of where the border lies between some grains of sand and a sandpile by polling people. That's a way to extract a convention, or a knowledge of representation, and answers would then be about right under some given precision/expectations.
Just like splitting syntax and semantics, this requires modeling the language and then normalizing local grammar conventions and words to reach a normalized language. In some languages with no neutral gender, such as French, this grammar normalization got a new impulse from gender issues regarding the parity and neutrality of nouns.
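
The polling idea above can be sketched in code. Assuming invented poll results mapping grain counts to the fraction of respondents who answered "that's a sandpile", interpolation yields a graded membership instead of a hard boundary (all numbers here are made up for illustration):

```python
# Sketch: extracting a fuzzy "sandpile" boundary from (invented) poll answers.
# poll[n] = fraction of respondents who called n grains "a sandpile".
poll = {10: 0.0, 100: 0.1, 1_000: 0.4, 10_000: 0.8, 100_000: 1.0}

def sandpile_membership(n, poll=poll):
    """Interpolate the polled fractions into a degree of 'sandpile-ness' in [0, 1]."""
    xs = sorted(poll)
    if n <= xs[0]:
        return poll[xs[0]]
    if n >= xs[-1]:
        return poll[xs[-1]]
    for lo, hi in zip(xs, xs[1:]):
        if lo <= n <= hi:
            t = (n - lo) / (hi - lo)  # linear interpolation between polled counts
            return poll[lo] + t * (poll[hi] - poll[lo])

# A convention could then fix a cut-off, e.g. "sandpile" once membership >= 0.5.
```

The cut-off is exactly the "given precision/expectations" part: the fuzzy curve is the extracted convention, and each context picks where to slice it.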

Floating Boundaries: reading information through Meta / Mesa lenses

Similarly to considering sand either as a quantity of a unit (grains of sand) or as a composite unit in its own right (a sandpile), we can say that one of the units (the sandpile) is composed of the other (grains of sand).

I established that there can exist a situation where the grain of sand is the most fundamental component of both "some grains of sand" and the sandpile.

In a different context, the grain of sand could become the start of my exploration toward the most fundamental component. I could ask: "What is the most fundamental component of this grain of sand with regard to atoms?" and there we would be using a language that encodes a hidden knowledge of atoms, classified and named after their number of protons, like "silicon" or "carbon". To get more detailed, I could use a language where we even differentiate how those atoms structure themselves, such as "SiO2", thanks to a hidden theory of how atoms handle electrons.

I could also want to find something without a given context. Let's say I want the "most fundamental component of anything that is"; if I believe that matter is all there is, then I'll end up looking for the most fundamental particle or field excitation in the universe. If I consider patterns to be the essence of what there is, as a descriptive or behavioral characteristic of matter, which then is just a support of information, then I'll look at fundamentals such as mathematics, theoretical physics, computer science, etc.

With that approach, you can build your personal ordered list of what's fundamental to you. Reversing the prioritization means looking for the most meaningful/epic case instead of the most fundamental one; then you'll also get a personal classification.

I will call this "outer" bracket the Meta level of reading, and the most faithful one the Mesa level of reading; because metadata are data that refer to data, and mesadata are data that are referred to by data.
But those are really just two levels of reading information.
Mesa tries to be large and detailed enough to be faithful to the source material, with significance, accuracy and relevance.
Meta casts the faithful Mesa representation onto a schema connected to, or expected by, the system's knowledge, in order to produce an enhanced interpretation of the data that is lossless but more relevant to the context of the process.

The pure-Mesa

These Mesa / Meta levels of reading can be illustrated through colors.
We can agree that, up to a certain precision, the 3-byte representation of a color is enough to encompass everything we can experience as a color; let's call this a "pure" mesa point.
But if we have to account for shapes, then a single pixel isn't enough to experience much. It is still a mesa point, but not precise enough to capture the shape. We could call it "under-pure" and, by extension, an "over-pure" mesa point would be one with significantly more precision than what is relevant to capture from the source material.

Then what is the color "red"? With one byte per channel in RGB order, #FF0000 is considered red, but #FF0101 is also red, being an imperceptible alteration. Is #EF2310 considered red? And what about #C24032? When does orange start?
There we are back at our grains of sand / sandpile case: there is no clear boundary between red and orange.
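
One way to make this concrete is a graded "redness" score instead of a binary label. The sketch below derives it from the hue angle of the RGB value; the 30° fade-out toward orange is an arbitrary threshold I picked for illustration, not a perceptual standard:

```python
import colorsys

def redness(hex_color):
    """Degree of 'red' in [0, 1] from hue; the 30-degree cut-off is an assumption."""
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    hue = colorsys.rgb_to_hsv(r, g, b)[0] * 360  # 0 degrees = pure red
    dist = min(hue, 360 - hue)                   # angular distance to red
    return max(0.0, 1.0 - dist / 30)             # fades out toward orange (~30 deg)

# redness('#FF0000') -> 1.0 ; redness('#FF0101') -> 1.0 (imperceptible shift)
```

The score is a mesa-side measurement; deciding where "red" ends and "orange" begins is the meta-side convention layered on top of it.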

Actually, the visible band of orange plus yellow is not even half as wide as the wavelength band we call green.
A mesa representation (based on physical sensors) can be really different from the meta representation (here, a symbolic system where colors are classes, with sub-classes, and a logic based on primary colors (classes), complementary mapping, …).
The same can be said about sound, but its logic is temporal instead of combinatorial.

Are there Symbols without Expression?

Let’s take the number 3.
By "taking it", I mean writing one of its expressions. I could have written "three" or "the result of 1+2"; it would have required a bit more resolution, but the result would be the same.

Have you ever observed “Three”?
You have obviously already experienced having 3 coins in your pocket or watching 3 people walking together down the street, but you've never experienced the absolute idea of 3, such that you could say: "This is him! It's Three!". But you might have said this about someone famous you encountered, like maybe your country's president.

"Well, it's obvious!" you'll claim after reading the preceding sections, "my president is pure-mesa; (s)he's an objective source of measurements present in the real world, so I can affirm this. But I cannot measure 3; I need to define it arbitrarily from a measure, so it might be pure-meta, right?"

Well, almost! Your president also has a label, "President" (implicitly Human, Earthling, …). This means (s)he embodies functions that are fuzzy. There is no embodiment of the notion of President, just people taking the title and trying to fit the function. So a president is a composite type: (s)he has Mesa aspects from their measurable Nature, but also Meta aspects from the schema of President (s)he is trying to fill.

But is 3 pure-Meta?
For a long time I thought pure-Meta wasn't a thing, because you couldn't escape representing it, thereby mixing it with the Mesa nature of the representation. So every symbol needs to be expressed in one way or another, otherwise it cannot be communicated and therefore doesn't exist. That might be where I was wrong.

My three doesn't require becoming a thing to exist per se.
Throughout this blog, I have proposed approaching intelligence through modules, which usually have a producer (/consumer), a controller and many feedback loops. And 3 also has producers and consumers in our nervous system, specialized to recognize or reproduce varieties of three. It follows that we can recognize, and act upon the recognition of, a property of 3 (elements) without mentioning it.

So, even if I cannot express the absolute idea of a number, such as 3, or a color, such as red, I can at least return acceptance, or denial, of the sought property label (classification). This means I could at least tell whether the "red" or "3" property is true or false in a given context, without being able to express why.

Therefore, 3 exists both as a testable property and as a symbolic expression, but one defined from a property.
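
This "acceptance without expression" can be sketched as a predicate: it answers yes or no about the three-ness of a collection by exhausting a fixed template of anonymous slots, never producing the numeral itself. This is a toy illustration of the idea, not a claim about any neural implementation:

```python
def has_threeness(collection):
    """Accept or reject the 'three elements' property without ever producing
    the symbol 3: consume items against a fixed template of anonymous slots."""
    slots = (object(), object(), object())  # three nameless slots
    it = iter(collection)
    done = object()  # sentinel distinct from any item
    for _ in slots:
        if next(it, done) is done:
            return False  # ran out of items before filling the slots
    return next(it, done) is done  # any leftover item means too many

# Recognizes the property on any iterable: coins, people, characters, ...
```

The function classifies ("this has the 3 property: true/false") without ever expressing *why*, which is exactly the asymmetry described above.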

Multiple Levels of reading

That leaves us with: Mesa can be made as accurate as is physically measurable, and Meta can be made as meaningful as individuals can make it. We find again that idea of an objective (comparative) referential and a relative (non-comparative) referential. We could also say that what is Meta takes its data as given, and what is Mesa takes its schema as given.

What becomes really interesting with this tool is being able to work not only between internal and external representations of some entity or event, but also between different levels of representation, as we grow shapes from colors and 3D movements from shapes.

I believe that modeling human knowledge requires at least 2 levels of reading that can slide across multiple built-in levels of abstraction: one to define the unit, the other to define the composite.

After all, aren’t we hearing the sound wavelength and listening to the sound envelope?

SERF, Thoughts

An Inquiry Into Cognitive System Approach

I recently heard from my director that the technologies and architecture patterns I was pushing for in new projects were too cutting-edge for our practices and our market.
When I tried to illustrate, through past projects, how such a technology or way of designing could have spared us unnecessary complexity to implement, modify and maintain, I just got an evasive reply.

Truth is, he was right in his position, and so was I.
As a business thinker, he was rationally thinking: if I'm not educated in that technology, how can I convince a client unaware of it that it is a good decision to start a whole delivery process around it?
As a technology thinker, I was rationally thinking: why don't we put more effort into framing the client's problem from the right perspective and applying the right solutions for them, so we avoid expensive delays and can scale functionally?


A simple case would be defining movements in a closed room. You can define them by referencing every place there is to reach: a corner, the seat, the observer's location, the coffee machine or the door. This is easy to list but doesn't give you much granularity. As complexity arises, it will be hard to maintain.

Let's say I want to be close to the coffee machine, but not close enough to reach it. Or I want to approach the observer from different angles. I might want to move away from the door when I open it. And many more cases I can think of to better match the possibilities of reality.
What was a simple set of discrete locations becomes a complex set of cases to define. As it grows larger and more entangled, complex documentation will be required to maintain the system. Extending or updating it will be a painful and expensive process.

But video games solved that issue a long time ago, transitioning from the interaction capacity of point-and-click games, or location-based static 3D cameras, where only discrete views and discrete interactions were available, to composite spaces assembled through collision detection and rule-based events. It's a way to automate a level of granularity that is beyond what a human could enumerate by hand.

Think of the squares on the room floor. What if I define them to be the set of discrete locations, preferring mathematically defined locations over manually defined ones?
What could we then say about my previous human-readable locations, like being able to reach the coffee machine? Both in human assumption and in video games, it is a question of a radius around the detected item. The spatial approach gives our set an ordering relation; that relation allows us to define the concept of a radius as a relation from our domain set to a subset of "close-enough-to-the-coffee-machine" locations.
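
The radius relation can be sketched directly on such a grid. The coordinates, room size and radius below are invented for illustration; the distance is Chebyshev distance, matching 8-directional grid movement:

```python
# Sketch: a room discretized into grid squares, with a radius-based relation
# replacing hand-listed locations (coordinates and radius are invented).
COFFEE_MACHINE = (9, 2)  # grid square of the coffee machine
REACH_RADIUS = 2         # squares within which the machine can be reached

def can_reach_coffee(pos, target=COFFEE_MACHINE, radius=REACH_RADIUS):
    """Chebyshev distance: number of 8-directional grid steps to the target."""
    dx, dy = abs(pos[0] - target[0]), abs(pos[1] - target[1])
    return max(dx, dy) <= radius

def reachable_squares(target=COFFEE_MACHINE, radius=REACH_RADIUS, w=12, h=8):
    """The 'close-enough-to-the-coffee-machine' subset of the room."""
    return {(x, y) for x in range(w) for y in range(h)
            if can_reach_coffee((x, y), target, radius)}
```

Notice that the human-readable location "near the coffee machine" never has to be listed by hand: it falls out of the ordering relation and the radius.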


The other good part of this approach is that we don't need to formalize its target subset: as long as we have a consistent relation, we can decide whether a location eventually is, or is not, within reach, with whatever degree of certainty we want if it's an iterative function. We don't need a long computation to formalize everything; the capacity to do so, with more or less computing effort, is good enough to give a definition.

Why would I say that? Because of precision.

As we started to mathematically define our room locations, we didn't formalize how far we were going with it, and some uncertainty remains. Should squares be large enough to encompass a human? Or small enough to detect the location of their feet? Maybe down to little pixels able to see the shapes of their shoes, with great precision?

This is granularity, and it should be just as granular as the information you're interested in, because granularity has a cost and complexity associated with it.

From our various square sizes, we get the idea that the invariant is: there are 8 available locations from any location unconstrained by collision. So it seems the computational complexity stays the same. But don't turn into a Zeno of Elea too quickly; it is impacted.

The first impact is that the real move is indifferent to the scale of measurement.
The second impact is that the chosen scale can be irrelevant to the observed phenomenon.

Going from a location A to a location B won't change the actor's behavior, but the number of measured values will explode, and so will the computational power and the space complexity needed to track the actor. If you decrease the size of the squares, you get more accurate measurements but much more data to process. If the square size increases, you can no longer tell whether the person can reach the coffee machine from their location. You need to go only as far as the information is not yet noise.
At large, it's a matter of finding the optimal contrast at each scale of the information.
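
The cost side of that trade-off is easy to quantify under a simple assumption: the data to track grows with the number of cells, and halving the square size quadruples that number (the room dimensions below are invented):

```python
import math

def cells_to_track(room_w_m=6.0, room_h_m=4.0, square_m=1.0):
    """Number of grid cells covering a (hypothetical) room at a given square size."""
    return math.ceil(room_w_m / square_m) * math.ceil(room_h_m / square_m)

# Halving the square size quadruples the data to process:
# 1 m -> 24 cells, 0.5 m -> 96 cells, 1 cm -> 240,000 cells.
```

Same room, same moves, wildly different measurement volume: exactly the explosion described above.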


What we have seen here with the room floor are 3 levels of scale:

  • raw discrete locations, on a human-functionally readable level, but limited like point-and-click games or preset camera locations.
  • Then a first granularity that lets us roughly detect locations; think of it as an old-style 2D RPG.
    On it we can start applying spatial logic, like a radius, to know which locations are in range.
  • Finally, a thin granularity level that lets us go into as much detail as describing the footprints of the actors in the room.
    That level of granularity is more common in 3D texturing in modern video games, or in machine-engineered electronic devices such as touchpads or display screens. Every sensor, every pixel, has its transistor (up to 4 in a colored pixel).
    When you get down to that level, either everything is identically formalized or you are in big trouble.

My point being: the problems of the market haven't changed and probably won't. We're starting to bring mainstream the technologies that deliver a transition from point-and-click to 2D RPG. In some specific cases, we can even start reaching for thin granularity.

We can foresee that low levels of granularity will benefit from high-granularity technologies.
But the shift in business perspective, regarding how problems can be solved with more data, more granularity, better decoupled technical and functional levels of interpretation, and so on, is not done yet! Yet it is just a shift of perspective that separates us from pivoting to this fine-granularity way of rethinking our old problems and old projects; the very ones where we point out unfitting architecture patterns and processes, and a global lack of granularity and decoupling.

There are opportunities to unravel and technologies to emerge as we move toward such a transition in our traditional systems. We will have to work on more complex mathematical formalization and automation to unleash a wider realm of implementable functionalities.
We can deliver better functionalities that we will use as the public of tomorrow. We just need to find this more granular way into our projects, where we could build higher added value.

The tools are there; the first ones to shift their perspective will take the lead.
Besides, the fun is in setting up granularity, not in improving it.


The Day I was Over Optimistic …

Ok, this title might be really hard to get if you don't have my notes in front of you.

Though, this Sunday, June 30th, around 5 pm, as I was thinking about what I had just written down in my notebook, I thought that I might have reached the bottom of the project: the solid ground from which I can start building Serf safely.

That really feels good after so many philosophico-mathematical dramas and struggles around ideas like agents, communication, domain languages, representation, and so on. I guess something really stupid had been blocking me conceptually all along: attributes are just simpler children, and no real distinction exists.

So, yay, I just need to figure out how to put all those representational spaces together nicely (fortunately, they don't have the same dimensions, so let's figure out how they can group up) and go through ALL my heaps of notes to synthesize them (part is already done, but lots of upgrades are required); then I'll be in the clear to start coding \o/

(Even better: I might have a project to build it alongside user feedback.)

[edit 02/09] Gosh… when did I plan that post?! I'll leave it there as a memorandum, but there is really no "threshold point" in this project. Is there a Zeno's paradox about achieving something and getting a clear idea of what needs to be built?


Serf: A Candid Manifesto in a Few Words

An interface that allows exchange between two worlds;

its roots deeply ingrained in the reality of physical variables
its branches intelligibly structured in the reality of mental structures

An interface that could even go as far as maintaining its user's action, functionally abstracting it, extracting the user's intent, even organizing and referencing all the roots (data sources and sinks) and branches (mental structures).


SERF, Thoughts

Programming from Data

Data-driven programming shouldn't bother with formats first

The user provides a bunch of bits at first, or sets some bits in memory to a given input value.

Then formats are added to it:
– Simple ones at first, which allow building simple interpretations such as html, jpg, wav, …
– More complex ones growing on top of them, built from more complex logics: tone, color, frequency, face, …

As the formats stack up, the interpretation gains more detail and more complexity: the input tends to be fully resolved with infinitely many heterogeneous formats and infinite time.

As formats are numerous, describing a variety of things and their abstractions, it is helpful to rely on a validity domain for each format. That way, automated interpretation can reject some of the formats. By applying a learning orchestration algorithm, structures between those formats can be inferred to identify subsets and similarities or, mainly, to accelerate format testing; e.g. orchestrating formats in a Bayesian tree accordingly.
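
The validity-domain idea can be sketched as a registry of formats with cheap acceptance tests over the raw bytes. The magic-number checks below use the real file signatures (simplified), but the registry and function names are mine:

```python
# Sketch: formats as validity tests over raw bytes. A format whose test
# rejects the input is pruned before any deeper interpretation is attempted.
# (Magic numbers below are the real file signatures; checks are simplified.)
FORMATS = {
    "jpg":  lambda b: b[:3] == b"\xff\xd8\xff",
    "png":  lambda b: b[:8] == b"\x89PNG\r\n\x1a\n",
    "wav":  lambda b: b[:4] == b"RIFF" and b[8:12] == b"WAVE",
    "html": lambda b: b.lstrip().lower().startswith((b"<!doctype html", b"<html")),
}

def candidate_formats(raw: bytes):
    """Keep only the formats whose validity domain contains the input."""
    return [name for name, valid in FORMATS.items() if valid(raw)]

# candidate_formats(b"\x89PNG\r\n\x1a\n...") -> ["png"]
```

An orchestration layer would then order these tests (e.g. by prior likelihood) instead of running them all, which is where the Bayesian-tree idea comes in.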

Each detected format links to a language, with its own set of rules, which leads to an interpretation: a projection of the input node onto a space constrained by those rules.
Beyond the correctness of the projection, there can be multiple valid projections within the same set of rules.
For instance, I can say "The blue car of daddy is really quick", expressing the same fact as "Dad's new car goes really fast". They provide different subsets of information, different words within the same language rules, conveying the same meaning (by definition) and linking to the same node, for the same format, but interpreting it as two different sentences (sequences of expressions).

From these sentences, I get new nodes. By adding format rules for natural language processing, I can get parsed expressions from them: a graph of new nodes. Then I can either play with word matching to infer an interpretation structure, or develop the sentence structure from the sentence root, or… well, it's Lego time.

Also, even cooler, this approach gets pointers done right (as data should be contained in its node) and lets us focus on defining and structuring formats.

SERF, Thoughts

Lore vs Knowledge

An agent has acquired knowledge associated with an expert language; this silently embeds what it assumes to be trivial, or compressible to the point of not mentioning it in detail.
Another agent is unfamiliar with this implicit communication structure and therefore won't understand the first agent's statements.

To fix this, the information needs to be described to the second agent. As the first agent's knowledge is more or less distant from the second agent's, some aspects need to be described more verbosely, through lower-level language expressions, and implicit knowledge has to be made explicit and conveyed.

This is a tedious task for the first agent, who must:
– find the second agent's level of comprehension
– review each high-level point that needs to be mentioned
– detail those points down to the second agent's level of comprehension
– understand and adjust to the non-expert's questions
– figure out the suitable answers
– detail those answers down to the second agent's level of comprehension
– revise every step that needs to be updated

This could be handled by an interface: each agent needs a way of communicating that suits them. The interface needs to account for its agent's knowledge, and to acquire the features required to communicate with other languages and levels of expertise.

I would even argue that, without the ability to translate what needs to be communicated between levels of comprehension, languages and representations, we humans wouldn't be able to establish communication any more complex than any other creature's.

Brief, SERF

Bye openBrainTer

I’m slowly, but surely, transferring the remains of openBrainTer to Serf.

Damn, that trial project was 4 years ago; it feels like going through the basics again, as the idea was already there: an open framework to develop a brain for your computer, openBrainTer (I might suck at naming software). Except I now have a much deeper idea of what needs to be done.

I always skipped the FXML implementation part in openBrainTer, not willing to rethink everything within new standards. Now that I'm relearning JavaFX, I started without even considering doing anything other than FXML. It's a neat way to split your project into MVC, and I like being able to add a CSS layer on top of it ^^

If you've never tried JavaFX before, just go for it! It gives you a lot of power and freedom over your UI; it makes your application shiny and helps you produce better design for better UX.

SERF, Thoughts

The Butterfly & The Arrow

Damn I really have a hard time finishing large blog posts, but I’m creating this new one in parallel just to keep a trace of an insightful idea.

A bit of Context

I was looking at the unaffordable foldable phablets and was surprised to see that the Huawei Mate X had slightly better features and design than the Samsung Galaxy Fold.
Historically, the latter's maker was the innovation leader, especially regarding hardware, while the former's used to stand out for a good cost-quality ratio and quality software compensating for hardware limitations.
After trying to catch up to the leader, they finally passed them.

In the same sense, that's why startups don't want to go public too early. If execution is still ongoing, a larger company could focus on reaching the same goals with a different approach and propose a better service, as they're organized, efficient and knowledgeable, while the startup is still trying to figure out what it's doing.

But what this shows is that Samsung had the same behavior. As the leader, with Apple at its back copying every device with a delay (even buying the components from Samsung), it had no incentive to push toward novelties, and kept looking around for ideas, prototypes and profits, while being risk-averse and not assertive in execution.
In the meantime, Huawei focused not on finding its way but, first, on reaching the leader's level (which can be considered recently accomplished when you compare the Mate 20 to the S9), and then on beating the leader at its own innovation game; which they seem to have done with this first wave of the new generation of foldable phones.

The Duality in Companies

I really see 2 behaviors here. One was focused on finding the right way to execute a will, that of bringing about an innovative generation of smartphones, fluttering around ideas, analyses and decisions to make. This means the goals have to be determined from the will; I call this the Butterfly behavior.

The other already had the goals in mind and focused its energy on execution. If your employees are specialized and focused on well-established goals, all your forces go efficiently toward the same target. Fluttering around means you might be more inclined to switch directions and find a better target, but you're consuming resources on prospecting instead of producing. Though no company, whether established worldwide or in a garage, can fully behave in only one way.

Every company needs to find a balanced behavior between the Arrow and the Butterfly. Every company needs both to prospect and to execute; the rest is management. Should we foresee big plans? Should we improve design? Should we produce more? Should we increase investment in R&D? Etc.

Using The Butterfly & The Arrow as a Tool

The behavior of a company is a balance between the Butterfly and the Arrow behaviors. How do we represent it? How do we get a feeling for it?

Setting the Analogy Scene

Let's say we have an infinitely long rolling sheet of paper. On top of this paper, you put a pen. If the pen stays still, it has no behavior. On the sheet, this is represented by a flat line: the paper keeps rolling but the pen stays static, so a horizontal line is drawn.

This gives us time as the horizontal axis; we'll then suppose the vertical axis is another metric, like how well a company is doing at reaching its goals at a given time.
If it stays still, it isn't doing well at reaching its goals; if it goes down, it's doing worse than nothing; and if it goes up, it's getting closer to them.

That's a simple plane defining a continuous function that serves a fitting purpose (but with a pen and an infinite roll of paper, don't think about bringing in deep neural nets yet!).

The Arrow Behavior

Here the pen defines its goals by pointing at… points in the future of this rolling paper sheet. The closer it gets, the better it did. It also means that a goal is being at a defined value at a given time; so the pen is a micro-agent.

We could extrapolate to a full agent if we had multiple states the system should reach as goals, with complex behaviors in complex spaces and blablabla. We’ll keep it simple: a pen evolves on a rolling sheet of paper and sets points as its goals.

From its position, defined by its vertical coordinate at a horizontal coordinate that keeps incrementing, to its goal point, we can define a vector that translates the pen to fulfill its objective. This vector embodies the concept of The Arrow behavior.

In a perfect world, where goals are well established and reachable, that’s all there is to it. The pen just has to follow the vector to its point, the management just needs to push the team to deliver, the spaceship just has to accelerate to reach light speed, etc.

In the real world, if the pen needs to reach a coordinate, say, 3 vertical kilometers higher and 1 second away, where 1 sec = 1 horizontal meter, then the pen needs to get to a point so far away in such a short time that it won’t. Not because it would break relativity, but because the paper will burn and the pen will break before it reaches its goal.

The Missing Behavior

What can we say besides: that was a stupid goal?

There’s no magic potion, no healing of the wounds, no alternative path. Trying to launch a pen at 11,000 km/h over 3 km on a paper moving at 0.001 km/h just doesn’t make sense as a goal. No company should try to confront a GAFAM company as two guys in a garage; that’s not a realistic step, and no sensible person would try it.

Does it mean we have to limit the goals? Should the vector only point to reachable goals? What kinds of goals are reachable?
Then we start classifying goals at a given time, establishing constraints and reasonable delays; we grow by learning how to behave regarding certain goals and by disregarding what is not reachable.

Though this is a complex social construct that requires time, resources and knowledge gathered through trial and error. What if I’m just a disorganized startup? What if I’m an explorer discovering a new land? What if I’m just an ant trying to behave in an unlabeled environment?

Although it doesn’t take a well-organized expert to be certain when you claim “I cannot build a spaceship”. People naturally have more sense than what a goal-oriented vector would do, with or without complex expertise.

The Butterfly Behavior

If our pen is expressing a metric, it should convey the will of the Arrow behavior, but we don’t expect it to have extreme goals. This has to be tempered by reality and capabilities, in order for the pen to become a metric of real values.

In reality, companies have to work around their problems, get good or bad surprises in their results, shape their image, face resource delays, rework their goals in a lean fashion, etc. They really don’t go straight to their goal: they point at it, but they might change it in intensity, in direction, in multiplicity,… The Butterfly behavior acts upon the Arrow behavior and also inflects its goal, in such a way that the two have a symbiotic coexistence.

The Butterfly expresses itself as the pen fluttering around its path due to noise, delays, constraints, the search for a better path, etc.
By making the Arrow vary (in intensity, direction,…), the expression of both sets the goals. And what is really reached, compared to what was expected to be reached, is the product of both behaviors.

You can therefore think of the pen tip as having two behaviors attached to it: an Arrow that is willing to reach its goals asap, and a Butterfly that adjusts the Arrow behavior while propelling it.

I personally picture it as a pen trace in a plane, with the Arrow pointing at a location and the Butterfly introducing nondeterministic noise.

  • With no behavior, the pen stays still at its origin
  • With the Butterfly only, the pen flutters around its origin
  • With the Arrow only, the pen gets to the target location
  • With both in equal amounts, the pen gets to the target location along a random path

But the rolling paper helps get things moving (in a static plane, everything stands still), which makes seeking goals more relevant, and also lets me already project myself into the Agent modeling I’m developing for my current project.
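The four cases above can be sketched as a toy simulation (the function names, gains and noise amplitude are mine; this is a minimal illustration of the analogy, not a real model of any company or agent):

```python
import random

def simulate_pen(steps, goal_y, arrow_gain=0.2, butterfly_noise=0.0, seed=0):
    """Trace a pen on the rolling sheet: each step the paper advances one
    unit along the time axis and the pen moves vertically.

    - The Arrow pulls the pen by a fraction of the remaining distance to goal_y.
    - The Butterfly adds nondeterministic fluttering on top of that pull.
    """
    rng = random.Random(seed)
    y = 0.0
    trace = [y]
    for _ in range(steps):
        arrow = arrow_gain * (goal_y - y)  # vector pointing at the goal
        butterfly = rng.uniform(-butterfly_noise, butterfly_noise)  # fluttering
        y += arrow + butterfly
        trace.append(y)
    return trace

# No behavior: the pen stays still at its origin (flat line).
flat = simulate_pen(50, goal_y=0.0, arrow_gain=0.0)
# Arrow only: the pen converges straight to the target location.
straight = simulate_pen(50, goal_y=10.0)
# Both behaviors: the pen reaches the target along a random path.
noisy = simulate_pen(50, goal_y=10.0, butterfly_noise=0.5)
```

Plotting the three traces side by side gives exactly the flat line, the clean approach, and the fluttering approach described in the list above.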

Extension to Other Domains

What strikes me, then, is how frequently this pattern is found.

In Humanity

In investment, you similarly have the Bull & the Bear: when profits come easy in some domains (let’s say companies, or crypto-currencies), everyone is willing to buy, as it makes a good deal. Plenty of growing, interesting opportunities: the investors’ will is there and they rush toward the goal. It’s a Bull market; it’s an Arrow behavior.

As the market becomes less profitable, perceived risk increases; people limit themselves and carry long-term investments. They are more hesitant about what to do with their capital and try to find new safe opportunities. This is a Bear market, or a Butterfly behavior. It changes once new profitable opportunities are found: back to an aggressive behavior toward the new target.

We can also see more fundamental behaviors in it, just like the human introverted/extroverted switch. As the context becomes clearer, the situation safer, the goal better established, the people around better known,… a human will move from introverted to extroverted. As everything gets clearer (just like in the dark: not clear means not safe), a person can start asserting their goals and knowing which ones can be filtered or emphasized (e.g. finding things in common).

In Nature

This pattern can also be applied successfully in nature. Take a simple case: the Sun is traveling through the Universe toward one of its lost corners. It has an Arrow behavior toward that place and, around it, many bodies are orbiting, like the Earth, forcing the Sun’s path to be shaky through space and time. This is similar to our pen not getting straight to the target point.

The exact mechanisms are, of course, quite different from our pen’s, as is the context. But there is noise and an inflected trajectory, though this might be harder to picture. If you take the Moon and the Earth, the Arrow behavior is much less significant compared to the Butterfly behavior: the Moon acts upon the seas and bends the Earth’s orbit.
But to see large-scale interactions where the Butterfly behavior is really significant, you might want to live next to a binary star system.

At a smaller scale, you can see the exact opposite happening: it becomes common for the Butterfly behavior to be orders of magnitude more pronounced than the Arrow behavior.

A simple example, since everyone has seen its planetary model, is the atom. The nucleus at the center carries the essential of the mass, so it determines the velocity vector and, therefore, the Arrow behavior, traveling being our metric here.
But it doesn’t travel straight, because it is surrounded by electrons that react to another type of cause (the electromagnetic field is not our metric), and the nucleus reacts to its electrons’ movements.
Actually, the electrons’ influence is so strong that they are, at scale, waaaaaay further from the atom’s nucleus than the Earth is from the Sun. So the Butterfly is much more important than the Arrow in this example. In this case, erratic movements around the same place are proportional to the orbital states: as the atom gets more excited, it gets more “turbulent”.

In the Limits of Our Knowledge?

At a larger scale, for molecules, we have Brownian motion. But, at a smaller scale, I’d like to refer to a concept I took quite an interest in in the past: the pilot wave theory.
In this approach, quantum weirdness is interpreted through a duality similar to the Butterfly & Arrow behavior, which is why I took the time to write this down (I kind of found half the examples along the way; I’m pretty sure there are plenty more).

I won’t trick you into believing the following is an incredible theory unfairly ignored, unlike some others do; but it’s a promising approach from some physicists to reinterpret an elegant theory, as old as Quantum Mechanics itself, in a modern, valid way. Because mathematical and physical theories are just like software products: they need to grow and validate new and stronger constraints.

There already exist quite a lot of great resources on the web. I’ve been storing papers that I can barely read for years (some with baiting names, damn!) but, to give you a stronger insight than my summary below, here are some popularized videos from physicists:


In my non-physicist, popularized way:
Think of electrons as 4D particles bouncing up and down on the thin sheet of our 3D world. What determines how they will bounce is the wave beneath them, produced by the sheet wrinkling under the directed bouncing. Both the waves on the sheet and the bouncing particle adjust each other during movement.

From bouncing against obstacles, the wave already knows the ideal path for the particle to follow. From bouncing on the electromagnetic field, the electron maintains the wave. This is an effect that can be mimicked at a larger scale (using a high-power speaker and a non-Newtonian fluid, for instance).

In Conclusion

What I find really interesting in thinking about this Butterfly & Arrow dual behavior is that this pattern seems to be just about everywhere, even though it’s quite an abstract one. So it’s interesting to name it and point to it, to better identify and observe it.

Maybe there’s some truth behind it? Maybe an intelligent system should start by having both behaviors and the capacity to orchestrate their alternating use… Maybe that’s what could better mimic real agents?
Maybe this pattern holds a deeper truth that I cannot grasp yet…

Brain Farming, SERF, Thoughts

What’s up for 2019?

Tamagoso – an idea that might be refined later

As mentioned in the proposition article, it’d be nice to have it developed as a dependency of a complex agent. I previously presented the ideas of botchi and the serf layer, which have different but compatible purposes: one is to make an AGI-compliant (yeah, I know, the G is too ambitious) UI for an agent; the other is a way to process data mimicking what you’d expect from a Turing machine for an automaton. At some point, I mixed those two ideas into Tamagoso: from “tamago”, meaning egg in Japanese, plus an obvious reference to a 90’s game; and from “sō”, meaning layer (though there’s a macron on the o and I’m not sure of the pronunciation).

The idea is then to use a UI based on the Tamagotchi design to monitor the agent’s health, as well as to instruct it from different media sources and to apply different sets of rules in various contexts. A growable, pluggable, user-friendly machine that could be developed up to the point of competing with others? Couldn’t we think of bot battles over mathematical proofs, given a mathematical reasoning corpus? Or in a Street Fighter way, given a game environment? The idea is still the same: evolving high-level agents for multitasking, but in a community-friendly way. The reason behind it is my belief that the world is held by not one but multiple truths, each resonating more or less with the populations. Behind the established facts, everyone has a cohesive approach to their interpretations, and no one can provide all the keys to interpret them. So the approach should be plural.

That’s why this project might go along with the platform proposition. I think we’re still lacking a lot of tools to get there, and a proper way to handle trained neural nets as standard modules is one of them.

Is AGI Architecture a thing?

I recently got to make sense of Solomonoff’s induction and, while having no particular wonder for the thing, I ended up with this wonderful interview of Marvin Minsky on the top-down approach to AGI, after considering this induction principle.
And it still resonates in my mind, as I concluded, a long time ago, that we are more of a middle layer, from which both top-down and bottom-up approaches are doomed to build further away from the essence of intelligence instead of dissecting it.


A top-down approach, if it’s truly feasible, would be the world of architects specialized in AI. As my craziest dream is to become an AGI architect, developing AI architecture seems to be the right path.

So could we really reduce an AGI to a machine capable of finding the smallest Turing machine that fits a task? I’m unsure it’s enough, especially as we know perception is relative to evolution, and some human-based parametrization will be required for AGI. Though that probably describes the limit of what AI can do: this induction determines the length of the simplest way to get a task done. That’s not human clumsiness and risky shortcuts toward a “good-enough” result.
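To make the idea concrete, here is a toy sketch (entirely mine; a real Solomonoff inductor is uncomputable, and a description-length count over hand-picked lambdas is only a stand-in for “smallest Turing machine”): among candidate programs, keep the shortest description that reproduces the observed behavior of the task.

```python
def simplest_fit(observations, candidates):
    """Solomonoff-flavored toy: among candidate programs, given as source
    strings, return the shortest one whose function reproduces every
    observed (input, output) pair. The length of the source string stands
    in for the size of the Turing machine."""
    fitting = []
    for source in candidates:
        f = eval(source)  # toy only: build the function from its description
        if all(f(x) == y for x, y in observations):
            fitting.append(source)
    return min(fitting, key=len) if fitting else None

# Observed behavior of an unknown task: it seems to double its input.
obs = [(1, 2), (2, 4), (3, 6)]
candidates = [
    "lambda x: x * 2",           # fits, shortest description
    "lambda x: x ** 2",          # fits (2, 4) but not the other pairs
    "lambda x: x * 2 + 0 * x",   # fits, but longer description
]
best = simplest_fit(obs, candidates)  # the shortest fitting description wins
```

The induction principle rewards the shortest fitting description; the human “good-enough” shortcut would be more like stopping at the first candidate that fits, regardless of length.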

So AGI architecture still needs better requirements, but AI architecture could emerge.
Also, I see transfer learning trending here and there. Maybe that’s a sign that architectures of embedded knowledge are going to be the next step?

Platform Embedded Spaces – First use case?

I would love to start with Tamagoso, but it’s so incredibly, freakishly difficult, and I’m still looking at the mathematical spaces and operators, as well as the concrete cases, we could get out of it. Still documenting, still thinking; I need a simpler case to start something that ambitious progressively. I’ve got a lean canvas, a quick look at the technologies, and that’s it; I’m lost and don’t even know where to start my UMLs.

So maybe a case where I can take more of a trial-and-error approach would be best. I was thinking about a really 101 agent that could serve a nice purpose, and it seems we could do something of educational value:
A student might miss some parts of a course and, while this stays unclear as long as teachers are still discussing the notions, it snowballs quickly once advanced tasks depend on those missed fundamentals. A better way to detect it would be to seek patterns in each student’s exercises, by casting their mistakes from different source materials into different knowledge spaces.
A hierarchy and classification of those spaces can help create a reading canvas of the overall student performance. From the most general feature spaces, we can retrieve a general student pattern over multiple high-level skills, then get down to the more precise weaknesses. That way, we can prevent the student from snowballing into failure later because of initially missed notions, instead of leaving it undetected until the problem gets raised and it’s too late.
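A minimal sketch of that detection idea (the exercise tags, the skill mapping and the threshold are all hypothetical placeholders; a real system would learn these spaces rather than hard-code them):

```python
from collections import Counter

# Hypothetical mapping from exercise tags to the knowledge space they cast to.
SKILL_OF_EXERCISE = {
    "add_fractions": "fractions",
    "simplify_fractions": "fractions",
    "solve_linear_eq": "algebra",
    "factor_polynomial": "algebra",
}

def weak_skills(mistakes, threshold=2):
    """Cast each mistaken exercise into its knowledge space and flag the
    spaces where mistakes accumulate: those are the likely missed
    fundamentals, caught before they snowball."""
    counts = Counter(SKILL_OF_EXERCISE[m] for m in mistakes)
    return [skill for skill, n in counts.items() if n >= threshold]

# One student's mistakes, gathered across different source materials:
# "fractions" shows a pattern (2 mistakes), "algebra" is probably noise.
flagged = weak_skills(["add_fractions", "simplify_fractions", "solve_linear_eq"])
```

The hierarchy of spaces mentioned above would simply be several such mappings at different levels of generality, read from the most general down to the most precise.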

This could therefore be a pedagogical tool, but finding datasets and study cases won’t be easy.