The Consensus over Singularity

It is said that, the day a computer scientist understands life, a simulator to check the results will be underway before the first coffee break, the results will be confirmed after lunch, he will then simulate all conditions and, by the end of the afternoon, will have developed an algorithm that efficiently searches the tree of conditions for life.
In the evening, he will retire home and consider life solved.


Serf: A Candid Manifesto in a Few Words

An interface that allows exchange between two worlds;

its roots deeply ingrained in the reality of physical variables
its branches intelligibly structured in the reality of mental structures

An interface that could even go as far as maintaining the action of its user, functionally abstracting it, extracting the user's intent, even organizing and referencing all the roots (data sources and sinks) and branches (mental structures).


SERF, Thoughts

Programming from Data

Data-driven programming shouldn't bother with format first.

At first, the user provides a bunch of bits, or lets some region of memory take the given input value.

Then formats are added on top of it:
– Simple ones first, which allow building simple interpretations such as HTML, JPG, WAV, …
– Then more complex ones, built on top of these from more complex logic: tone, color, frequency, face, …

As the formats stack up, the interpretation gains detail and complexity: given infinitely many heterogeneous formats and infinite time, the input would tend to be fully resolved.
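This stacking can be sketched as a chain of detectors, each layer adding structure on top of the layers below. The format names and detectors here are illustrative assumptions, not any actual SERF implementation:

```python
# Minimal sketch of format stacking: raw bits first, simple formats detected
# on top, then more complex interpretations built on those layers.
# All format names and detectors are illustrative assumptions.

def is_png(raw: bytes) -> bool:
    return raw.startswith(b"\x89PNG\r\n\x1a\n")  # PNG magic number

def is_utf8_text(raw: bytes) -> bool:
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

def interpret(raw: bytes) -> list[str]:
    """Return the stack of formats that accept this bunch of bits."""
    layers = []
    if is_png(raw):
        layers.append("png")
    if is_utf8_text(raw):
        layers.append("utf8-text")
    # Complex formats grow on top of simple ones: "image" (and later color,
    # face, ...) only makes sense once a picture format has matched.
    if "png" in layers:
        layers.append("image")
    return layers

print(interpret(b"\x89PNG\r\n\x1a\n"))  # ['png', 'image']
```

Each new layer only consumes what the layers below already resolved, which is what lets the stack grow toward a full interpretation.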

Since formats are numerous, describing a variety of things and their abstractions, it helps to attach a validity domain to each format. That way, automated interpretation can reject some formats outright. By applying a learning orchestration algorithm, structures between those formats can be inferred to identify subsets and similarities or, mainly, to accelerate format testing; e.g., orchestrating the formats in a Bayesian tree accordingly.
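A crude version of this idea: give each format a validity check, and test formats most-probable-first. The checks and priors below are invented stand-ins for what a learned orchestration would estimate:

```python
# Sketch: each format carries a validity domain; automated interpretation
# rejects formats whose domain excludes the input, and tests the remaining
# ones most-probable-first. Formats, checks, and priors are invented.

def _decodes_utf8(raw: bytes) -> bool:
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

VALIDITY = {
    "jpeg": lambda r: r.startswith(b"\xff\xd8\xff"),
    "wav":  lambda r: r[:4] == b"RIFF" and r[8:12] == b"WAVE",
    "utf8-text": _decodes_utf8,
}

# Priors standing in for what a learned orchestration (e.g. a Bayesian tree
# over the formats) would estimate and keep refining.
PRIORS = {"utf8-text": 0.6, "jpeg": 0.3, "wav": 0.1}

def first_matching_format(raw: bytes):
    # Try the most probable formats first to accelerate testing.
    for fmt in sorted(PRIORS, key=PRIORS.get, reverse=True):
        if VALIDITY[fmt](raw):
            return fmt
    return None

print(first_matching_format(b"\xff\xd8\xff\xe0"))  # jpeg
```

A real orchestration would also update the probabilities as earlier tests succeed or fail, rather than using a fixed ordering.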

Each detected format links to a language with its own set of rules, which leads to an interpretation: a projection of the input node onto a space constrained by those rules.
Beyond the correctness of the projection, there can be multiple valid projections within the same set of rules.
For instance, I can say: “The blue car of daddy is really quick”, expressing the same fact as “Dad’s new car goes really fast”. These provide different subsets of information, different words within the same language rules, conveying the same meaning (by definition) and linking to the same node, for the same format, but interpreting it as two different sentences (sequences of expressions).
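The two example sentences can be sketched as distinct surface forms projecting to one underlying node. The canonicalization and vocabulary below are toy assumptions, not a real NLP pipeline:

```python
# Toy sketch: two sentences, same language rules, projecting to the same
# underlying node. The canonical word map and the node's vocabulary are
# invented assumptions, not a real NLP pipeline.

def to_fact(sentence: str) -> frozenset:
    canonical = {"daddy": "dad", "dad's": "dad", "quick": "fast"}
    content = {"dad", "car", "fast"}  # the node's semantic vocabulary (toy)
    words = [canonical.get(w, w) for w in sentence.lower().rstrip(".").split()]
    return frozenset(w for w in words if w in content)

a = to_fact("The blue car of daddy is really quick")
b = to_fact("Dad's new car goes really fast")
print(a == b)  # True: two valid projections of the same node
```

Note that the sketch deliberately discards the differing subsets of information (“blue” vs. “new”) to expose the shared meaning.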

From these sentences, I get new nodes. By adding format rules for natural language processing, I can get parsed expressions from them: a graph of new nodes. From there, I can either play with word matching to infer an interpretation structure, or develop the sentence structure from the sentence root, or… well, it’s Lego time.
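The “graph of new nodes” can be sketched with a deliberately crude parse rule; the structure chosen here (containment plus word order) is an illustrative assumption:

```python
# Sketch: applying a natural-language format rule to a sentence node yields
# a graph of new nodes -- here plain (parent, child) edges from a toy parse.

def parse_graph(sentence: str) -> tuple[set, set]:
    """Expand one sentence node into word nodes plus structural edges."""
    words = sentence.rstrip(".").split()
    nodes = {sentence, *words}
    edges = {(sentence, w) for w in words}   # the sentence contains each word
    edges |= set(zip(words, words[1:]))      # word-order structure
    return nodes, edges

nodes, edges = parse_graph("Dad's new car goes really fast")
# From here: match words across graphs, or walk the structure from the root.
```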

Also, even cooler, this gets pointers done right (since data is contained in its node) and lets the work focus on defining and structuring formats.

SERF, Thoughts

Lore vs Knowledge

An agent has acquired knowledge associated with an expert language; this language silently embeds whatever the agent assumes to be trivial or compressible, up to not mentioning it at all.
Another agent, unfamiliar with this implicit communication structure, will therefore not understand the first agent’s statements.

To fix this, the information needs to be described to the second agent. Depending on how distant the first agent’s knowledge is from the second agent’s, some aspects need to be described more verbosely, through lower-level language expressions, and implicit knowledge has to be made explicit and conveyed.

This is a tedious task for the first agent, who must:
– find the second agent’s level of comprehension
– review each high-level point that needs to be mentioned
– detail those points down to the second agent’s level of comprehension
– understand and adjust to the non-expert’s questions
– figure out suitable answers
– detail those answers down to the second agent’s level of comprehension
– revise every step that needs to be updated
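One small piece of such an interface can be sketched as a per-level glossary that expands the expert's implicit terms for the listener. The glossary, term, and levels are invented for illustration:

```python
# Sketch of one interface feature: it knows which expert terms are implicit
# in the first agent's statements and expands them to the second agent's
# level of comprehension. Glossary contents and levels are invented.

GLOSSARY = {
    "backprop": {
        "novice": "the method a neural network uses to learn from its mistakes",
        "student": "computing gradients with the chain rule, layer by layer",
    },
}

def translate(statement: str, level: str) -> str:
    """Expand each implicit expert term to the listener's level."""
    out = statement
    for term, expansions in GLOSSARY.items():
        if term in out and level in expansions:
            out = out.replace(term, f"{term} ({expansions[level]})")
    return out

print(translate("We tune it with backprop.", "novice"))
```

An expert listener gets the statement untouched; a novice gets the implicit knowledge unpacked inline, which is the verbose, lower-level description the text calls for.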

This could be handled by an interface; each agent needs a way of communicating that suits them. The interface must account for its agent’s knowledge, and acquire the features required to communicate across other languages and levels of expertise.

I would even argue that, without the ability to translate what needs to be communicated across levels of comprehension, languages, and representations, we humans would be unable to establish communication any more complex than that of any other creature.