Brief, Thoughts

The Curious Task of Computation

There are really two shapes of reality:

The physical world, with its realm of mysterious relations and its weird capacity to be the same for all of us, and the sensory world, which is a deeply intimate experience, extremely difficult to describe (especially if you don’t share much DNA with us Humans).

As we’re not that good at making something objective out of the physical world with our subjective senses, we elaborated Physics as a way to describe the physical world based on how it acts upon itself. Meaning we can only correlate physical events with other physical events, building models made of wavelengths and energies: things that have no meaning to our sensory world.

From the second, we elaborated psychology as a way to describe the sensory world based on other sensing agents. Meaning we try to correlate ideas based on interpreted experiences and derive models. Similarly, it’s the sensory world acting upon itself, but with the extra difficulty of accounting for a variety of opaque sensory worlds. Even if the architecture is genetically similar, we cannot see what’s inside or understand it from the physical world’s perspective.

And, in between, as a way to match those really odd worlds, there’s the curious task of computation. This domain is a way to match physical events with sensory events bidirectionally: to carry information from the physical bit to an intelligible idea or function, then back to the bit.

Given this, my point is a question you should ask yourself:
Are we really fulfilling the objective we gave ourselves?

 

Addendum: this is inspired by, but different from, Hayek’s analysis in “The Sensory Order”. In his approach, he talks about “orders” instead of the “worlds” used in this article.
Comparatively, in the Serf Layer, the “physical world” is “perception” and the “sensory world” is “interpretation”, which I believe is a much more intuitive way to expose them functionally. But Hayek wrote in a very different era, when computers were still a post-war secret and the Tractatus Logico-Philosophicus was a recent book.

SERF, Thoughts

An Inquiry Into the Cognitive System Approach

I recently heard from my director that the technologies and architecture patterns I was pushing for in new projects were too cutting-edge for our practices and our market.
As I tried to illustrate, through past projects, how such technologies or such ways of designing could have spared us unnecessary complexity (to implement, to modify and to maintain), I only got an evasive reply.

Truth is, he was right in his position, and so was I.
As a business thinker, he was rationally thinking: if I’m not educated in that technology, how can I convince a client unaware of it that it is a good decision to start a whole delivery process around it?
As a technology thinker, I was rationally thinking: why don’t we put more effort into framing the client’s problem from the right perspective and applying the right solutions, so we avoid expensive delays and can scale functionally?

***

A simple case would be defining movements in a closed room. You can define them by referencing every place there is to reach, like a corner, the seat, the observer’s location, the coffee machine or the door. This is easy to list but doesn’t give you much granularity. As complexity arises, it will be hard to maintain.

Let’s say I want to be close to the coffee machine, but not close enough to reach it. Or I want to reach the observer from different angles. I might want to get away from the door’s location when I open it. And many more cases I can think of to better match the possibilities of reality.
What was a simple set of discrete locations becomes a complex set of cases to define. As it grows larger and more entangled, complex documentation will be required to maintain the system. Extending or updating it will be a painful and expensive process.

But video games solved that issue a long time ago, transitioning from the interaction capacity of point-and-click games, or location-based static 3D cameras, where only discrete views and discrete interactions were available, to composite spaces assembled through collision detection and rule-based events. It’s a way to automate a level of granularity that is beyond human computability.

Think of the squares on the room floor. What if I define them as the set of discrete locations, preferring mathematically defined locations over manually defined ones?
What could you say about my previous human-readable locations, like being able to reach the coffee machine? Both for human intuition and in video games, it is a question of a radius around the detected item. The spatial approach gives our set an ordering relation; that relation allows us to define the concept of a radius as a relation from our domain set to a subset of “close-enough-to-the-coffee-machine” locations.
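To make this concrete, here is a minimal sketch of the grid idea; the room dimensions, item coordinates and reach radius are illustrative assumptions, not part of any existing system:

```python
# Minimal sketch of a room modeled as a grid of square cells.
# Cell count, item positions and the "reach radius" are illustrative assumptions.

from math import hypot

GRID_WIDTH, GRID_HEIGHT = 20, 12          # room floor as 20 x 12 squares
ITEMS = {"coffee_machine": (18, 2), "door": (0, 6), "observer": (10, 10)}

def cells_within_radius(item: str, radius: float) -> set[tuple[int, int]]:
    """All grid cells whose centre lies within `radius` cells of the item."""
    ix, iy = ITEMS[item]
    return {
        (x, y)
        for x in range(GRID_WIDTH)
        for y in range(GRID_HEIGHT)
        if hypot(x - ix, y - iy) <= radius
    }

# "Close enough to the coffee machine" is just a subset of the location set:
reachable = cells_within_radius("coffee_machine", radius=2.0)
nearby_but_out_of_reach = cells_within_radius("coffee_machine", 5.0) - reachable
print(len(reachable), len(nearby_but_out_of_reach))
```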

***

The other good part of this approach is that we don’t need to formalize the target subset: as long as we have a consistent relation, we can decide whether a location eventually is, or is not, in reach, and with whatever degree of certainty we want if it’s an iterative function. We don’t need a long computation to formalize everything; the capacity to do so, with more or less computing effort, is good enough to give a definition.

Why would I say that? Because of precision.

As we started to mathematically define our room locations, we didn’t formalize how far we were going with that, and some uncertainty remains. Should squares be large enough to encompass a human? Or small enough to detect the location of their feet? Maybe down to little pixels able to capture the shape of their shoes with great precision?

This is granularity, and it should be only as fine as the information you’re interested in, because granularity has a cost and complexity associated with it.

From our various square sizes, we get the idea that the invariant is: there are 8 adjacent locations available from any location unconstrained by collision. So it seems the computational complexity stays the same. But don’t turn into Zeno of Elea too quickly; it is impacted.

The first impact is that the real move is indifferent to the scale of measurement.
The second impact is that the chosen scale can be irrelevant to the observed phenomenon.

Going from a location A to a location B won’t change the actor’s behavior, but the number of measured values will explode, and with it the computational power and space complexity needed to track the actor. If you decrease the size of the squares, you get more accurate measurements but much more data to process. If the square size increases, you’re no longer sure whether the person can reach the coffee machine from their location. You need to go only as far as the information is not yet noise.
By and large, it’s a matter of finding the optimal contrast at each scale of the information.
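A rough sketch of that trade-off; the room size, reach radius and cell sizes below are purely illustrative:

```python
# Illustrative only: the number of tracked cells (space complexity) explodes
# as the square size shrinks, while the "can I reach the coffee machine?"
# question only needs a resolution comparable to the reach radius itself.

ROOM_W, ROOM_H = 6.0, 4.0      # room size in metres (assumed)
REACH_RADIUS = 0.6             # arm's reach in metres (assumed)

for cell_size in (2.0, 1.0, 0.5, 0.1, 0.01):
    cells = int(ROOM_W / cell_size) * int(ROOM_H / cell_size)
    resolvable = cell_size <= REACH_RADIUS   # can the grid even express "in reach"?
    print(f"cell={cell_size:>5} m  cells={cells:>8}  reach question resolvable: {resolvable}")
```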

***

What we have seen here with the room floor are three levels of scale:

  • First, raw discrete locations: more on a human-readable, functional level, but limited, like point-and-click games or preset camera locations.
  • Then a first granularity that lets us roughly detect locations; think of it as an old-style 2D RPG.
    On it we can start applying spatial logic, like a radius, to know which locations are in range.
  • Finally, a fine granularity level that allows us to go as far as describing the footprint of the actors in the room.
    That level of granularity is more common in 3D texturing for modern video games or in machine-engineered electronic devices such as touchpads or display screens. Every sensor, every pixel, has its transistor (up to four in a colored pixel).
    When you get down to that level, either everything is identically formalized or you are in big trouble.

My point being: the problems of the markets haven’t changed and probably won’t. We’re starting to make mainstream the technologies that deliver a transition from point-and-click to 2D RPG. In some specific cases, we can even start reaching for fine granularity.

We can foresee that low-level granularity will benefit high-granularity technologies.
But the shift of business perspective, regarding how problems can be solved with more data, more granularity, better decoupled technical and functional levels of interpretation, and so on, has not happened yet! Yet we are just a shift of perspective away from pivoting to this low-level, granular way of thinking about our old problems and old projects; the very projects where we point out unfitting architecture patterns and processes, and a global lack of granularity and decoupling.

There are opportunities to unravel and technologies to emerge as we move towards such a transition in our traditional systems. We will have to work on more complex mathematical formalization and automation to unleash a wider realm of implementable functionalities.
We can deliver better functionalities that we will use as the public of tomorrow. We just need to find this more granular way in our projects, where we could build higher added value.

The tools are there; the first ones to shift their perspective will take the lead.
Besides, the fun is in setting up granularity, not in improving it.

SERF

The Day I was Over Optimistic …

Ok, this title might be really hard to get if you don’t have my notes in front of you.

Still, this Sunday, June 30th, around 5 pm, as I was thinking about what I had just written down in my notebook, I thought that I might have reached the bottom of the project: the sounding ground from which I can start building Serf safely.

That really feels good after so much philosophico-mathematical drama, and struggles around ideas like agents, communication, domain languages, representation, and so on. I guess something really stupid was, at last, blocking me conceptually; but attributes are just simpler children, and no distinction really exists.

So, yay, I just need to find how to put all those representational spaces together nicely (fortunately, they don’t have the same dimensions, so let’s figure out how they can group up) and go through ALL my heaps of notes to synthesize them (part is already done, but lots of upgrades are required), then I’ll be in the clear to start coding \o/

(even better: I might have a project to build it along with user feedback)

[edit 02/09] Gosh… when did I plan that post?! I’ll leave it there as a memorandum, but there really is no “threshold point” in this project. Is there a Zeno’s paradox about achieving something and getting a clear idea of what needs to be built?

Thoughts

You CANNOT Define Semantics

I couldn’t find a way to approach a definition of the semantics of a sentence.

It is like the holes in cheese: they can’t be fully described by their own essence and, even if the cheese is not important to what’s inside the holes, you lose your holes if you lose your cheese.

I couldn’t define semantics properly, but I could find a lot of “iso-semantic” examples; I mean “you can change the sentence but not the meaning” cases. It can actually be done easily and at different levels of abstraction:

  • Alternative spellings and grammar (character level)
  • Synonyms (word level)
  • Sentences and expressions in a context (sentence level)
  • A more global context (that one is still blurry for me)
  • Character nuance (you can stylize a character without altering the semantics)

So it tells you that different sentences can convey the same meaning. There you get that defining semantics is like removing the cheese to better see the holes. Though you need to cut finely around them; otherwise you get nothing to talk about.

[edit 2 weeks later] Actually, you somehow could define semantics; but you need to do it on top of some mechanism or machine, in a transitive way. For instance, you can define a programming language to encompass the syntax of some commands. Then you extract the Abstract Syntax Tree and, finally, match the tree structure with the commands of a machine that carries out those commands (let’s say a Petri net, because those are great, but there are much more modern and complex alternatives). That way, you have extracted the semantics of a piece of code, but the semantics is bounded to the underlying machine’s domain. Meaning your machine has the same limitation as a trained neural net: it only interprets sentences according to the states it has registered. The other way out would be through combinatorics?
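A toy illustration of that transitive definition; the little stack machine and its command set below are invented for this sketch and stand in for the Petri net mentioned above:

```python
# Toy illustration: "semantics" defined only relative to an underlying machine.
# The stack machine and its command set are invented for this sketch.

import ast

def compile_expr(tree: ast.AST) -> list[tuple]:
    """Match AST nodes to the commands of a tiny stack machine."""
    if isinstance(tree, ast.Expression):
        return compile_expr(tree.body)
    if isinstance(tree, ast.Constant):
        return [("PUSH", tree.value)]
    if isinstance(tree, ast.BinOp) and isinstance(tree.op, ast.Add):
        return compile_expr(tree.left) + compile_expr(tree.right) + [("ADD",)]
    raise ValueError("no registered state for this construct")  # the machine's limit

def run(commands: list[tuple]) -> int:
    stack = []
    for cmd, *args in commands:
        if cmd == "PUSH":
            stack.append(args[0])
        elif cmd == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

program = compile_expr(ast.parse("1 + 2 + 3", mode="eval"))
print(program, "=>", run(program))  # the meaning exists only via the machine
# compile_expr(ast.parse("2 * 3", mode="eval"))  # would raise: outside the machine's domain
```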

So… that question is surely not closed by such a small blog post. It might be better stated as “Could we define a non-projective semantics?” (meaning a semantic formalization that exists independently from any implementation, system or machine).
As this blog’s target is the pair human brain/natural language, this question might tell us whether we can simulate a machine that understands like humans without having to simulate a human brain and human education.

 

[edit 4 weeks later] Seeing the absence of reactions to this post, I might have gotten something wrong or misunderstood something.

Thoughts

The Consensus over Singularity

It is said that, the day a computer scientist understands life, a simulator to check the results will be underway before the first coffee break, the results will be confirmed after lunch, he will then simulate all conditions and, by the end of the afternoon, will have developed an algorithm that efficiently searches the tree of conditions for life.
In the evening, he’ll retire home and consider life solved.

SERF

Serf: A Candid Manifesto in a Few Words

An interface that allows exchange between two worlds;

its roots deeply ingrained in the reality of physical variables
its branches intelligibly structured in the reality of mental structures

An interface that could even go as far as maintaining its user’s action, functionally abstracting it, extracting the user’s intent, even organizing and referencing all the roots (sources and data sinks) and branches (mental structures).



SERF, Thoughts

Programming from Data

Data-driven programming shouldn’t bother with format first.

The user first provides a bunch of bits, or lets some part of memory take the given input value.

Then formats are added to it:
– Simple ones at first, which allow building simple interpretations such as html, jpg, wav, …
– Then increasingly complex ones on top of them, from more complex logics: tone, color, frequency, face, …

As the formats stack up, the interpretation gains detail and complexity: the input tends to be fully resolved only with infinitely many heterogeneous formats and infinite time.

As formats are numerous, describing a variety of things and their abstractions, it is helpful to rely on a validity domain for each format. That way, automated interpretation can reject some of the formats. By applying a learning orchestration algorithm, structures between those formats can be inferred to identify subsets and similarities or, mainly, to accelerate format testing, e.g. by orchestrating formats in a Bayesian tree.
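A minimal sketch of the validity-domain idea; the formats and their validity predicates below are invented and far cruder than real format detection:

```python
# Invented example: each "format" is just a validity predicate over raw bytes.
# This only illustrates rejection by validity domain, nothing more.

FORMATS = {
    "jpg":  lambda b: b[:2] == b"\xff\xd8",            # JPEG magic number
    "wav":  lambda b: b[:4] == b"RIFF",                # RIFF header
    "html": lambda b: b.lstrip()[:1] == b"<",          # crude: starts with a tag
    "text": lambda b: all(c < 128 for c in b),         # plain ASCII
}

def candidate_formats(raw: bytes) -> list[str]:
    """Keep only the formats whose validity domain contains the input."""
    return [name for name, valid in FORMATS.items() if valid(raw)]

print(candidate_formats(b"<html><body>hi</body></html>"))   # ['html', 'text']
print(candidate_formats(b"\xff\xd8\xff\xe0...jpeg data"))    # ['jpg']
```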

Each detected format links to a language, with its own set of rules, which will lead to an interpretation; meaning a projection of the input node onto a space constrained by those rules.
Besides the correctness of the projection, there can be multiple valid projections within the same set of rules.
For instance, I can say “The blue car of daddy is really quick”, expressing the same fact as “Dad’s new car goes really fast”. They provide different subsets of information and different words within the same language rules, while conveying the same meaning (as per the definition), linking to the same node, for the same format, but interpreting it in two different sentences (sequences of expressions).

From these sentences, I get new nodes. By adding format rules for natural language processing, I can get parsed expressions out of them: a graph of new nodes. Then I can either play with word matching to infer an interpretation structure, or build on the sentence structure from the sentence root, or… well, it’s Lego time.
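A crude sketch of that graph-of-nodes idea; the tokenizer and the synonym table below are made up and stand in for real NLP format rules:

```python
# Crude sketch: two surface sentences mapped onto a shared set of concept nodes.
# The tokenizer and the synonym table are invented; real NLP rules would be far richer.

SYNONYMS = {            # surface word -> concept node (illustrative)
    "daddy": "father", "dad's": "father",
    "quick": "fast", "fast": "fast",
    "car": "car", "really": "very",
}

def concept_nodes(sentence: str) -> set[str]:
    words = sentence.lower().replace(".", "").split()
    return {SYNONYMS.get(w, w) for w in words}

a = concept_nodes("The blue car of daddy is really quick")
b = concept_nodes("Dad's new car goes really fast")
print(a & b)   # shared nodes: {'father', 'car', 'very', 'fast'}
```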

Also, even cooler, it gets pointers done right (as data should be contained in its node) and lets us focus on defining and structuring formats.

SERF, Thoughts

Lore vs Knowledge

An agent has acquired knowledge associated with an expert language; this silently embeds what it assumes to be trivial, or compressible to the point of not mentioning it in detail.
Another agent is unfamiliar with that implicit communication structure and therefore won’t understand the first agent’s statements.

To fix this, the information needs to be described to the second agent. Since the first agent’s knowledge is more or less distant from the second agent’s, some aspects need to be described more verbosely, through lower-level language expressions, and implicit knowledge has to be added and conveyed.

This is a tedious task for the first agent, who has to:
– find the second agent’s level of comprehension
– review each high-level point that needs to be mentioned
– detail those points down to the second agent’s level of comprehension
– understand and adjust to the non-expert’s questions
– figure out the suitable answers
– detail those answers down to the second agent’s level of comprehension
– alter every step that needs to be updated

This could be handled by an interface: each agent requires a way of communicating that suits them. The interface needs to account for its agent’s knowledge, and to acquire the features required to communicate with other languages and levels of expertise.
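A toy sketch of such an interface; the comprehension levels and the glossary below are invented for illustration:

```python
# Toy sketch: an interface that expands an expert statement according to the
# recipient's level of comprehension. Levels and glossary are invented.

GLOSSARY = {  # expert term -> lower-level explanation
    "AST": "a tree structure representing the parsed source code",
    "idempotent": "safe to apply several times with the same result",
}

def adapt(statement: str, recipient_level: int) -> str:
    """Expand implicit expert terms when the recipient's level is low."""
    if recipient_level >= 2:          # expert recipient: no expansion needed
        return statement
    for term, explanation in GLOSSARY.items():
        if term in statement:
            statement = statement.replace(term, f"{term} ({explanation})")
    return statement

print(adapt("The deploy step is idempotent.", recipient_level=2))
print(adapt("The deploy step is idempotent.", recipient_level=0))
```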

I would even consider that, without the ability to translate what needs to be communicated between levels of comprehension, languages and representations, we humans wouldn’t be able to establish communication any more complex than that of any other creature.

Brief, Thoughts

The curious crusade of today

It seems today that social networks are the vehicles of a completely nihilistic approach to information, one that expands because it can only do so. Here’s my reasoning:

In 2019, everything that is not too violent or pornographic can, and probably will, be seen. “Rule 34” is even a way to refer to that idea of content explosion. Which is awesome, considering that limited communication had always been a well-established fact for Humanity.

Then you have the special case of what is “shocking”. While there’s a strong consensus over violence and porn, though the threshold differs from one individual to another, the consensus falls apart when points of view differ.
Atheists are more tolerant of blasphemy, nudists are more tolerant of nakedness, war victims are less tolerant of violence, etc. Depending on who we are, we don’t receive the same content in the same way; some “alarms” might be triggered more easily.

Except that this is now a thing of the past.
What creates buzz is not really what is most enjoyable, but what creates debate and polarization. And when the controversy is about whether or not to show something, the ban camp has already lost. As the polemic grows, the interest in the information that was meant to stay hidden is what’s now buzzing (the Streisand effect).

Finally, you cannot fight for censorship in this new connected age; it just makes things worse.
Does that mean, then, that no sensitivity should be spared?
Some people can be really hurt by opinions aimed at their beliefs or communities, but they will have to avoid what shocks them and hold their grievances, to avoid igniting the social networks. Whatever your true beliefs are, censorship worldwide seems set to decline until it disappears, because it has simply become obsolete.

But then nihilism will become the apparent norm, as the only way to communicate. Kids will grow up in constant nihilism, some becoming culturally passionless and extremely tolerant of content. Is that a good thing?

Or, eventually, will political correctness shrink our domains of communication, and will the matter sort itself out by leaving nothing left to communicate about?