SERF, Thoughts

An Inquiry Into a Cognitive System Approach

I recently heard from my director that the technologies and architecture patterns I was pushing for in new projects were too cutting-edge for our practices and our market.
When I tried to illustrate, through past projects, how such a technology or such a way of designing could have spared us unnecessary complexity (to implement, to modify and to maintain), I only got an evasive reply.

Truth is, he was right in his position, and so was I.
As a business thinker, he was rationally thinking: if I’m not educated in that technology, how can I convince a client unaware of it that it is a good decision to start a whole delivery process for it?
As a technology thinker, I was rationally thinking: why don’t we put more effort into understanding the client’s problem from the right perspective and applying the right solutions for them, so we avoid expensive delays and can scale functionally?

***

A simple case would be to define movements in a closed room. You can define them by referencing every place there is to reach, like a corner, the seat, the observer’s location, the coffee machine or the door. This is easy to list but doesn’t give you much granularity. As complexity arises, it becomes hard to maintain.
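As a minimal sketch of that first approach (the place names and allowed moves below are hypothetical), movement reduces to hopping between a handful of hand-listed places:

```python
# A hypothetical closed room described only by hand-listed, named places.
NAMED_PLACES = {"corner", "seat", "observer", "coffee_machine", "door"}

# Which places are reachable from which: every case is enumerated by hand.
ALLOWED_MOVES = {
    "door": {"corner", "seat"},
    "corner": {"door", "coffee_machine"},
    "seat": {"door", "observer"},
    "observer": {"seat", "coffee_machine"},
    "coffee_machine": {"corner", "observer"},
}

def can_move(src: str, dst: str) -> bool:
    """A move exists only if someone remembered to list it."""
    return dst in ALLOWED_MOVES.get(src, set())

print(can_move("seat", "observer"))        # True
print(can_move("seat", "coffee_machine"))  # False: nobody listed this case
```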

Let’s say I want to be close to the coffee machine, but not close enough to reach it. Or I want to reach the observer from different angles. I might want to get away from the door when I open it. And there are many more cases I could think of to better match the possibilities of reality.
What was a simple set of discrete locations becomes a pile of complex cases to define. As it grows larger and more entangled, heavy documentation will be required to maintain the system. Extending or updating it will be a painful and expensive process.
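To make that concrete, here is a hypothetical sketch of where this approach leads: each new nuance has to become yet another hand-named case with its own rules.

```python
# Hypothetical: every new requirement becomes another hand-defined name.
NAMED_PLACES = {
    "corner", "seat", "observer", "coffee_machine", "door",
    # New nuances pile up as new names, each needing its own move rules:
    "near_coffee_machine_but_out_of_reach",
    "observer_from_the_left",
    "observer_from_the_right",
    "away_from_door_while_it_opens",
}
# Every addition forces a review of every rule that mentions a place by name.
# The number of cases grows much faster than the room does.
```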

But video games solved that issue a long time ago, transitioning from the interaction capacity of point-and-click games, or location-based static 3D cameras, where only discrete views and discrete interactions were available, to composite spaces assembled through collision detection and rule-based events. It’s a way to automate a level of granularity that is beyond what a human could enumerate by hand.

Think of the squares on the room floor. What if I define them to be the set of discrete locations, preferring mathematically defined locations over manually defined ones?
What could you say about my previous locations that were human-readable, like being able to reach the coffee machine? Both in human intuition and in video games, it is a question of a radius around the detected item. The spatial approach gives our set an ordering of locations by distance; that ordering allows us to define the concept of a radius as a relation from our domain set to the subset of “close-enough-to-the-coffee-machine” locations.
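Here is a minimal sketch of that radius relation on a grid of floor squares (the cell size, the coffee machine’s coordinates and the 1.5 m threshold are assumptions made for illustration):

```python
import math

CELL_SIZE = 0.5          # meters per floor square (assumed resolution)
COFFEE_MACHINE = (7, 2)  # grid coordinates of the detected item (assumed)
REACH_RADIUS = 1.5       # what counts as "close enough", in meters (assumed)

def distance(a: tuple[int, int], b: tuple[int, int]) -> float:
    """Euclidean distance between two grid cells, in meters."""
    return CELL_SIZE * math.hypot(a[0] - b[0], a[1] - b[1])

def close_enough_to_coffee_machine(cell: tuple[int, int]) -> bool:
    """The radius relation: maps any location into, or out of, the subset."""
    return distance(cell, COFFEE_MACHINE) <= REACH_RADIUS

print(close_enough_to_coffee_machine((6, 2)))  # True: 0.5 m away
print(close_enough_to_coffee_machine((1, 2)))  # False: 3.0 m away
```

The point is that “close enough” is no longer a hand-listed place: any cell whatsoever can be tested against the same relation.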

***

The other good part of this approach is that we don’t need to formalize, or even enumerate, the target subset. As long as we have a consistent relation, we can decide whether a location eventually is, or is not, in reach, and with whatever degree of certainty we want if the relation is computed iteratively. We don’t need a long computation to formalize everything up front: the capacity to do so, with more or less computing effort, is good enough to give a definition.
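As a sketch of that idea (the room layout, the obstacle and the step budget below are all assumptions), reachability can be computed iteratively and stopped whenever the certainty is good enough, without ever materializing the subset of reachable cells:

```python
from collections import deque

# A hypothetical 10 x 10 room with an assumed partition that leaves a gap.
BLOCKED = {(4, y) for y in range(8)}

def in_reach(start, target, max_steps=50):
    """Iterative reachability: expand outwards from `start`, one cell per
    step.  A larger `max_steps` buys more certainty; a small one gives a
    cheap, partial answer without enumerating every reachable cell."""
    frontier, seen = deque([start]), {start}
    for _ in range(max_steps):
        if not frontier:
            return False                  # exhausted: definitely not in reach
        x, y = frontier.popleft()
        if (x, y) == target:
            return True                   # definitely in reach
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (x + dx, y + dy)
                if (nxt not in seen and nxt not in BLOCKED
                        and 0 <= nxt[0] < 10 and 0 <= nxt[1] < 10):
                    seen.add(nxt)
                    frontier.append(nxt)
    return None                           # budget spent: undecided, for now

print(in_reach((0, 0), (9, 9), max_steps=20))   # None: not enough certainty yet
print(in_reach((0, 0), (9, 9), max_steps=500))  # True: the gap lets us through
```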

Why would I say that? Because of precision.

As we started to mathematically define our room locations, we didn’t formalize how far we were going with it, and some uncertainty remains. Should the squares be large enough to encompass a human? Or small enough to detect the position of their feet? Maybe down to tiny pixels able to trace the shape of their shoes with great precision?

This is granularity, and it should be only as granular as the information you’re interested in, because granularity has a cost and a complexity associated with it.

Across our various square sizes, one thing looks invariant: there are 8 available locations from any location unconstrained by collision. So it seems the computational complexity stays the same. But don’t play Zeno of Elea too quickly: it is impacted.
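A sketch of that invariant (the grid bounds and the blocked cell are assumptions): whatever the square size, an unconstrained cell always offers the same eight candidate moves, and collisions only remove some of them.

```python
# The eight candidate moves from any cell, independent of the square size.
NEIGHBOR_OFFSETS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0)]

def available_moves(cell, blocked=frozenset(), width=10, height=10):
    """Neighboring cells that stay on the floor and are not blocked."""
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in NEIGHBOR_OFFSETS
            if 0 <= x + dx < width and 0 <= y + dy < height
            and (x + dx, y + dy) not in blocked]

print(len(available_moves((5, 5))))                     # 8: unconstrained cell
print(len(available_moves((0, 0))))                     # 3: walls block 5 moves
print(len(available_moves((5, 5), blocked={(5, 6)})))   # 7: one cell is occupied
```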

The first impact is that the real move is independent of the scale of measurement.
The second impact is that the chosen scale can be irrelevant to the observed phenomenon.

Going from a location A to a location B won’t change the actor’s behavior, but the number of measured values will explode, and with it the computational power and the space complexity needed to track the actor. If you decrease the size of the squares, you get more accurate measurements but much more data to process. If the size of the squares increases, you can no longer tell whether the person can reach the coffee machine from their location. You need to go only as far as the information has not yet turned into noise.
By and large, it’s a matter of finding the optimal contrast of the information at each scale.
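To put rough numbers on that trade-off (the 5 m by 4 m room is an assumption), halving the square size quadruples the number of cells to store and scan:

```python
ROOM_WIDTH, ROOM_DEPTH = 5.0, 4.0  # meters, an assumed room size

for cell_size in (1.0, 0.5, 0.1, 0.01):  # human scale down to "shoe-pixel" scale
    cells = (ROOM_WIDTH / cell_size) * (ROOM_DEPTH / cell_size)
    print(f"{cell_size:>5} m squares -> {int(cells):>9,} cells to track")
```

The same walk across the room produces twenty measurements or two hundred thousand, depending only on the scale chosen to observe it.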

***

What we have seen here with the room floor are three levels of scale:

  • First, raw discrete locations, readable at a human, functional level, but limited like point-and-click games or preset camera locations.
  • Then a first level of granularity that lets us detect rough locations; think of it as an old-style 2D RPG.
    On it we can start applying spatial logic, like a radius, to know which locations are in range.
  • Finally, a fine level of granularity that lets us go as far as describing the footprint of the actors in the room.
    That level of granularity is more common in 3D texturing for modern video games or in machine-engineered electronic devices such as touchpads or display screens. Every sensor, every pixel, has its own transistor (up to four in a colored pixel).
    When you get down to that level, either everything is identically formalized or you are in big trouble.

My point being: the problems of the markets haven’t changed and probably won’t. We’re starting to make mainstream the technologies needed to deliver a transition from point-and-click to 2D RPG. In some specific cases, we can even start reaching for fine granularity.

We can foresee that working at that fine level of granularity will benefit high-granularity technologies.
But the shift of business perspective, regarding how problems can be solved with more data, more granularity, better decoupled technical and functional levels of interpretation, and so on, has not happened yet! And yet it is only a shift of perspective that stands between us and this fine-grained way of rethinking our old problems and old projects, the ones where our approach already points out unfitting architecture patterns and processes, and a global lack of granularity and decoupling.

There are opportunities to uncover and technologies to emerge as we move towards such a transition in our traditional systems. We will have to work on more complex mathematical formalization and automation to unlock a wider realm of implementable functionality.
We can deliver better functionality that we, as the public, will use tomorrow. We just need to find this more granular way in our projects, where we could build higher added value.

The tools are there; the first ones to shift their perspective will take the lead.
Besides, the fun is in setting up granularity, not in improving it.