Brief, SERF

Dead Man’s Switch

Hi everyone, long time no see! It’s time to debrief a bit about what I’ve been up to and about the other blog threads I left on pause for a while.

First, I’m really glad to announce that I made good use of that second lockdown by putting together everything Serf-related (even TreeAlgebra, MetaMesa, LabAware, BrainFarm, openBrainTer, …). This means there’ll be new material for MetaMesa, but I’m not sure how to release it yet.

But the real point is to realize a synthesis that won’t be easily lost.
Life is stupidly short; it was never designed for people to accumulate knowledge and work on long-term projects. That’s a commodity of modern society and long lifespans. Truth is, a virus or a car could take your life tomorrow, and every long-term progression would be lost with it.

As I live in the fear of dying at any time, I made this ambitious project (with this blog as a corollary) to give my last breath the satisfaction of having brought something worthwhile and meaningful with it. As the risk of dying, or otherwise being unable to complete this project, is real, I decided to build a fail-safe of sorts: a 2.7 GB PDF containing 1604 pages of raw notes, messily organized and badly formatted (the pleasure of working with LaTeX from so many sources).

This document is a mix of French and English, it’s not sorted by theme, and it could be really hard to understand, as I still need to produce a cleaner version (but that’ll come in the long run). Still, I have some hope that publishing it will prevent these notes from becoming fully lost if anything happens; there’d be a chance for someone to take over and extract the most interesting bits out of it.
It will be published here 20 years from now, unless I decide to unschedule it; but even if I get to make a clean version, I doubt I’ll ever remove those notes from publication.
This constitutes my “dead man’s switch”.

What about the other threads?

It seems I’m gaining more regular readers these days. That means if I keep posting “in a rush” before going to sleep, then reworking the post the next day because one of my sentences isn’t intelligible or the cover picture is missing, the readers who get it “fresh” will also see a draftier version of the post. (It even happened that I almost completely rewrote a post the next day.)

But this blog is also a raw-thoughts realm where I try to keep my pleasure of “open brainstorming” instead of “closed reworking”.
I mean, I’m starting to feel compelled to be more parsimonious and rigorous in this exercise, to better show what I’m capable of, but that would reduce my pleasure and my productivity, far from the initial lean philosophy of “build it first, fix it later”.

The thread on LYFE, for instance, has 3 drafts pending and I can’t figure out how to get back to it. I’ll probably need to better understand the Gray-Scott model and to give some thought to why removing metabolism, which leaves only one loop (the homeostatic one) instead of 2 opposing feedback loops, is an acceptable model. (In signal processing you can do a lot of weird stuff, but biological models have a lot of constraints preventing you from playing with scales of values.)

As I said above, the Meta-Mesa topic really evolved well these last months and I’d be glad to publish something about it, but it’s also part of the synthesis and cleanup work on my raw notes PDF.
Let’s see how this goes; moving forward with this document, I’ll probably have old and new topics to publish about in the coming months or years.

Well… That’s it for now! If you want to see more about that dead man’s switch, let’s meet here in 2040! (Don’t forget to program the reminder in your robot butler or your talking self-flying car, obviously.)


Lyfe, Thoughts

My Take on LYFE Part II: Principal Component Analysis I/II

To understand this post, it is advised to read the previous one first.

In this post, I will try to get a sense of the 4 functions (or “pillars”) used in the definition of Lyfe, and eventually start bridging them towards a more information-theoretic approach by determining what those functions act upon, and to what extent, in order to isolate the key components we’ll need. (More like 2 functions, actually, because this got longer than expected.)
In other words: we have been given the recipe, and we will look for the ingredients to eventually apply it to a dish of our own taste.

And, regarding my own job search, I found an EU project named the Marie Skłodowska-Curie Innovative Training Network “SmartNets”; they are trying to study and model mouse and zebrafish brains from the molecular level up to the larger scope of interconnected graphs. It’s ambitious and it looks amazingly cool! I cannot apply in my own country, but there are many interesting opportunities nearby, and they even come with a renewable 9-month contract for young researchers and the opportunity for a PhD thesis in computational neuroscience *giggles*.

Although I have more the vocation than the grades, and my application is currently an underachieved mess, it would feel like a deep mistake to pass on such a cool opportunity bridging neuroscience and computer science while gaining researcher experience!

How it’d feel to be taken for a neuro-computing thesis vs. how it’d feel to actually obtain it!

Ok, let’s get back to business! So the first question to ask is…

What is a Dissipative System?

The most obvious dissipative system I could think of

This is one of those concepts that physicists have been musing over way too much.
I could say that it is any system that is not lossless (conservative) when energy flows through it; but any non-idealized (or “real”) system is dissipative, as reality always has friction, so… let’s not go there.

If you really want to get to the formalism, KU Leuven has some nice slides (which nicely overcomplicate it, as Science should).
The real key to this being:

  • Energy flows through the system, or part of it.
  • The incoming power (energy per unit of time) exceeds the variation of the energy stored by the system (what it gains as energy per unit of time).
  • Energy being a conserved quantity, that inequality means something is lost, usually as heat.

To get back to my resistor: I apply a power source to it, which makes some energy flow through it, and, as it doesn’t store electric energy, it dissipates the electrical energy as heat, meaning it becomes hotter. (So does every electric component, as they all have a non-null resistance; but we said we were only using idealized components.)
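To make that balance concrete, here’s a minimal sketch (all values are made up) of the dissipative inequality for an idealized resistor: whatever flows in and isn’t stored leaves as heat.

```python
# Dissipative balance for an idealized resistor: P_in > dE_stored/dt,
# the difference leaving the system as heat. Values are made up.
V = 5.0     # volts applied across the resistor
R = 100.0   # ohms
t = 10.0    # seconds of operation

p_in = V**2 / R           # electrical power flowing in (watts)
p_stored = 0.0            # an ideal resistor stores no energy
p_heat = p_in - p_stored  # everything not stored is dissipated as heat

e_heat = p_heat * t       # energy released as heat over t seconds (joules)
print(f"power in: {p_in} W, dissipated as heat: {e_heat} J")
```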

A non-dissipative system, prosaically called a “conservative” system.
Energy flows while always peaking at the same value

BUT… that was much too easy!
Let’s push this concept around a bit. In a conservative system such as a pendulum, the energy flows between 2 states, kinetic energy and potential energy, oscillating between a gain of height against gravity and a gain of speed against inertia.
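A quick numerical check of that oscillation, under the small-angle approximation (all parameters are arbitrary): kinetic and potential energy trade places, but their sum stays flat.

```python
import math

# Small-angle pendulum: theta(t) = theta0 * cos(omega * t), omega^2 = g/l.
# Kinetic and potential energy oscillate in opposition; their sum is constant.
m, g, l, theta0 = 1.0, 9.81, 2.0, 0.1   # arbitrary parameters

omega = math.sqrt(g / l)

def energies(t):
    theta = theta0 * math.cos(omega * t)
    theta_dot = -theta0 * omega * math.sin(omega * t)
    kinetic = 0.5 * m * l**2 * theta_dot**2
    potential = 0.5 * m * g * l * theta**2   # small-angle approximation
    return kinetic, potential

totals = [sum(energies(i * 0.01)) for i in range(1000)]
print(max(totals) - min(totals))  # ~0: energy only changes form, never level
```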

So, in a pendulum, there’s just a transformation of energy. But wasn’t that also the case when my resistor changed electrical energy into heat?
Here comes the slippery slope!

Hierarchy of energy flows by conservation efficiency
Having mechanical below electrical is not that weird: a rock on top of a mountain conserves its energy well but, if you want energy to flow from a power plant to your house, you’ll have a lot more loss with mechanical energy

Think of it this way: there is some sort of hierarchy between forms of energy. Thermal energy is the lowest form: you can only extract power by letting it balance two systems, one of them having a lower temperature (less thermal energy) than the other. After that, entropy is maximal; nothing else to do there.

But it is not the same for other forms of energy, which can still degrade (usually as heat, i.e. thermal energy). When our pendulum’s energy flows between potential and kinetic, it actually stays at the same level: these are 2 sorts of mechanical energy, so in the idealized case no entropy is produced and therefore no lower level of energy (heat) appears. But if you convert electrical to mechanical energy, you will have a loss going back and forth.

This is where I want to extend the idea of a dissipative system: it’s not simply “losing” energy as it flows through the system (received > stored), it’s losing energy level, which also counts as entropy production. Therefore, if my electrical energy becomes mechanical energy which becomes thermal energy, my system has a cascade where 2 different dissipative inequalities apply.

For instance, a robot produces entropy both by processing information (electricity -> heat) and by driving actuators (electricity -> mechanism -> heat).

What is an Auto-Catalytic System?

I remember my late chemistry teacher explaining that a catalyst is something used by a chemical reaction and then given back: it is not consumed by the reaction, but it either enables or enhances it.
For instance, consider a chemical reaction using A and B as reactants and D as a product:
A + B + C -> D + C

This would logically be simplified to A + B -> D, as the catalyst C isn’t consumed.
But this notation doesn’t show the kinetics of the reaction, which would better illustrate C acting as the catalyst. There might be an intermediate compound AC produced in order to react with B and get to D; or C might be required to reach some energy level that the reaction cannot reach on its own.

As a general definition, a catalyst is something that helps the transition from one state to another, more stable, state without being consumed.

But we are after “auto”-catalysis, meaning the reaction creates the conditions that facilitate itself.

I could go back to a chemistry example, but those are boring and abstract. Instead, I’ll take the opportunity to advertise a strongly underrated popularization channel called The Science Asylum, which I’ve been heavily consuming for a month, with the pleasure of (re)discovering cool physics concepts.

In this crazy video, he explains how it would be possible to terraform Mars with a (reasonably) giant magnet to regenerate its magnetic field (I guess it’s OK to be a little crazy… check it out!)

In the case depicted in the video, a magnet is used to strengthen Mars’ magnetic field and get it to regenerate.
The part not made explicit in the video is the auto-catalytic behavior we end up with.

Think of it this way: Mars is a cold rock with a thin atmosphere because most of its remaining constituents are frozen at the poles. If you let the planet increase its atmospheric density (by retaining it with a stronger magnetic field, for instance), the greenhouse gases will start to retain more of the sun’s radiation. This leads to a warmer surface, which evaporates more frozen atmosphere, which increases the atmospheric density, which increases the greenhouse effect… and that’s how aliens built our sun!

Well, not really, as that method won’t actually work for many reasons, one of them being that the atmosphere will eventually run out of fuel (reactant) and saturate. But there you have it: that positive feedback loop accelerating its own action is the auto-catalytic phenomenon.
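That ignition-then-saturation behavior is easy to reproduce with the textbook autocatalytic reaction A + B -> 2B: B accelerates its own production until the reactant A runs out. A minimal sketch (rate constant and initial amounts are made up):

```python
# Autocatalytic reaction A + B -> 2B, integrated with a crude Euler scheme.
# B's growth accelerates itself (positive feedback), then saturates once
# the reactant A is exhausted. Rate constant and amounts are made up.
k, dt = 1.0, 0.01
a, b = 1.0, 1e-3           # lots of reactant, a tiny "ignition" of B
history = []
for _ in range(5000):
    rate = k * a * b       # the more B there is, the faster B is produced
    a -= rate * dt
    b += rate * dt
    history.append(b)
print(f"final B: {b:.4f}")  # close to 1.001: all of A converted, growth saturated
```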

Example of the effect of a catalyst reducing the energy required to transition to a new state (figure from Wikipedia)

What you can get out of this is that, in order to get a catalysis, you need to move from a given state to a new, more stable state (meaning it has less energy left to release). You start from a static state and end up with a saturation, meaning the reaction is not possible anymore (the reactants are all used up or the environment is saturated).
Weirdly enough, I conceptualize it more like the behavior of magnetic permeability.

The permeability µ reaches a peak and declines when the ferromagnetic material gets saturated (figure from Wikipedia’s “Saturation (magnetic)” article)

An important observation about catalysis is the increase in local entropy, as your final state has a lower level of energy.
The hardest part is to put the “auto-” in front of the catalysis. It has to be a system like a marble that only needs a gentle push to roll all the way down the hill, or a campfire that just requires an ignition to keep producing the heat it needs to continue burning until it runs out of wood.

What’s next?

I split this part into 2 sub-parts as it got much longer than expected.
As usual, I’m trying to avoid too much reworking on this blog and to approach it more like building a staircase step by step; that means an eventual rework at the end, as I’m not fully sure yet where I’m going with it.

Next time, I’ll write about homeostatic systems, learning systems (if that were an easy one, this blog would be a single post) and maybe compare them to existing real-world experiments or to the most general notion of a program (a Turing machine, of course).
So far my feeling is that, if it gets somewhere, this technology would be limited to what’s currently done by batch processing in large data systems.
Well, let’s see…

Brief, Lyfe, Thoughts

My Take on LYFE Part I: Expectations and Hype

So I still need to write the next Meta/Mesa post from my notes, as this tool is still intriguing to me; but my interest shifted incredibly fast after reading this article (in French), which refers to the paper Defining Lyfe in the Universe: From Three Privileged Functions to Four Pillars.

So, you know me; strong, sudden and powerful hype that doesn’t last (especially after I write a post about a given topic). BUT… this publication is fresh, extremely cool, and people will keep talking about it in the coming years (no joke, check the stats below).

LYFE article: impact progress since publication

So, what is this LYFE (pronounced “loife”) about?

In short, it is a functional theory of life in its most general form (Life being a subset of Lyfe) that rests on these 4 functions:

  • Autocatalysis
  • Homeostasis
  • Dissipation
  • Learning

From those, they predict theoretical forms of “lyfe” that “life” hasn’t produced. For instance, below is a mechanotroph organism that uses mechanical energy extracted from a fluid to produce ATP, similarly to photosynthesis in the green leaves of our plants.

Theoretic mechanotroph organism

In short, it theorizes that a Gray-Scott model that could learn would be a living creature (validating all 4 functions, therefore being an instance of Lyfe).

Simulation of a Gray-Scott model
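Since the Gray-Scott model will come up again, here’s a bare-bones sketch of its two coupled reaction-diffusion equations (parameters are taken from a classic stable regime; grid size and seeding are arbitrary):

```python
import numpy as np

# One Gray-Scott reaction-diffusion system: u is the "feed" chemical, v the
# autocatalytic one (u + 2v -> 3v). Parameters are from a classic regime.
Du, Dv, F, k, dt = 0.16, 0.08, 0.035, 0.065, 1.0

def laplacian(z):
    # 5-point stencil with periodic boundaries
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

def step(u, v):
    uvv = u * v * v                                  # autocatalytic term
    u = u + dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v = v + dt * (Dv * laplacian(v) + uvv - (F + k) * v)
    return u, v

# Uniform u=1, v=0, with a small square of v "seeded" in the middle
n = 64
u, v = np.ones((n, n)), np.zeros((n, n))
v[28:36, 28:36] = 0.5
for _ in range(200):
    u, v = step(u, v)
print(u.min(), u.max(), v.min(), v.max())  # concentrations stay in a sane range
```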

As any engineer, I learned statistical thermodynamics and I am still fascinated by Boltzmann’s work, but there’s nothing I could bring that David Louapre, Stuart Bartlett or Michael Wong wouldn’t already put on the table (actually, I couldn’t bring anything at all to the avenue crossing thermodynamics and biology, unless it’s neuron-related).

But that’s perfectly fine, because I am looking at it from a totally different perspective!
I’d like to review their work step by step on this blog and produce a translation into computer science based on information theory (which obeys laws similar to thermodynamics… or the other way around) in order to theorize what would make a software component “alyve” (Y can’t wayt for the result!)

And, as my blog is about raw thoughts and freedom of speech, I’ll put some here about the main functions:

  • Learning might be an expandable function/category, as I don’t believe just putting some Hebbian rule there would encompass the whole concept of learning
  • The most obvious parallel in algorithms is the Boltzmann machine, but temperature there is a metaheuristic parameter that corresponds more to internal state than to processed data
  • Dissipation is not clear to me yet; I believe any running instance should be considered a dissipative system
  • Autocatalysis is the one that gives me the most trouble to put in the context of an information system. Would it be like a function that spawns threads as it runs? How does it saturate in our case? Should we consider something like a load of tasks (sensors, actuators or programs) to be processed?
  • Homeostasis might be the easiest starting point; it should be simulable with a Proportional-Integral controller in a closed-loop system, and it is also linked to what dissipation and autocatalysis apply to
  • Could the Gray-Scott model be used to naturally spawn multi-agent systems?
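To illustrate the homeostasis bullet, here’s a minimal sketch of a Proportional-Integral controller holding an internal variable at a setpoint despite a constant disturbance (gains, setpoint and disturbance are all made up):

```python
# A Proportional-Integral controller as a toy homeostat: it holds an
# internal variable at a setpoint despite a constant disturbance.
# Gains, disturbance and setpoint are made up for the sketch.
setpoint, kp, ki, dt = 37.0, 0.5, 0.1, 0.1
state = 30.0          # internal variable, starts off-target
disturbance = -1.0    # constant pull away from the setpoint
integral = 0.0

for _ in range(2000):
    error = setpoint - state
    integral += error * dt
    control = kp * error + ki * integral   # PI law
    state += (control + disturbance) * dt  # simple first-order plant

print(f"state after regulation: {state:.3f}")  # converges near 37.0
```

The integral term is what absorbs the constant disturbance; a purely proportional controller would settle with a permanent offset.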

Gosh, this seems so cool! I can’t wait to start!
Oh wait… I still need to send CVs to find a new job… If anyone has heard about a cool researcher/PhD candidate position, that would help :’)

Brain Farming, Thoughts

Pyramidal Neurons & Quadruple Store

In a previous article reviewing the book “On Intelligence”, I mentioned something I found fascinating: the 3-dimensional encoding nature of fundamental sensing, such as:

  • sound: amplitude, frequency and duration
  • color: luminosity, hue and saturation
  • pain: intensity, location and spread

I saw this Quora discussion regarding the definition of Ontology, about a paper from Parsa Mirhaji presenting an event-driven model in a semantic context that could also be encoded in a triple store.

If you’ve already stumbled upon the semantic web and all its ontological weirdness, you’ve probably come across the concept of an RDF triplestore, or Turtle for intimates, which lets you encode your data using a global schema made of 3 elements:
the subject, the predicate and the object.

This lets you encode natural-language propositions easily. For instance, in the sentence “Daddy has a red car”, Daddy is the subject, has the predicate, and a red car the object. As a general rule of thumb, everything coming after the predicate is considered the object. The subject and object can be contextually interchangeable, which allows deep linking. Multiple predicates can also apply to a subject, even if the object is the same.
“I have a bass guitar” and “I play a bass guitar” are 2 different propositions that differ only by their predicate, and it would be more natural (in natural language) to express a reference within the same sentence, such as “I play the bass guitar that I have” (although you’ll notice I inferred the bass guitar is the same one; it is a bit of a shortcut).

If I had the idea of converting my “simple” triplestore to a SQL table, I could say the complexity of this triplet relationship is …
Also, subject and object should be foreign keys pointing to the same table, as I and a bass guitar could both be subject or object depending on the context.
If converted to a graph database, a predicate would be a hyperedge, a subject an inbound node and an object an outbound node. That leaves us with a hypergraph, which is why some, like Grakn, adopted this model. However, their solution is schemaful, so they enforce a structure a priori (which completely contrasts with my expectations for Serf, which requires a schemaless, prototype-based environment; therefore I’ll default to a property graph database, but the guys are nice).
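Here’s a tiny sketch of that SQL shape, with subjects and objects as foreign keys into one shared entities table (table and column names are mine, just for illustration):

```python
import sqlite3

# A minimal triplestore in SQL: one shared "entities" table, and a triples
# table whose subject and object both point into it, since any entity can
# play either role depending on context. Names are made up for the sketch.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE entities (id INTEGER PRIMARY KEY, label TEXT UNIQUE);
    CREATE TABLE triples (
        subject   INTEGER REFERENCES entities(id),
        predicate TEXT,
        object    INTEGER REFERENCES entities(id)
    );
""")

def entity(label):
    db.execute("INSERT OR IGNORE INTO entities (label) VALUES (?)", (label,))
    return db.execute("SELECT id FROM entities WHERE label = ?",
                      (label,)).fetchone()[0]

def add(subject, predicate, obj):
    db.execute("INSERT INTO triples VALUES (?, ?, ?)",
               (entity(subject), predicate, entity(obj)))

add("I", "have", "a bass guitar")
add("I", "play", "a bass guitar")   # same subject and object, new predicate

rows = db.execute("""
    SELECT s.label, t.predicate, o.label FROM triples t
    JOIN entities s ON s.id = t.subject
    JOIN entities o ON o.id = t.object
""").fetchall()
print(rows)
```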

Getting back to the initial picture that triggered this post, I found it really interesting to extend, or alter, the traditional subject-predicate-object schema into the schema:
subject – event – observation
It seems that, in the first case, we are working with a purely descriptive (although really flexible) schema, while this other schema lets us work on an action/event basis. That is catchy, and I couldn’t resist applying my relativism: “Oh well, it’s context-based”, and considering that there should be a bunch of other triplet schemas to be used as fundamental ways of encoding.

Making the parallel with the sensing approach, such as pain and its location-intensity-spread triplet: could I also say these are fundamentally just vectors that could be expressed by the peculiar structure of pyramidal neurons?
Those neurons are the largest of the brain’s varieties, and their soma is shaped a bit like a pyramid (hence the name), with basal dendrites (input wires) usually starting from 3 primary dendrites (up to 5 in rarer cases). It’s a bit as if 3 input trees met a single input wire (the apical dendrite) to produce a single output wire (the axon).

Schema of a pyramidal neuron, from Neupsy Key

Of course, this is just conjecture, but the triplet approach to any fundamental information, based on perception or cognition, is really tempting and sexy, as it works so well.
Although pyramidal neurons are everywhere in the brain, their location and what they’re connected to make a huge difference in their behavior.

And this is why I started to nurture, in a corner of my mind, the idea that we could add a contextual field to pick the right schema. It would act as a first field (probably as a group in a graph representation, or just another foreign key in SQL storage) that selects what the last 2 fields are about.

In summary, a quadruple store would extend any triplet as:
Context – <SourceSubject> – <TypeOfRelation> – <TargetSubject>
where the context resolves the types of the other fields. At least the idea has been written down; it will keep evolving until satisfaction.
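As a sketch, the quadruple idea fits in a few lines: the leading field names the schema, so the same storage can hold descriptive, event-based and sensing statements side by side (field names and schemas are mine, for illustration only):

```python
# A toy quadruple store: the leading "context" field selects which triplet
# schema the remaining fields follow. Names are made up for the sketch.
quads = [
    ("description", "Daddy", "has",       "a red car"),
    ("event",       "Daddy", "crashed",   "the red car"),
    ("sensing",     "pain",  "intensity", "high"),
]

SCHEMAS = {
    "description": ("subject", "predicate", "object"),
    "event":       ("subject", "event",     "observation"),
    "sensing":     ("modality", "dimension", "value"),
}

def read(quad):
    context, *fields = quad
    # the context resolves what the remaining fields mean
    return context, dict(zip(SCHEMAS[context], fields))

for q in quads:
    print(read(q))
```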


2020 Got Cool JS Experiences in Your Browser

We can say I am easily impressed by graphics running in the browser, even though my 1k+ tabs make them a bit laggy. But the latest demos I’ll share might blow your mind as well as mine.

It’s been a few years now that solutions have existed to elegantly display graphics in the browser, benefiting from new technologies such as canvas or WebGL.
As fundamental projects for the web, we can mention D3.js, which adds powerful 2D graph capabilities for data-driven documents, such as web-based visualization notebooks for data science, or three.js, which provides an efficient API to handle 3D in the browser and has, for a while now, allowed multiplayer 3D games served directly in the browser without any installation.

As I was looking for a nice way to display data as editable graphs, I caught up with what’s happening in this really cool field and what wonders front-end devs and graphic artists have come up with. Oh boy, I wasn’t disappointed!

I still have a crush on paper.js’ “tadpoles” (really, paper.js? You see tadpoles there?) but, while reading a repo using the 2D particle physics engine Newton.js, I saw they were offering an alternative to D3.js (which is not Cytoscape).
This alternative is called cola.js (or WebCola), and they really got me with their examples:

So much potential with all that cool material; web-based UIs and IDEs really have a bright future, and I’m really curious what I’ll build with this and my new interest in React.

And now, here’s the creamy part:
I also explored what was new in 2020 with WebGL + three.js, and I spent most of the experience mouth open, baffled by its beauty.

High-definition particle FX, 3D sound that you get to control, and beautiful transitions between scenes based on smooth rail exploration, all to advertise an opera. It really sold the experience; in the dark with a good headset, it is pretty intense.

The Google Chrome folks have been experimenting with WebGL as well, but this “dream” is really appealing: mixing videos and 3D through a little journey in a weird and cool world, they offer a rather unique experience that really feels like a sweet dream I won’t spoil. They even went further, open-sourcing it and embedding an editor to make your own dream world. So, so cool!

SERF, Thoughts

Your Glasses Aren’t Prepared!

For a long time, I have played around with the concepts of syntax and semantics, trying to make them practical and clearer to me.

Both are approaches to studying a message, but syntax is more about structure while semantics is more about meaning.
Syntax is easier to study, as it is built over linguistic conventions, such as Chomsky’s grammatical approach. Syntax is the part of a message that we can all agree upon, as the rules are (but haven’t always been) properly stated.

Semantics is what is meant by the message, so its domain is the thinking experience of a human mind. As we cannot model this, we cannot model semantics (or at least, I used to believe so). Therefore, semantics is what’s left of the message once the syntax is withdrawn from it.

Except that there is no clear boundary between syntax and semantics. Self-referential sentences and onomatopoeia are examples of cases where you cannot make a clean cut.
Given this inability, it didn’t seem scientifically reliable to use this paradigm, and I was therefore looking for something more objective.

I decided to use an approach that was much easier to integrate with digital systems while providing a dichotomy that better fits the problem. Adopting what’s widely accepted (like the syntax rules of a language more or less mastered by a group of people) is joining a convention to ease communication. And it’s really handy to have something like an origin / reference point (here, a commonly agreed syntax) to explain something relative to these conventions (in order to talk to a certain public with specific words and images). But outside of human conventions, we can rarely benefit from a reference point in the common cases processed through a human life.

Actually, besides really well-narrowed cases such as distinguishing a rectangle from a triangle in a picture, most interpretation problems we encounter have no reference at all that could be used to develop a comparative description.

Take the case of the sandpile vs. grains of sand problem.
How many grains of sand must you add to get a sandpile, or how many must you withdraw to just have some grains of sand?

Then I guess you also need to scale the idea of how large a sandpile is to your expectations.
No referential is universally agreed upon here, although we can extract a fuzzy idea of where the border lies between some grains of sand and a sandpile by polling people. That’s a way to extract a convention, or knowledge of representation, and answers would then be about right under some given precision / expectations.
Just like splitting syntax and semantics, this requires the work of modeling the language, then normalizing local grammar conventions and words to get a normalized language. In some languages with no neutral gender, such as French, this grammar normalization got a new impulse from gender issues regarding parity and the neutrality of nouns.

Floating Boundaries; reading information through Meta / Mesa lenses

Similarly to considering sand either as a quantity of a unit (grains of sand) or as a composite unit in its own right (a sandpile), we can say that one of the units (the sandpile) is composed of the other (the grain of sand).

I established that there can exist a situation where the grain of sand is the most fundamental component of both “some grains of sand” and the sandpile.

In a different context, the grain of sand could become the start of my exploration towards the most fundamental component. I could ask: “What is the most fundamental component of this grain of sand with regard to atoms?”, and there we would be using a language that encodes a hidden knowledge of atoms, classified and named after their number of protons, like “silicon” or “carbon”. To get more detailed, I could use a language where we even differentiate how those atoms structure themselves, such as “SiO2”, thanks to a hidden theory of how atoms handle electrons.

I could also want to find something without a given context. Let’s say I want the “most fundamental component of anything that is”; if I believe that matter is all there is, then I’ll end up looking for the most fundamental particle or field excitation in the universe. If I consider patterns to be the essence of what there is, as a descriptive or behavioral characteristic of matter, which then is just a support for information, then I’ll look at fundamentals such as mathematics, theoretical physics, computer science, etc.

With that approach, you can build your personal ordered list of what’s fundamental to you. Reversing the prioritization means looking for the most meaningful/epic case instead of the most fundamental; there too, you’ll get a personal classification.

I will call this “outer” bracket the Meta level of reading, and the most faithful one the Mesa level of reading; because metadata are data that refer to data, and mesadata are data that are referred to by data.
But those are really just two levels of reading information.
Mesa tries to be large and detailed enough to be faithful to the source material, with significance, accuracy and relevance.
Meta casts the faithful Mesa representation onto a schema connected to, or expected by, the system’s knowledge, in order to produce an enhanced interpretation of the data that is lossless but more relevant to the context of the process.

The pure-Mesa

These Mesa / Meta levels of reading can be illustrated through colors.
We can agree that, up to a certain precision, the 3-byte representation of a color is enough to encompass everything we can experience as a color; let’s call this a “pure” mesa point.
But, if we have to account for shapes, then a single pixel isn’t enough to experience much. It is still a mesa point, but not precise enough to capture the shape. We could call it “under-pure” and, by extension, an “over-pure” mesa point would be something with significantly more precision than what is relevant to capture from the source material.

Then what is the color “red”? With one byte per channel in RGB order, #FF0000 is considered red, but #FF0101 is also red, an imperceptible alteration. Is #EF2310 considered red? And what about #C24032? When does orange start?
There we are, back to our grains of sand / sandpile case: there is no clear boundary between red and orange.
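That fuzzy boundary can at least be made explicit in code. Here’s a sketch deciding “red” from the hue channel; the threshold is completely arbitrary, which is exactly the point:

```python
import colorsys

# Classify a hex color as "red" by its hue. The 20-degree window is an
# arbitrary convention: that arbitrariness is the sandpile problem in code.
RED_HUE_WINDOW = 20 / 360

def is_red(hex_color):
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
    # hue wraps around: 350 degrees is as close to red (0) as 10 degrees is
    return min(hue, 1 - hue) <= RED_HUE_WINDOW

for c in ("#FF0000", "#FF0101", "#EF2310", "#C24032", "#FFA500"):
    print(c, is_red(c))
```

Move the window and the verdicts change; polling people, as above, is just a way of averaging everyone’s window.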

Actually, the visible spectrum of orange + yellow together is not even as wide as half of the wavelength band we call green.
A mesa representation (based on physical sensors) can be really different from the meta representation (here, a symbolic system where colors are classes, with sub-classes, and a logic based on primary colors (classes), complementary mapping, …).
The same can be said about sound, but its logic is temporal instead of combinatorial.

Are there Symbols without Expression?

Let’s take the number 3.
By “taking it”, I mean writing one of its expressions. I could have written “three” or “the result of 1+2”; it would have required a bit more resolution, but the result would be the same.

Have you ever observed “Three”?
You have obviously experienced having 3 coins in your pocket or watching 3 people walking together down the street, but you’ve never experienced the absolute idea of 3, such as to say “This is him! It’s Three!”. Yet you might have said this about someone famous you encountered, like maybe your country’s president.

“Well, it’s obvious!”, you’ll claim after reading the preceding sections; “my president is pure-Mesa: (s)he’s an objective source of measurements present in the real world, so I can affirm this. But I cannot measure 3; I need to define it arbitrarily from a measure, so it might be pure-Meta, right?”

Well, almost! Your president also bears the label “President” (implicitly Human, Earthling, …). This means (s)he embodies functions that are fuzzy. There’s no embodiment of the notion of President, just people taking the title and trying to fit the function. Meaning a president is a composite type: (s)he has Mesa aspects from their measurable nature, but also Meta aspects from the schema of President (s)he’s trying to fill.

But is 3 pure-Meta?
For a long time I thought pure-Meta wasn’t a thing, because you couldn’t escape representing it, and would therefore mix it with the Mesa nature of the representation. Hence that need for every symbol to be expressed one way or another; otherwise it cannot be communicated and therefore doesn’t exist. That might be where I was wrong.

My three doesn’t need to become a thing in order to exist, per se.
Through this blog, I proposed to approach intelligence through modules, which usually have a producer (/consumer), a controller and many feedback loops. And 3 also has producers and consumers in our nervous system, specialized in recognizing or reproducing varieties of three. It follows that we can recognize, and act according to the recognition of, a property of 3 (elements) without ever mentioning it.

So, even if I cannot express the absolute idea of a number, such as 3, or a color, such as red, I can at least return acceptance, or denial, of the sought property/label (classification). This means I could at least tell whether the “red” or “3” property is true or false in a given context, without being able to express why.

Therefore 3 exists both as a testable property and as a symbolic expression defined from that property.
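A toy sketch of that distinction, assuming nothing more than Python’s iteration protocol: the tester below accepts or rejects the “3” property by one-to-one matching against an internal pattern, without ever producing the numeral (a crude stand-in for the opaque recognizers mentioned above):

```python
def accepts_three(collection):
    """Opaque 'threeness' tester: returns True or False without counting
    or emitting a numeral. It pairs items one-to-one against a fixed
    internal template, a toy stand-in for a neural recognizer."""
    sentinel = object()
    items = iter(collection)
    template = (object(), object(), object())  # the in-built pattern
    for _ in template:
        if next(items, sentinel) is sentinel:
            return False                       # scene smaller than the pattern
    return next(items, sentinel) is sentinel   # any surplus is rejected

print(accepts_three(["a", "b", "c"]))  # True
print(accepts_three([1, 2]))           # False
print(accepts_three(range(4)))         # False
```

The function answers “is this three?” while never expressing what three is; the property is tested, not stated.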

Multiple Levels of reading

That leaves us with: Mesa can be made as accurate as it is physically measurable, while Meta can be made as meaningful as individuals make it. We find again that idea of an objective (comparative) referential versus a relative (non-comparative) referential. We could also say that what is Meta has a given data, while what is Mesa has a given schema.

What becomes really interesting with that tool is being able to work not only between internal and external representations of some entity or event, but also between different levels of representation, as we grow shapes from colors and 3D movements from shapes.

I believe that modeling the knowledge of a human requires at least 2 levels of reading that can be slid across multiple built-in levels of abstraction: one to define the unit, the other to define the composite.

After all, aren’t we hearing the sound wavelength and listening to the sound envelope?


Are AGI Fans Cuckoo? or An Inquiry into AGI Enthusiasm and Delusion

There’s a regular pattern I can see among AGI enthusiasts: besides being all hyped for a human-like intelligence, they mix literally everything into their solution/discussion as correlatives, up to God-like delusions, the Universe and fundamental physics. Catchy ideas like quantum mechanics or a grand unified theory of information are common in papers from people proposing to revolutionize AI.

We could say those are just dreamers, but the pattern is more common than that. One of my favourite examples is Georg Cantor, a brilliant mathematician born in the mid-XIXth century. He brought us set theory, going further than its simple use for classification; he introduced tools to manipulate sets, such as cardinality and power sets. He was probably the first human being to experiment with the idea of multiple infinities producing multiple infinities, at a time when Infinity was still a philosophical topic and its multiplicity was hardly discussed.
Coming from a pious family, he attributed most of his genius work to God’s inspiration. Eventually, he became disillusioned as he lost his muses, felt abandoned by God, got divorced, and died depressed and alcoholic.

Closer to us, we can talk about Grigori Perelman, who solved Poincaré’s conjecture, one of the Millennium Problems (the only one solved yet out of 7!), in the early 2000s, though it took years for multiple experts to validate his work.
He received a Fields medal and the Clay Institute prize for his discovery, although he refused the $1M with these words: “I know how to control the Universe. Why would I run to get a million, tell me?”
To understand this declaration, you have to know the character. He is reclusive, distrustful, pious, and studied for a large part of his life “mathematical voids”, which led him to solve Poincaré’s conjecture. He assumes those voids can be found anywhere in the universe and, as he also considers mathematics the language of God (a more common thought in the math community than you might think), he believed he had reached God through mathematics. He even published a less-known proof regarding God’s existence after setting himself apart from the mathematics community.

Again, a great case of mathematical success, bringing highly valuable concepts from the deepest paths our brain can make into a set of verifiable propositions built on top of the mathematical literature. Although, getting that loaded into your brain (ergo: understanding it) might take you several years of studying, assuming you already have a PhD in math.

I, myself, got into this blog because Ray Kurzweil was spreading this weird nonsensical idea: the “technological singularity”. I was hugely skeptical of the tale that more power will lead to AGI, without considering the structural problems, and the fine-tuned modules, behind describing human intelligence. I thought this smart guy should really know better than to eat vitamins in order to live until the predicted 2050 meeting with AGI.
As Wittgenstein said: if a lion could speak like a human, you would never understand what he says, because he perceives the world in a different fashion (Hayek would call it a “Sensory Order”).
Although I eventually fell into those God-like intoxicating thoughts as well, from a different cause, and it took me a bit of time to get back on solid ground.

So, where do I want to go with that pattern?

Well, you already know my view on intelligenceS as a pile of modules built on top of each other with high interdependence. The same way apes who weren’t in contact with snakes didn’t evolve 3-dimensional color vision, we might be missing an intelligence there.
Think of it this way: the proverbial frog that jumps into hot water jumps right back out. But if it jumps into cold water that slowly heats up, it stays and cooks.
Are we building up to this madness the same way? Because we lack a sense of risk and moderation, do we run into that “illumination” where the secret of the human brain, God’s existence, the laws of the Universe, and the rest merge into one big delusion?
Aren’t we at risk of frying, just like the frog, as we explore what could be top-cortex ideas moderated by no other intelligence? And, just like a calculator dividing by zero, we end up in an infinite explosion of ideas encompassing the most mysterious concepts in our mind. Like a glass wall that we, stupid flies, keep knocking against because we saw the light; the most persistent ones get stunned by illuminations and other psychotic-like ideas, eventually knocking themselves out…
I personally see this intellectual phenomenon as a wall bouncing back the thoughts thrown at it. If reasoning goes too far into such a huge realm of possibilities (like tackling the thing that encompasses all our thoughts), our thought thread gets spread in nonsensical directions, catching whatever grand ideas were passing by. Maybe it’s even too large an order for us to consider, like multiple infinities nested in each other were for Georg Cantor.

Maybe, in this age of overwhelming electronic possibilities, we should be concerned enough to analyze this and assess the risk for us humans?

Brief, Thoughts

The well-mannered paradigm: a raw thought

In a well-mannered paradigm, well-educated bots are trained to minimize their “non-compliant” responses to that paradigm.
Minimizing them frees availability to check more paradigms for non-compliance. Those extra paradigms cannot be disconnected or unslotted; they have to keep living, or their processing power will shift towards more present paradigms.
Minimizing the processing power spent on low-utility paradigms allows reallocating it to high-utility paradigms. This processing power is instantaneous and parallel, like multicores that couldn’t be virtualized through acceleration.

The second assumption is that tasks are ordered by vital importance (as per selective evolution, or other mechanisms). That way, if a task requires a sudden rush of processing power, it may take it not only from the available pool, but also from less vital tasks, causing a “loss of focus”.
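This second assumption can be sketched as a toy scheduler (the task names, vitality scores and unit counts below are entirely made up for illustration): a sudden demand is served from the free pool first, then by draining the least vital tasks, which is exactly the “loss of focus”:

```python
def surge(tasks, task, need):
    """tasks maps name -> [vitality, allocated units].
    Serve `need` extra units to `task`: from the free pool first,
    then by draining the least vital tasks (loss of focus)."""
    got = min(tasks["free"][1], need)
    tasks["free"][1] -= got
    for name in sorted(tasks, key=lambda n: tasks[n][0]):  # least vital first
        if got >= need or name in ("free", task):
            continue
        grab = min(tasks[name][1], need - got)
        tasks[name][1] -= grab      # this task loses focus
        got += grab
    tasks[task][1] += got

pool = {"free": [0, 2], "vision": [9, 4], "daydream": [1, 3], "threat": [10, 1]}
surge(pool, "threat", 4)
print(pool)  # free and daydream were drained; the vital vision task is untouched
```

The ordering by vitality is what protects the important tasks; only what sits at the bottom of the hierarchy gets sacrificed.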

Let’s assume a disturbance in our well-mannered paradigm: we introduce an ill-educated bot.
Practically, our bots are two-way communicators with an internal state space. Underneath, many internal states evolve according to the input values, the internal loop values, and the observed or (internally) executed transitions.

The consequence is an overload of the well-educated bots. As the ill-educated bot’s well-mannered paradigm is different, assuming it is consistent enough to be formalized/stated, its behavior will trigger a lot of responses in the well-educated bots [zone]. This causes a loss of focus, and might trigger aggressive behavior, as the continued interruption weakens the performance provision of other tasks (rejection of the bot in order to re-establish the main focus).
The other consequence, if the disturbance persists, is a lessening of the non-compliant answers. This enlarges the current well-mannered paradigm to distinguish less between them.
[fight, flight or adapt]

Although, might some sort of distinction turn the disturbance into another paradigm to integrate?


Open Questions on “On Intelligence”

Years ago, I read On Intelligence by Jeff Hawkins as one of my first introductions to brain-derived artificial intelligence.
Far from the statistical black boxes, he had the ambition to explain a surprising pattern seen in the neocortex and to extract an algorithm from it.

There are a lot of observed patterns in the neocortex, the most scrutinized blob of fat, and most of them are functional divisions; we have known for long that the brain is split into functional areas, first from localized brain damage and, more recently, through the study of synesthesia and the technical possibilities of imagery displaying live activation patterns in the brain.

But what J. Hawkins presents in this book is something ordinary people have never heard about before:
a pattern that is not localized but repeats all across the neocortex, a pattern that can be observed in every centralized nervous system but has a specificity in humans…
And this well-sold feature is nothing less than the layering of the neocortex. Which really makes sense.

As our skin is layered, we have different functions orchestrated at different levels. That also means a 1st-degree burn is not as bad as a 3rd-degree burn on your skin. Damaging just your epidermis won’t even hurt, as it doesn’t touch your nerves or anything really alive.
And you can probably expect a similar variation in importance between neural layers. Except the neocortex doesn’t have 3 layers like the skin, but up to 6, packed in a really thin sheet.

[Figure: the six layers of the cerebral cortex, H&E stain: molecular layer (I), outer granular layer (II), outer pyramidal layer (III), inner granular layer (IV), inner pyramidal layer (V) and polymorphic layer (VI).]

This is the big announcement: we have 6 similarly organized layers all across our functional areas. Except for the motor parts, which have 4, and other mammals, which have only 4: we are the 6-neural-layer monkey!

He then states the importance of pyramidal neurons in this organization, not mentioning the functions of glial cells at all, and ends up with a hierarchical representation where the hippocampus sits at the top.
The point he insists on the most: those divisions are functionally organized in volumes, or “cortical columns”, each carrying a processing unit and communicating with other cortical columns, which explains functional localization. The extracted algorithm was named “Hierarchical Temporal Memory” and hastily led to Numenta, which produced white papers and open-source code, and seemed to have an online machine learning algorithm detecting irregularities in a stream, with a correct but not exceptional result rate.

I wouldn’t depict the brain that way myself today, but back then it left me with a deep impression and the will to clarify as much as I could.
As I tried the neural approach, I painfully learnt that we really lack functional studies of complex behavior arising from compounds of “neural elements” to go further in this direction.
Now, I have much more perspective on this book, and other open questions I would like to hear about:


One of the first things that stunned me is the absence of architectural plasticity. If hierarchical patterns are supposed to form from the “blank” neural sheet of a newborn, cortical columns are not enough to define functionally which layer requires which nodes and how the gates are shaped.
Similarly, I have never seen a machine learning algorithm optimizing both its nodes and its edges; we usually approximate the size of the hidden layers (the number of hidden features) to make it good enough for learning to happen. The brain seems to rely more on time-precision, dynamic hierarchy and redundancy.
What do you think we lose in that mitigated approach? Do we already have algorithms optimizing both nodes and edges in a learning network? How to account for the hierarchical approach?
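On that middle question, one existing answer comes from neuroevolution: NEAT (NeuroEvolution of Augmenting Topologies, 2002) evolves both the weights and the structure of a network. Below is my own toy sketch of its two structural mutations, not NEAT’s actual implementation:

```python
import random

def mutate_topology(nodes, edges, rng=random.Random(0)):
    """NEAT-style structural mutation (toy version): the learning step
    can grow the graph itself, not only tune its weights. Either add
    an edge between two existing nodes, or split an edge with a node."""
    if rng.random() < 0.5 and len(nodes) >= 2:
        a, b = rng.sample(nodes, 2)
        edges.append((a, b, rng.uniform(-1, 1)))   # add-edge mutation
    elif edges:
        a, b, w = edges.pop(rng.randrange(len(edges)))
        new = max(nodes) + 1
        nodes.append(new)                          # add-node mutation
        edges += [(a, new, 1.0), (new, b, w)]      # split the old edge

nodes, edges = [0, 1], [(0, 1, 0.3)]
for _ in range(5):
    mutate_topology(nodes, edges)
print(len(nodes), len(edges))  # each step grew the graph by one edge (or node + edge)
```

Whether such structural search scales to the brain’s dynamic hierarchy is, of course, exactly the open question.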

I am not sure I would consider the hippocampus the top of the cognitive hierarchy.
In neurogenesis, we observe one cortex building on top of another. We also have a bootstrap period, youth, required to progressively develop and integrate structured data processes. As such, the physical world is both the first processing system we build upon and our reference for testing hypotheses.
Would it be correct to consider that the hard world acts at both ends of that hierarchy, making it more of a cognition cycle?

Regarding the importance of pyramidal neurons, they are numerous especially in complex sensing areas, like those related to vision and audition, or even spatial representation. The nature of those signals is vibration, divided into much more familiar categories like a color or the pitch of a note.
And there are some regularities, as those senses can be represented in a computer with 3 dimensions. For instance, we can represent them with 3 perceptive dimensions:

sound: amplitude, frequency and duration
color: luminosity, hue and saturation
pain: intensity, location and spread
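The sound triple alone, for instance, is enough to fully synthesize a signal; a minimal sketch (the 8 kHz sample rate is an arbitrary choice of mine):

```python
import math

def tone(amplitude, frequency, duration, sample_rate=8000):
    """A pure tone entirely specified by the three perceptive
    dimensions above: amplitude, frequency (Hz), duration (s)."""
    n = int(duration * sample_rate)
    return [amplitude * math.sin(2 * math.pi * frequency * i / sample_rate)
            for i in range(n)]

samples = tone(amplitude=0.5, frequency=440.0, duration=0.01)
print(len(samples))  # 80 samples: 10 ms at 8 kHz
```

Three numbers in, a whole perceptual event out; the same economy holds for the color and pain triples.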

Could this way of perceiving our world be caused by pyramidal neurons? Does it also cause part of our subjectivity? Isn’t there a similarity between those 3 terms across the senses?

Brief, Thoughts

The Curious Task of Computation

There are really 2 shapes of reality:

The physical world, with its realm of mysterious relations and this weird capacity to be the same for all of us; and the sensory world, which is a really intimate experience, extremely difficult to describe (especially if you don’t share much DNA with us Humans).

As we’re not that good at making something objective out of the physical world with our subjective senses, we elaborated Physics: a way to describe the physical world based on how it acts upon itself. Meaning we can only correlate physical events with other physical events, building models made of wavelengths and energies, things that have no meaning in our sensory world.

From the second one, we elaborated psychology: a way to describe the sensory world based on other sensing agents. Meaning we try to correlate ideas based on interpreted experiences and derive models from them. Similarly, it’s the sensory world acting upon itself, but with the extra difficulty of accounting for a variety of opaque sensory worlds. Even if the architecture is genetically similar, we cannot see what’s inside or understand it from the physical world’s perspective.

And, in between, as a way to match those really odd worlds, there’s the curious task of computation. This domain matches physical events with sensory events bidirectionally: taking information from the physical bit to an intelligible idea or function, then back to the bit.

Given this, my point is a question you should ask yourself:
are we really fulfilling the objective we gave ourselves?


Addendum: this is inspired by, but different from, Hayek’s analysis in “The Sensory Order”. In his approach, he talks about “orders” instead of the “worlds” used in this article.
Comparatively, in Serf Layer, the “physical world” is “perception” and the “sensory world” is “interpretation”, which I believe is a much more intuitive way to expose them functionally. But Hayek wrote this in a much different era, when computers were still a post-war secret and the Tractatus Logico-Philosophicus was a recent book.