Brain Farming

Brain Farming I: Why?

When I started digging into AI, I learned it was mostly about classifying tree leaves or digits. Although recent AI projects have given awesome results, they require expertise and computing power that common people don’t have. I was really all about the model, seeking the next step that could ignite a new era where those functions could be correlated and assembled into a form of being.

I then dug into automata, neurology, psychology, philosophy, economics, biology… damn, insects can be interesting!
I melted my brain thinking about everything as an intelligent system and ended up swallowing so many things that my thoughts were a mess. I put those obnoxious thoughts aside and started this blog to sort everything out cleanly with descriptions, models and experiments, and to share documentation and personal insights.

I still couldn’t keep it up. Seeing everything as an intelligent model, while lacking the knowledge to discriminate between those ideas, I had too much to write about without being able to focus on a model I could clearly see and thoroughly test, adjust and compare. Drafts just accumulated without producing any new posts, and I don’t believe in the quality of the previous ones.

When this interest started, it was for the fun idea of making small AIs, like a smart Tamagotchi or a talking virtual assistant, not for adjusting weights to fit a heavily pre-processed labeled dataset. Once I understood what it was all about, since I wasn’t a “singularity believer” and I’m still not, I had no choice but to give up.

I started digging in another direction, even describing my ideal job as “brain farmer” for the pun. I knew it was a bit desperate, since smarter people are paid to work on the issue, but I was counting on a small providential insight or successful model that could be “it”.
Though, as you know, this blog never got any traction.

Going Back to the Fun

I had to put this hobby aside for a while. But recently it occurred to me that I could still manage to focus on this domain and get some actual results.

The main tragedy of my approach was that it was barely defined, congruent with anything and nothing at the same time. I thought more and more of neural nets as puzzle pieces to be plugged into a more general, central interface, learned or hard-coded, that could get the most out of those network combinations. But how do you test and define all the combinations that could or couldn’t work, and the subsets of neural nets to use?

This insight led me to talk about “brain farming”. Obviously, you don’t use the same network for shape recognition and for word processing; you probably use a CNN and an RNN. I then tried to find a generalized way to generate network topologies, but even adjusting the hidden layers is already quite hard. I still have some thoughts on the matter, such as systems with a dynamic internal state generating networks, but it still doesn’t seem to be the right way, given how hard it is to model.

Then it seemed more obvious: I had to take inspiration from computers. After all, those networks are just a bunch of memory encoding an internal function, acquiring processing power from a central unit; the network is processed, not the neurons.

I had already accepted the idea of specialization and standardization, as it is a requirement if we want to program these AIs, but it looks like it will appear in the way you combine those networks: growing specialized combination maps and processes to achieve a given goal based on the processed data flow. A motherboard designed to use the neural nets’ outputs, or other data flows, in the way most meaningful to the goal it’s designed for.

The whole point is abstraction. I could program something as weird as:

if (!vision.has{blue line} && !vision.has{cat face})
    state{joy}.increment();

Here, vision and state are two functionalities of my motherboard: the first covers image and video processing, the second an internal state.
What sits between the braces {} is a module, like a neural network. It’s a black box; I don’t know how it does it, but it’s supposed to satisfy the “blue line” or “cat face” slots on my motherboard by reading my data input.
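
As a rough sketch, assuming hypothetical Vision and State classes and plain Python callables standing in for trained networks, the motherboard idea could look like this:

# A minimal sketch; Vision, State, plug() and has() are my own invented names,
# and the lambdas are dummy stand-ins for fine-tuned neural nets.

class Vision:
    """Image/video functionality: named slots filled by black-box detectors."""
    def __init__(self):
        self.slots = {}

    def plug(self, name, detector):
        self.slots[name] = detector            # e.g. a CNN wrapped in a callable

    def has(self, name, frame):
        return self.slots[name](frame)         # the board doesn't care how

class State:
    """Internal state functionality: simple named counters."""
    def __init__(self):
        self.values = {}

    def increment(self, name):
        self.values[name] = self.values.get(name, 0) + 1

# Dummy detectors standing in for trained networks reading real pixel data.
blue_line = lambda frame: "blue line" in frame
cat_face  = lambda frame: "cat face" in frame

vision, state = Vision(), State()
vision.plug("blue line", blue_line)
vision.plug("cat face", cat_face)

frame = "an empty scene"                       # placeholder for an image input
if not vision.has("blue line", frame) and not vision.has("cat face", frame):
    state.increment("joy")

print(state.values)                            # -> {'joy': 1}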

That should be way more fun to program!

Operating on a New World of Data

Though the example is not meaningful, the approach could definitely empower us. Working on an OpenCV application, I’m once again doing what others have done before: learning heavy stuff just to identify a simple line or a rectangle.
Why is there no plug-and-play module for that?

We already live in a world where data and data streams are easy to obtain, but the technology to get meaning out of them is out of reach. Despite some great frameworks and APIs, we’re still not in a “package manager” era of AI or, more generally, of data processing.

Therefore, starting in September when my OpenCV project is over, I’ll give this blog a focused goal: studying the problem of designing those motherboards, which I’d prefer to call “brains”, in order to become a brain farmer.

Those brains would be generalized in a sense, as every printed circuit board is, while allowing another way to program and use data. They are not physical, which allows really complex machine-designed architectures, and our CMOS chips are whatever you want to plug and play. But if a module is a machine learning function, it may already be fine-tuned, and modules could exist for so many purposes that designing a package manager system is also an essential step to make those brains useful.
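
Purely as a speculative sketch, here is what a package-manager-style lookup could feel like; the registry contents, the module names and the fetch() helper are all invented for illustration:

# A toy registry mapping module names to ready-to-use, fine-tuned callables.
REGISTRY = {
    # name -> (version, factory returning a plug-and-play module)
    "vision/blue-line": ("1.0", lambda: (lambda frame: "blue line" in frame)),
    "vision/cat-face":  ("2.3", lambda: (lambda frame: "cat face" in frame)),
}

def fetch(name):
    """Resolve a module name to a fine-tuned, plug-and-play callable."""
    version, factory = REGISTRY[name]
    print(f"installing {name} {version}")
    return factory()

cat_face = fetch("vision/cat-face")
print(cat_face("a cat face in the corner"))    # -> True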

But everything else will come soon…

 

If this gave you any insight, or you want to start a discussion on the matter, please do share in the comments.
On my side, two months feel really long to wait before starting to work on this idea, so before leaving WordPress for OpenCV, here’s a little extra.

Teaser

To move neural networks from discrete time to continuous time, we’ll need a sampling frequency, limited by the available computing capability, but its usefulness can vanish as we look for rarer events. We can balance the computing power this way, but we can’t make the cost fade away just by varying the frequency. We need another variable for that, so let’s define its boundaries:

  • G (Gatherer): acts purely on a planned frequency, transmitting its signal independently of any trigger
  • H (Hunter): acts purely on a trigger; it reacts only if the signal is meaningful enough to be transmitted

Between those two, there are several shades of behavior. Still, we have to give them a computing reality: a G behavior is triggered at clock frequency, so its output can be null, but it gives a constant response; it’s driven by the system’s internal clock. The H behavior is the opposite tendency; it’s driven by the state of the data. To know the data state, either we rely on G units that switch on our H units, or we let both read the data at maximum computing frequency and differentiate the response in a ternary way (null, 0, 1).
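
As a minimal sketch, assuming invented Gatherer and Hunter classes, a ternary output (None, 0, 1) and a plain loop standing in for the system clock, the two behaviors could be contrasted like this:

# G fires on every clock tick; H only speaks when the signal is meaningful.
# The class names, detector and data stream are assumptions for illustration.

class Gatherer:
    """G: answers at clock frequency, even if the answer is 0."""
    def __init__(self, read):
        self.read = read

    def tick(self, data):
        return 1 if self.read(data) else 0      # constant response, possibly 0

class Hunter:
    """H: transmits only when triggered; otherwise stays silent."""
    def __init__(self, read):
        self.read = read

    def tick(self, data):
        return 1 if self.read(data) else None   # None = nothing transmitted

detects_motion = lambda data: data.get("motion", False)

g = Gatherer(detects_motion)
h = Hunter(detects_motion)

stream = [{"motion": False}, {"motion": True}, {"motion": False}]
for frame in stream:                            # the system clock
    print(g.tick(frame), h.tick(frame))
# -> 0 None / 1 1 / 0 None : G keeps answering, H only reacts on its trigger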
