Problem statement – Is the AI market like the space market?

I wanted to discuss a bit the product-risk iteration path of my lean canvas, both to get more familiar with the tool and to better state some of the motivations behind it.

A simple analogy for the current AI market is the market for putting payloads into orbit.

You have the big players: they have been there from day one, paved the way, and are capable of producing large and expensive rockets, though they spend millions fighting technical difficulties.

You have the small entrepreneurs: they take a segment of the market and try to deliver a specific solution far better and cheaper than the big providers; some might even try to become direct competitors.

Then you have the amateurs: tips and tricks, discussions online; really small budgets but a lot of passion.

But the analogy breaks down on a decisive point: the metric is clear when you want to put something into orbit. It is not when you try to build an AI.
A successful AI is measured on standard datasets, so the expectations to be fit are the data analyst’s responsibility.
How could an AI multitask if it doesn’t even own the responsibility of fitting its own expectations?

Data is even more important than that.

What digs the gap between amateurs and professionals in rocket technology is the fuel cost. The more massive your rocket is, the more fuel you need to carry. The more fuel you have to carry, the more massive your rocket becomes. It is therefore really expensive for amateurs to put rockets into orbit.

In AI, data is the fuel. It needs to be diverse, realistic, adapted to the given case, capable of encompassing user behavior, labelled (critically important if you do supervised learning), etc. But, most important of all, you need the computing power to train over those huge datasets in a realistic time and extract rules general enough.

A good promise for the latter is the trend of transfer learning. It will help take networks as complex as AlphaGo Zero, which require dedicated and expensive hardware to train, and make “low resolution” copies of them.

It’s a bit like this: if NASA greatly improves its rockets, amateurs will be able to create cheap, almost-as-good copies of them. It’s great for hobbyists, but it doesn’t propel innovation.
Couldn’t we find a way to enable modular and diverse AI? Like embedded spaces as standards that can be spread and connected in diverse ways, a bit like we orchestrate Docker containers in modern applications.
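The “low resolution copy” idea is essentially what knowledge distillation does: a small, cheap student model is trained to imitate an expensive teacher’s outputs rather than the original labels. A minimal toy sketch, where the “teacher” is just a fixed function standing in for a trained network:

```python
import random

# Toy knowledge distillation: a tiny linear "student" imitates a
# "teacher" model's outputs; no original training labels are needed.

def teacher(x):
    # Stand-in for an expensive trained network.
    return 3.0 * x + 1.0 + 0.05 * x * x

# Student: a plain linear model y = w*x + b, fitted by SGD on teacher outputs.
w, b = 0.0, 0.0
lr = 0.05
random.seed(0)
for _ in range(2000):
    x = random.uniform(-1, 1)
    err = (w * x + b) - teacher(x)  # distance to the teacher's answer
    w -= lr * err * x
    b -= lr * err

print(round(w, 2), round(b, 2))  # both settle close to the teacher's 3x + 1 part
```

The student never sees “real” data labels, only the teacher’s behavior; that is what makes a cheap almost-as-good copy possible.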

How could we move from a rocket market to a fish and bread market?

This is quite a haunting question. At first, it seems foolish given the amount of data, expertise, computing power, and so on required to train a useful AI by today’s standards.

But, unlike rocket science, we can easily build tools that get us a bit closer to orbit. However, since the measure of “orbit” is fuzzy in AI, so are the tools we use to get there.

It means there is no standard way to ship your AI product, which is even harder for people who start with a simple high-level business process they’d like to implement and that, at some point, requires face detection.

So what about the business perspective? To move away from a rocket market, we need to render the large, specialized companies developing AI services obsolete.

One way could be to empower medium-sized companies to provide AI services just as efficiently. AI tools, both community- and GAFAM-provided, are getting to a point where creating and training deep neural networks is trivial. Architecture, data analysis, datasets and KPIs are much more of a concern today.

This is still a challenge today, and it’s a lost cause to expect such expertise from mainstream users. Another approach is standardized trained tools, like Facebook’s fastText or Google’s SyntaxNet Parsey McParseface.

Those are unspecialized steps towards orbit: just like Bootstrap for HTML, Spring for Java, or Boost for C++, they provide you with already-trained tools to build on top of.
But could we make building on top of them a common practice? Could we make those tools abstract modules to be used in BPM development?

In fact, could we make those modules so simple and abstract that they become standard pieces of any development, and widen the territory of medium-sized companies and amateurs?
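One way to picture “trained tools as abstract modules” is a common interface that lets a BPM-style pipeline chain pre-trained pieces without caring what is inside them. A minimal sketch, where all module names and the heuristics inside them are hypothetical stand-ins for real trained models:

```python
from abc import ABC, abstractmethod

class TrainedModule(ABC):
    """A pre-trained tool exposed as an interchangeable pipeline step."""
    @abstractmethod
    def infer(self, payload: dict) -> dict: ...

class LanguageDetector(TrainedModule):
    def infer(self, payload):
        # Stand-in for a real trained model such as fastText's language ID.
        text = payload["text"]
        payload["lang"] = "fr" if " le " in f" {text} " else "en"
        return payload

class Sentiment(TrainedModule):
    def infer(self, payload):
        # Stand-in for a trained sentiment classifier.
        payload["sentiment"] = "pos" if "great" in payload["text"] else "neg"
        return payload

def pipeline(steps, payload):
    """Run a payload through a chain of interchangeable trained modules."""
    for step in steps:
        payload = step.infer(payload)
    return payload

result = pipeline([LanguageDetector(), Sentiment()], {"text": "this is great"})
print(result["lang"], result["sentiment"])  # en pos
```

The business process only sees `infer(payload)`; swapping one trained tool for another becomes a configuration change rather than a development effort.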

On my own side, I’m also deeply curious about how far we can go with vector representations: could we build a new kind of algebra that handles things far more complex than numbers built from the empty set?

Brain Farming, SERF, Thoughts

What’s up for 2019?

Tamagoso – an idea that might be refined later

As mentioned in the proposition article, it’d be nice to have it developed as a dependency of a complex agent. I previously presented the ideas of Botchi and the SERF layer, which have different but compatible purposes: one provides an AGI-compliant (yeah, I know, the G is too ambitious) UI for an agent; the other is a way to process data, mimicking what you’d expect a Turing machine to be relative to an automaton. At some point I mixed those two ideas into Tamagoso: from “tamago”, meaning egg in Japanese (and an obvious reference to the 90’s game), and “sō”, meaning layer (though there’s a macron on the o and I’m not sure of the pronunciation).

The idea is then to use a UI based on the Tamagotchi design to monitor the agent’s health, as well as to instruct it from different media sources and apply different sets of rules in various contexts. A growable, pluggable, user-friendly machine that could be developed to the point of competing with others? Couldn’t we think of bot battles over mathematical proofs, given a mathematical reasoning corpus? Or in a Street Fighter way, given a game environment? The idea remains the same: evolving high-level agents for multitasking, but in a community-friendly way. The reason behind it is my belief that the world is held together not by one but by multiple truths, each resonating more or less with different populations. Behind the established facts, everyone has a cohesive approach to their interpretations, and no one can provide all the keys to interpret them. So the approach should be plural.

That’s why this project might be put alongside the platform proposition. I think we’re still lacking a lot of tools to get there, and a proper way to handle trained neural nets as standard modules is one of them.

Is AGI Architecture a thing?

I got around to making sense of Solomonoff’s induction recently and, while I’m not in awe of the thing itself, I ended up watching a wonderful interview of Marvin Minsky on the top-down approach to AGI, after considering this induction principle.
And it still resonates in my mind, as I concluded a long time ago that we are more of a middle layer, from which both top-down and bottom-up approaches are doomed to build further away from the essence of intelligence instead of dissecting it.


A top-down approach, if it’s truly feasible, would be the world of architects specialized in AI. As my craziest dream is to become an AGI architect, developing AI architecture seems to be the right path.

So could we really reduce an AGI to a machine capable of finding the smallest Turing machine that fits a task? I’m unsure that’s enough, especially as we know perception is relative to evolution, and some human-based parametrization will be required for AGI. Though that probably describes the limit of what AI can do: this induction determines the length of the simplest way to get a task done. That’s not human clumsiness and risky shortcuts to reach a “good-enough” result.

So AGI architecture still needs better requirements, but AI architecture could emerge.
Also, I see transfer learning trending here and there. Maybe that’s a sign that an architecture of embedded knowledge is going to be the next step?

Platform Embedded Spaces – First use case?

I would love to start with Tamagoso, but it’s so incredibly, freakishly difficult, and I’m still looking into the mathematical spaces and operators, as well as the concrete cases, we could get out of it. Still documenting, still thinking; I need a simpler case to start something that ambitious progressively. I’ve got a lean canvas and a quick look at the technologies, and that’s it; I’m lost and don’t even know where to start my UMLs.

So maybe a case where I can take more of a trial-and-error approach would be best. I was thinking about a really 101 agent that could still have a nice purpose, and it seems we could do something of educational value:
A student might miss some parts of a course and, while this stays unnoticed as long as teachers are still discussing notions, it snowballs quickly once advanced tasks depend on those missed fundamentals. A better way to detect it would be to look for patterns in each student’s exercises by casting their mistakes, from different source materials, into different knowledge spaces.
A hierarchy and classification of those spaces can help create a reading canvas of the overall student performance. From the most general feature spaces, we can retrieve a general student pattern over multiple high-level skills, then drill down to the more precise weaknesses. That way, we can prevent the student from snowballing into failure over initially missed notions, instead of leaving them undetected until the problem surfaces and it’s too late.
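A rough sketch of that detection idea, with entirely hypothetical skill spaces: each observed mistake is cast into a vector over fundamental skills, a student’s mistakes are aggregated into a profile, and skills above a threshold are flagged early.

```python
# Hypothetical mapping from exercise mistakes to the skills they involve
# (in a real system, this casting would come from trained knowledge spaces).
MISTAKE_TO_SKILLS = {
    "sign_error":        {"arithmetic": 1.0},
    "wrong_denominator": {"arithmetic": 0.5, "fractions": 1.0},
    "bad_factoring":     {"fractions": 0.3, "algebra": 1.0},
}

def weakness_profile(mistakes):
    """Aggregate a student's mistakes into a per-skill score."""
    profile = {}
    for m in mistakes:
        for skill, weight in MISTAKE_TO_SKILLS[m].items():
            profile[skill] = profile.get(skill, 0.0) + weight
    return profile

def flag_weak_skills(mistakes, threshold=1.5):
    """Skills whose accumulated score suggests a missed fundamental."""
    profile = weakness_profile(mistakes)
    return sorted(s for s, score in profile.items() if score >= threshold)

mistakes = ["sign_error", "wrong_denominator", "wrong_denominator"]
print(flag_weak_skills(mistakes))  # ['arithmetic', 'fractions']
```

The hierarchy of spaces would refine this: flag at the level of “algebra” first, then drill down into the sub-skills that actually drive the score.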

This could therefore be a pedagogical tool, but finding datasets and study cases won’t be easy.

Brain Farming, Thoughts

Proposition for a Platform for Embedded Spaces

This is the initial version of a white paper I’m working on, for a platform that distributes trained neural networks and grows new uses for them. The idea is that the right approach to making them functional is to grow traditional symbolic logic on top of the spaces produced by deep connectionist logic. That way, we can express business cases from symbolic logic down to connectionist representations of their states and values.


White Paper – a Platform for Embedded Spaces

A proposition to build standards on top of deep neural nets and empower AI architects towards a new wave

An inquiry into the 3rd wave

From the business perspective, there is a first wave, made of expert systems, and a second wave, made of deep learning networks; the rest is considered technology too immature for the market, still bubbling in the heads of AI scientists.

Expert systems were mostly built from huge amounts of complex, heavily documented code aimed at reproducing what humans have built from experience in a given topic.

Deep learning algorithms are black boxes that require huge amounts of carefully prepared data. They made a great leap by ditching code complexity in favor of a connectionist approach.

The first type is costly to produce, requires complex expertise and long waterfall development, is not easy to rewrite, and doesn’t easily fit modern development practices: microservices, DevOps, lean, etc.

The second cannot easily be implemented by companies; the system is simple, but great results require complexity in feature optimization and training, which tends to turn deep neural networks into services provided by a few specialized companies.

And, while the second wave is a nice improvement over the first and brought results to new AI topics, it doesn’t cover all the topics expert systems can treat, like interpretation tasks in natural language processing.

Getting the best of both waves, from the business perspective, would be to have an AI:

– That abstracts code complexity through the connectionist approach

– That keeps a transparent and modifiable architecture based on the symbolic approach

Which is already a difficult task, as the connectionist approach is fairly random and hard to make sense of (a black box), while we expect something readable, based on intelligible symbols (transparent), to be the access door to the implemented business logic.

From the technical perspective, we can draw the following lessons. For the first wave: we need loosely coupled architectures based on interchangeable modules and the ability to spread workload. But there’s no magic solution to reduce business-logic complexity when inputs and behavior require a lot of nuance and testing.
For the second wave: the trained-AI market for amateurs and small companies is nonexistent. The generalization of those networks is poor. But a huge variety of network topologies exists, along with many different frameworks to implement them, which makes this wave extremely prolific but also hard to encompass. Scientific publications keep booming in this modern gold rush. Yet we still lack an obvious ingredient: even if we solved the training problem, which is not a realistic assumption, the absence of standards blocks any communication between machines without a custom semantic interpreter. We ignore the training datasets, the results, the specificities of the trained network, and much of the important information needed to weigh which trained net best solves a developer’s issue.

That looks like a dead end for the spread of trained neural networks in the second wave.

Our proposition

Embedded spaces seem to be the key to better communication between symbolic and connectionist logics.

They link words, styles or items to vectors, as in the Word2Vec or Style2Vec approaches, and those spaces stand on their own (like a namespace). Their ability to encode and decode a symbolic value allows them to cast a representation into multiple spaces and find a projection in between (like multiple implementations of Word2Vec), which lessens the need for semantics or ontologies.

Therefore, we would like to propose a platform, much like a package manager, that will allow data scientists and AI developers to broadcast freely, or eventually to buy and sell, their trained spaces as common languages.

Those will run from containers, as we need a standard approach to encompass the plurality of deep neural nets, and their orchestration can be done on local servers with a modern solution like Kubernetes or OpenShift that handles the workload and the microservices approach.

From all those spaces, which are actually trained neural nets well defined in a common register, we can establish relationships based on symbols, just like a word has different meanings in different contexts, those contexts being the multiple spaces that can consistently apply in response to an input.

Those spaces are loaded into memory as we need them, balanced by an orchestrator, and that’s how we will use them: as the background of a working memory for machines.
On those spaces, we will draw few but rich representations of data, encapsulating all the details related to a given piece of information in its interpretation by the system. Then the working memory unloads its results to standard storage memories or user interfaces.

The point of this approach is, for an app’s data flow, to have a node where every known and relevant piece of information is available to make the best out of a new one (adjusting an interpretation or a knowledge base, for instance). To do so, trees, stored on a bus, are drawn on top of those spaces. They can represent the current world knowledge of an agent, the flow of a press article, the behavior of an individual on camera, etc. Scaling those trees down to patterns allows the system to pass compressed knowledge or, the other way around, to derive prediction trees from initial patterns and conditions.

Those spaces allow standardization but also nuance in interpretation, as we transition from programming discrete enum values, for defining states, to programming points in continuous spaces. This embeds more information than an enum, but it also produces more possibilities.

Like the well-known Word2Vec result: King – Male + Female = Queen
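That analogy can be reproduced mechanically: subtract and add vectors, then look up the nearest symbol. A toy sketch with hand-crafted 3-dimensional vectors (real Word2Vec vectors are learned and have hundreds of dimensions):

```python
from math import sqrt

# Toy embedding space; dimensions read as (royalty, maleness, femaleness).
SPACE = {
    "king":   (1.0, 1.0, 0.0),
    "queen":  (1.0, 0.0, 1.0),
    "man":    (0.0, 1.0, 0.0),
    "woman":  (0.0, 0.0, 1.0),
    "throne": (1.0, 0.5, 0.5),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def analogy(a, b, c):
    """Symbol closest to vec(a) - vec(b) + vec(c), excluding the inputs."""
    target = tuple(x - y + z for x, y, z in zip(SPACE[a], SPACE[b], SPACE[c]))
    candidates = (w for w in SPACE if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(SPACE[w], target))

print(analogy("king", "man", "woman"))  # queen
```

The decode step (nearest symbol to an arbitrary point) is exactly the kind of standard operation a platform register could expose for every hosted space.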

Designing a language to use embedded spaces

Going further than those {+, –} operators on vectors, embedded spaces could be developed to support more subtle operators, like union, intersection and exclusion, or more complex ones like integrals and derivatives. Sets of points could also have different complexities, as their relationship with the symbolic values may or may not be transitive, reflexive, symmetric, generative, etc.
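One possible reading of the set-like operators, sketched with hypothetical data: treat a “region” of a space as the set of symbols within a radius of a query point, then apply ordinary set operations to those regions.

```python
from math import dist  # Python 3.8+

# Hypothetical 2-D item space; coordinates are made up for illustration.
ITEM_SPACE = {
    "sandal": (0.1, 0.9),
    "boot":   (0.9, 0.1),
    "dress":  (0.2, 0.8),
    "parka":  (0.8, 0.2),
}

def region(space, center, radius):
    """Symbols whose vectors fall within `radius` of `center`."""
    return {s for s, v in space.items() if dist(v, center) <= radius}

summer = region(ITEM_SPACE, (0.1, 0.9), 0.3)  # the light-clothing corner
winter = region(ITEM_SPACE, (0.9, 0.1), 0.3)

print(summer | winter)  # union
print(summer & winter)  # intersection (empty here: the corners don't overlap)
print(summer - winter)  # exclusion
```

Integrals and derivatives over such spaces would need continuous structure rather than point sets, which is exactly where a dedicated language would have to go beyond this sketch.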

From another angle, Style2Vec shows us we can embed different features from the same symbols. Instead of a space that simply embeds shoes and dresses together, we can have a space that embeds dresses with the shoes that match them well.

This brings context nuances: should I group them by style or by function? Meaning I can use either a space where dresses and shoes stay apart, or a space where shoes and dresses get closer as they match better.
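The same items can therefore have different neighbors depending on which space you query. A tiny sketch with two hypothetical 2-D spaces for the same four items:

```python
from math import dist

# Two hypothetical embeddings of the same items: one groups by function
# (dresses near dresses), the other by matching style (red near red).
FUNCTION_SPACE = {
    "red_dress": (0.0, 0.0), "blue_dress": (0.1, 0.1),
    "red_shoes": (1.0, 1.0), "blue_shoes": (0.9, 0.9),
}
STYLE_SPACE = {
    "red_dress": (0.0, 0.0), "red_shoes": (0.1, 0.1),
    "blue_dress": (1.0, 1.0), "blue_shoes": (0.9, 0.9),
}

def nearest(space, item):
    """Closest other item to `item` in the given space."""
    return min((o for o in space if o != item),
               key=lambda o: dist(space[o], space[item]))

print(nearest(FUNCTION_SPACE, "red_dress"))  # blue_dress
print(nearest(STYLE_SPACE, "red_dress"))     # red_shoes
```

Choosing the context then becomes choosing which space to load, which is where the orchestrated-container approach above fits in.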

At this point, it’s interesting to consider the different use cases. The current one is a single agent that centralizes a lot of knowledge while running as threads of containers managed by Kubernetes. Its large but short-lived memory will allow it to grasp complex knowledge and simulate continuity through complex tasks, looking consistent to the user. On this continuity, it is expected to plug higher and higher logic schemes to grow a hierarchy of interpretations.

I’m not sure yet what the business cases could be, besides extending current system capabilities, but I’d like to explore it as a bot-assembling project: getting higher behavior, like empathy from casting sentiment analysis into an « emotion space », and developing higher cognition into trees of interconnected spaces that match the multiple contexts of a press article. Most importantly, I’d like to define how it should interact with a user interface, a service layer, a database and a knowledge base.

But, in the end, this will require a common language for those modules to express what they can do, another language to interoperate them, and probably others still for container-management behavior or BPM scheduling. I still need to grasp a lot of that topic and reduce its apparent complexity, so it’s an ongoing subject that will evolve through versioning.

The platform to distribute those embedded spaces could be financed by a percentage on purchasable spaces. The Docker platform could host the containers for now, and extra information such as licenses and standard API descriptions, as well as the Docker container URL, should be provided through the platform register.
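As a sketch of what one register entry could contain, every field name below is hypothetical rather than an existing standard; the point is only that license, API description and container location live together in the register:

```json
{
  "name": "example/word2vec-en-news",
  "version": "0.1.0",
  "kind": "embedded-space",
  "license": "MIT",
  "price": 0,
  "container": "docker.io/example/word2vec-en-news:0.1.0",
  "api": {
    "encode": "POST /v1/encode with a symbol, returns its vector",
    "decode": "POST /v1/decode with a vector, returns the nearest symbol"
  },
  "training": {
    "dataset": "description and checksum of the training corpus",
    "dimensions": 300
  }
}
```

With entries like this, a developer could weigh which trained space fits their problem before pulling a single container.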