Tamagoso – an idea that might be refined later
As mentioned in the proposition article, it would be nice to develop it as a dependency of a complex agent. I previously presented the ideas of botchi and the serf layer, which have different but compatible purposes: one provides an AGI-compliant (yeah, I know, the G is too ambitious) UI for an agent, the other is a way to process data, mimicking what you'd expect from a Turing machine for automata. At some point I mixed those two ideas into Tamagoso: from "tamago", meaning egg in Japanese, and an obvious reference to the 90s game; and from "sō", meaning layer (there's a macron on the o and I'm not sure of the pronunciation).
The idea is then to use a UI based on the Tamagotchi design to monitor the agent's health, to instruct it from different media sources, and to apply different sets of rules in various contexts. A growable, pluggable, user-friendly machine that could be developed up to the point of competing with others? Couldn't we imagine bot battles over mathematical proofs, given a mathematical reasoning corpus? Or Street Fighter-style ones, given a game environment? The idea stays the same: evolving high-level agents for multitasking, but in a community-friendly way. The reason behind it is my belief that the world is held together not by one but by multiple truths, each resonating more or less with different populations. Beyond the established facts, everyone has a cohesive approach to their own interpretations, and no one can provide all the keys to interpret them. So the approach should be plural.
That’s why this project might sit alongside the platform proposition. I think we’re still lacking a lot of tools to get there, and a proper way to handle trained neural nets as standard modules is one of them.
Is AGI Architecture a thing?
I recently got around to making sense of Solomonoff’s induction and, while the thing itself didn’t leave me in awe, it led me to this wonderful interview of Marvin Minsky on the top-down approach to AGI, in light of this induction principle.
And it still resonates in my mind, as I concluded a long time ago that we sit more in a middle layer, from which both top-down and bottom-up approaches are doomed to build further away from the essence of intelligence instead of dissecting it.
A top-down approach, if it’s truly feasible, would be the realm of architects specialized in AI. Since my craziest dream is to become an AGI architect, developing AI architecture seems like the right path.
So could we really reduce an AGI to a machine capable of finding the smallest Turing machine that fits a task? I’m unsure that’s enough, especially since we know perception is relative to evolution, and some human-based parametrization will be required for AGI. That’s probably more a description of the limit of what AI can achieve: this induction determines the length of the simplest way to get a task done. That’s not the human clumsiness and risky shortcuts that get us to a “good-enough” result.
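To make the “smallest Turing machine that fits a task” idea concrete, here’s a toy sketch: enumerate programs over a tiny invented instruction set in order of increasing length, and return the first one that reproduces a target sequence. The instruction set and tasks are made up for illustration, and a real Solomonoff inductor is uncomputable; this only shows the shortest-program-first ordering.

```python
from itertools import product

# Tiny made-up instruction set (an assumption for illustration):
# '+': add 1, '-': subtract 1, 'd': double.
INSTRUCTIONS = "+-d"


def run(program, steps):
    """Run the program repeatedly from 0, emitting the value after each pass."""
    x, out = 0, []
    for _ in range(steps):
        for op in program:
            if op == "+":
                x += 1
            elif op == "-":
                x -= 1
            elif op == "d":
                x *= 2
        out.append(x)
    return out


def shortest_program(target, max_len=4):
    """Enumerate programs by increasing length; return the first match."""
    for length in range(1, max_len + 1):
        for prog in product(INSTRUCTIONS, repeat=length):
            if run(prog, steps=len(target)) == target:
                return "".join(prog)
    return None  # no program of length <= max_len fits the task


print(shortest_program([1, 2, 3]))  # -> "+"
print(shortest_program([2, 4, 6]))  # -> "++"
```

The enumeration order is exactly what makes it “Occam-flavored”: among all programs that fit the data, the shortest one is found first.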
So AGI architecture still needs better requirements, but AI architecture could emerge.
Also, I see transfer learning trending here and there. Maybe it’s a sign that architectures of embedded knowledge are gonna be the next step?
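As a toy illustration of what transfer learning reuses: a “pretrained” feature extractor stays frozen and only a small new head is fitted on the target task. Here the frozen features are a fixed random projection standing in for source-task weights, and the dataset is synthetic; both are assumptions of mine, not a real pretrained net.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen feature extractor: pretend these weights came from pretraining
# on some source task (here, just a fixed random projection).
W_frozen = rng.normal(size=(16, 64)) / 4.0


def features(x):
    """Map raw inputs into the frozen embedding space."""
    return np.tanh(x @ W_frozen)


# Small labeled dataset for the new target task (synthetic).
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Fit ONLY the new head (ridge regression) on top of the frozen features.
Z = features(X)
head, *_ = np.linalg.lstsq(Z.T @ Z + 1e-2 * np.eye(64), Z.T @ y, rcond=None)

pred = (Z @ head > 0.5).astype(float)
accuracy = (pred == y).mean()
```

The design choice mirrors the usual fine-tuning recipe: the expensive representation is reused as-is, and only the cheap task-specific layer is trained.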
Platform Embedded Spaces – First use case?
I would love to start with Tamagoso, but it’s so incredibly freaking difficult, and I’m still exploring the mathematical spaces and operators, as well as the concrete cases, we could get out of it. Still documenting, still thinking; I need a simpler case to work my way up to something that ambitious. I have a lean canvas and a quick look at the technologies, and that’s it; I’m lost and don’t even know where to start my UMLs.
So maybe a case where I can take more of a trial-and-error approach would be best. I was thinking about a really 101 agent with a nice purpose, and it seems we could do something of educational value:
A student might miss parts of a course and, while the gap stays unclear as long as teachers are still discussing the notions, it snowballs quickly once advanced tasks depend on those missed fundamentals. A better way to detect it would be to search for patterns in each student’s exercises, by casting their mistakes from different source materials into different knowledge spaces.
A hierarchy and classification of those spaces can help create a reading canvas of the overall student performance. From the most general feature spaces, we can retrieve a broad student pattern over multiple high-level skills, then drill down to the more precise weaknesses. That way we can prevent the student from snowballing into failure from notions missed early on, instead of leaving the gap undetected until the problem surfaces and it’s too late.
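A minimal sketch of that drill-down, assuming an invented two-level skill hierarchy and made-up attempt data (all names here are hypothetical): cast each mistake to a leaf skill, aggregate error rates up the hierarchy, read the weakest high-level skill first, then the weakest leaf under it.

```python
from collections import defaultdict

# Hypothetical two-level hierarchy: leaf skill -> parent skill.
HIERARCHY = {
    "fractions": "arithmetic",
    "carrying": "arithmetic",
    "factoring": "algebra",
    "linear-equations": "algebra",
}


def error_rates(attempts):
    """attempts: list of (leaf_skill, correct) pairs -> error rate per node."""
    counts = defaultdict(lambda: [0, 0])  # node -> [errors, total]
    for skill, correct in attempts:
        for node in (skill, HIERARCHY[skill]):
            counts[node][0] += 0 if correct else 1
            counts[node][1] += 1
    return {node: errs / total for node, (errs, total) in counts.items()}


def weakest(rates, nodes):
    """Among the given nodes, return the one with the highest error rate."""
    return max(nodes, key=lambda n: rates.get(n, 0.0))


# Made-up exercise history for one student.
attempts = [
    ("fractions", False), ("fractions", False), ("fractions", True),
    ("carrying", True), ("factoring", True), ("linear-equations", True),
]
rates = error_rates(attempts)

# High-level reading first ...
top = weakest(rates, ["arithmetic", "algebra"])   # -> "arithmetic"
# ... then drill down to the precise missing notion within it.
leaf = weakest(rates, ["fractions", "carrying"])  # -> "fractions"
```

A real version would replace the hand-tagged skills with the learned knowledge spaces the text describes, but the read-top-then-drill-down logic stays the same.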
This could therefore be a pedagogic tool, but finding datasets and study cases won’t be easy.