You CANNOT Define Semantics

I couldn’t find a way to approach a definition of the semantics of a sentence.

It is like the holes in cheese: they can’t be fully described by their essence, and even though the cheese is not important to what’s inside the holes, you lose your holes if you lose your cheese.

I couldn’t define semantics properly, but I could find a lot of “iso-”semantic examples, by which I mean “you can change the sentence but not the meaning” cases. This can actually be done easily and at different levels of abstraction:

  • Alternative spellings and grammar (character level)
  • Synonyms (word level)
  • Sentences and expressions in a context (sentence level)
  • A more global context (this one is still blurry to me)
  • Character nuance (you can stylize a character’s voice without altering the semantics)
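The first two levels are easy to mechanize. Here is a minimal sketch, where the spelling and synonym tables are invented for illustration (they are not drawn from any real lexicon): each rewrite changes the surface form of the sentence while, intuitively, leaving its meaning intact.

```python
# Hypothetical variant tables for two "iso-semantic" levels.
SPELLINGS = {"colour": "color", "grey": "gray"}   # character level
SYNONYMS = {"quick": "fast", "big": "large"}      # word level

def rewrite(sentence: str, table: dict) -> str:
    """Replace each word found in the table; the meaning should survive."""
    return " ".join(table.get(word, word) for word in sentence.split())

s = "the quick grey fox is big"
s1 = rewrite(s, SPELLINGS)  # "the quick gray fox is big"
s2 = rewrite(s1, SYNONYMS)  # "the fast gray fox is large"
```

Of course, deciding whether the meaning really survived is exactly the part no table can give you; that is the hole in the cheese.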

So different sentences can convey the same meaning. From there you see that defining semantics is like removing the cheese to better see the holes. Still, you need to cut finely around them; otherwise you have nothing left to talk about.

[edit 2 weeks later] Actually, you somehow could define semantics, but you need to do it on top of some mechanism or machine, in a transitive way. For instance, you can define a programming language to encompass the syntax of some commands. Then you extract the Abstract Syntax Tree and, finally, match the tree structure against the commands of a machine that executes them (say, a Petri net, because those are great, but there are much more modern and complex alternatives). That way you have extracted the semantics of a program, but the semantics is bound to the underlying machine’s domain. Meaning your machine has the same limitation as a trained neural net: it only interprets sentences according to the states registered in it. The other way out would be through combinatorics?
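That transitive pipeline can be sketched in a few lines. The mini-language, its two commands and the counter machine below are all invented for illustration (a Petri net would play the machine role in the same way): the syntax is parsed into a (flat) AST, and the “semantics” of a program is simply the final state the machine reaches.

```python
# 1) Syntax: a program is a sequence of commands, e.g. "inc inc dec".
def parse(source: str) -> list:
    """Turn source text into a flat abstract syntax tree (a list of nodes)."""
    tokens = source.split()
    for token in tokens:
        if token not in {"inc", "dec"}:
            raise SyntaxError(f"unknown command: {token}")
    return tokens

# 2) Machine: a counter with two registered transitions. The semantics of
#    any program is bounded by this table -- nothing outside it can be
#    interpreted, which is exactly the limitation discussed above.
MACHINE = {"inc": lambda state: state + 1, "dec": lambda state: state - 1}

# 3) Semantics: match the AST nodes against the machine's commands.
def interpret(ast: list, state: int = 0) -> int:
    for node in ast:
        state = MACHINE[node](state)
    return state

print(interpret(parse("inc inc dec")))  # -> 1
```

A command like "reset" would be rejected at the parsing step: the meaning of a sentence only exists relative to the states the machine has registered.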

So… this question is surely not a case closed by such a small blog post. It might be better stated as: “Could we define a non-projective semantics?” (meaning a semantic formalization that exists independently of any implementation, system or machine).
As this blog targets the pair human brain / natural language, that question might tell us whether we can build a machine that understands like humans without needing to simulate a human brain and a human education.


[edit 4 weeks later] Seeing the absence of reactions to this post, I might have gotten something wrong or misunderstood something.