Thursday, November 10, 2011

The innate grammar of the G machine

In the land of G ontologies we can define some Version 1.0 rules of grammar for the universal G machine. Starting:
The G machine performs the graph convolution on any two subgraphs of G: convolve(subgraph1(G), subgraph2(G)).
Let's simplify terminology. Convolution is the character @, and X, Y are subgraphs of G. G is the single variable in the land of G; it is a descending ontology graph of all known text with no loops, in Version 1.0. Mechanically, on the Internet this means all of G has attachment points to G machines, which are uniquely identified by their URL pointers. Hence, all of G is traversable.
The operator X = subgraph(G) produces a graph with valid G grammar.
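The model above can be sketched in a few lines of code. Everything here is a hypothetical stand-in (the names GGraph, subgraph, and convolve, and the choice of overlap as the convolution, are my assumptions, not a specification of the real G machine):

```python
# A minimal sketch of the Version 1.0 model: G is a loop-free descending
# ontology graph, and subgraph() selects a node set that still obeys the
# G grammar. All names here (GGraph, subgraph, convolve) are hypothetical.

class GGraph:
    """A tiny stand-in for G: nodes plus descending (parent -> child) edges."""
    def __init__(self, nodes, edges):
        self.nodes = set(nodes)
        self.edges = {(a, b) for a, b in edges if a in self.nodes and b in self.nodes}

def subgraph(g, keep):
    """The X = subgraph(G) operator: restrict G to 'keep', preserving edges."""
    return GGraph(set(keep) & g.nodes, g.edges)

def convolve(x, y):
    """A placeholder graph convolution @(X, Y): here, the overlap of the
    two subgraphs with the edges both of them retain."""
    return GGraph(x.nodes & y.nodes, x.edges & y.edges)

g = GGraph({"root", "a", "b", "c"},
           [("root", "a"), ("root", "b"), ("a", "c")])
x = subgraph(g, {"root", "a", "c"})
y = subgraph(g, {"root", "a", "b"})
print(sorted(convolve(x, y).nodes))  # ['a', 'root']
```

The point is only that any subgraph produced this way is again a graph in the same grammar, so @ can be applied to its own results.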

So, rule 1, symmetry: @(X, Y) = @(Y, X). By definition, the G machine is symmetrical with respect to X and Y.
Rule 2, distribution, should hold: @((X, Y), Z) = (@(X, Z), @(Y, Z)).
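The two rules can be checked on a toy model where subgraphs are plain node sets and @ is intersection; that is one convolution consistent with both rules, though the real G convolution is left unspecified here:

```python
# Toy check of rule 1 (symmetry) and rule 2 (distribution) with node sets
# as subgraphs and intersection as @. Purely illustrative assumptions.

def conv(x, y):
    return x & y  # toy @(X, Y)

def conv_pair(pair, z):
    # @((X, Y), Z) expands by distribution into (@(X, Z), @(Y, Z)).
    return tuple(conv(p, z) for p in pair)

X, Y, Z = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

assert conv(X, Y) == conv(Y, X)                    # rule 1
assert conv_pair((X, Y), Z) == (conv(X, Z), conv(Y, Z))  # rule 2
print(conv(X, Y), conv_pair((X, Y), Z))
```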


How does all this happen when X and Y are attached to different G machines? Well, the links are transparent to the grammar, so a link that is a URL jump is just a flavor of link; to the grammar it is simply a link. How does the G machine deal with a URL-flavored link?
@(X, LongJump:Y) = LongJump:@(X, Y). It packs up X and ships it to the G machine at LongJump. At LongJump, the G machine keeps the ascendant link on Gout and returns the resultant graph, by custom.
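The long-jump rule above can be sketched as message passing. The class and function names below, and the use of node sets with intersection as the convolution, are illustrative assumptions:

```python
# A hedged sketch of the long-jump rule @(X, LongJump:Y) = LongJump:@(X, Y):
# when Y lives behind a URL-flavored link, the local G machine packages X
# and ships it to the remote machine, which evaluates the convolution and
# returns the resultant graph. All names here are hypothetical.

def conv(x, y):
    return x & y  # toy local convolution on node sets

class RemoteGMachine:
    """Stand-in for the G machine reachable at a LongJump URL."""
    def __init__(self, url, y):
        self.url, self.y = url, y

    def receive(self, packaged_x):
        # Remote side evaluates @(X, Y) and returns the resulting graph.
        return conv(packaged_x, self.y)

def convolve_with_link(x, link):
    # The link is transparent to the grammar: ship X across it.
    return link.receive(x)

remote = RemoteGMachine("http://example.org/g", {2, 3, 4})
print(sorted(convolve_with_link({1, 2, 3}, remote)))  # [2, 3]
```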

OK, to make the terminology simpler, let me use some brackets. What about:

@([count=0.(k1,k2,k3).(count++)],LongJump:Dictionary)
OK, what gets sent to LongJump, and what happens to the variable count? In this case, since @(X, Y) = @(Y, X), the G machine will always act as if the LongJump is the first thing seen, and it will package up the entire search string and ship it off. Where is the variable count maintained? This is where patents fly in the software business, but the solution has been worked out.

In the world of G, Version 1.0, the subgraph X in @(X, Y) is unique and has only one instance. So regardless of how G machines pass around subgraph components of X or Y, variables in those subgraphs will appear consistent as distinct nodes on G. Hence why I emphasize Version 1.0: it does not support parallelism, and I have deferred the problem of multiple, simultaneous traversals of X. The G machine will simply ship the trailing portion of X when Y has a long jump. The remote machine will return Gout and the residual of X with complete variable updates.
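The single-instance rule can be sketched concretely with the counting example above. The remote dictionary, the data layout for X, and the function name are all assumptions of mine, not the actual G machine protocol:

```python
# A sketch of the Version 1.0 single-instance rule: variables inside X
# (like 'count') have exactly one instance, so the machine currently
# holding X updates them, and the remote side returns both Gout and the
# residual of X with the variable state intact. Hypothetical names.

def remote_convolve(x_residual, dictionary):
    """Remote G machine: match X's keys against its local dictionary,
    updating X's variables in place, then return (Gout, residual X)."""
    gout = [k for k in x_residual["keys"] if k in dictionary]
    x_residual["vars"]["count"] += len(gout)  # count++ per match
    return gout, x_residual

x = {"keys": ["k1", "k2", "k3"], "vars": {"count": 0}}
gout, x_back = remote_convolve(x, {"k1", "k3", "k9"})
print(gout, x_back["vars"]["count"])  # ['k1', 'k3'] 2
```

Because X has only one instance, the count that comes back is the same variable that was shipped out, just updated; no merging is needed in Version 1.0.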

In Version 2.0, when parallelism is allowed, we will have aggregation functions to aggregate the returned values of the same, duplicated variable. Consider:

@( [k1,k2,k3].count++,[LongJump1,LongJump2] )
Lacking parallelism, the G machine executes in sequence:
LongJump1:@([k1,k2,k3].count++,*),LongJump2:@([k1,k2,k3].count++,*)
Here I use the wildcard * to replace what is left of Y after the original convolution is broken. The actual wildcard it inserts will be subject to wildcard grammar, a simple but separate subject.
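The sequential expansion above can be sketched by threading the single count variable through both long jumps in order. The two remote dictionaries and the function name are illustrative assumptions:

```python
# A sketch of the non-parallel Version 1.0 expansion: with two long jumps
# in Y, the machine evaluates LongJump1 then LongJump2 in sequence,
# threading the same single-instance 'count' variable through both.

def remote_convolve(keys, count, dictionary):
    matches = [k for k in keys if k in dictionary]
    return matches, count + len(matches)  # count++ per match

keys, count = ["k1", "k2", "k3"], 0
results = []
# LongJump1:@([k1,k2,k3].count++, *), then LongJump2:@([k1,k2,k3].count++, *)
for dictionary in ({"k1", "k2"}, {"k2", "k3"}):
    gout, count = remote_convolve(keys, count, dictionary)
    results.append(gout)
print(results, count)  # [['k1', 'k2'], ['k2', 'k3']] 4
```

In a Version 2.0 world the two jumps would each carry their own copy of count, and an aggregation function (summing, in this example) would reconcile the duplicates afterward.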

So, there you have it, the fundamental rules of the G machine. It knows about the distributive property of the grammar and can work that using the G graph object model.
