These are basically standards for representing serialized data and ascribing meaning to the various block sequences in the format: JSON, the frame languages from Stanford. This has been going on for decades, even as SQL dominated and still does. Why?
SQL represents data the way the computer understands data: column names for rows containing 32-bit integers and byte strings, all under a table name. Any meaning beyond that has to be supplied by humans, outside the computing channel.
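To make that concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names are invented for illustration; the point is that the engine sees only names, integers, and byte strings, while all the meaning rides along in comments the machine never reads.

```python
import sqlite3

# To the engine this table is just a name ("trades"), column names, and
# storage classes. That "px" is a dollar price and "qty" a share count
# lives only in human heads and comments, outside the computing channel.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE trades (
        id  INTEGER PRIMARY KEY,  -- the machine knows: integer
        sym TEXT,                 -- the machine knows: byte string
        px  REAL,                 -- "dollars per share" is human knowledge
        qty INTEGER               -- "number of shares" is human knowledge
    )
""")
con.execute("INSERT INTO trades (sym, px, qty) VALUES (?, ?, ?)",
            ("XYZ", 10.5, 100))
# The query works on names and types alone; no semantics required.
print(con.execute("SELECT sym, px * qty FROM trades").fetchall())
```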
But the ontologies have one thing in common: serialized format, nested order, and forward-looking grammars. So the great breakthrough is making the computers understand serialized formats. I mean, get a hardware architecture that can spin SQLite3 as an ontology engine, with variable-length rows and automatic pointer updates. Then the machine understands one more thing, and the machine can spin and maintain the serialized form at super rates. Or get embedded with SQLite3 and make serialized formats run naturally deeper in that machine.
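One hedged reading of that idea, as a toy sketch: each node of the nested serialized form becomes a variable-length row, and the parent column is the pointer the engine keeps consistent for us. The schema and labels here are my own invention, not anything standard.

```python
import sqlite3

# Sketch: SQLite3 spun as a tiny "ontology engine". Nested order is held
# in parent pointers; the foreign key is the engine's automatic pointer
# maintenance (deleting a node would cascade away its whole subtree).
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("""
    CREATE TABLE node (
        id     INTEGER PRIMARY KEY,
        parent INTEGER REFERENCES node(id) ON DELETE CASCADE,
        label  TEXT    -- variable-length payload, like a token of the format
    )
""")
# Nest: animal -> dog -> terrier
con.executemany("INSERT INTO node (id, parent, label) VALUES (?, ?, ?)",
                [(1, None, "animal"), (2, 1, "dog"), (3, 2, "terrier")])
# A recursive query walks the serialized nesting back out in order.
rows = con.execute("""
    WITH RECURSIVE path(id, label, depth) AS (
        SELECT id, label, 0 FROM node WHERE parent IS NULL
        UNION ALL
        SELECT n.id, n.label, p.depth + 1
        FROM node n JOIN path p ON n.parent = p.id
    )
    SELECT label, depth FROM path ORDER BY depth
""").fetchall()
print(rows)  # [('animal', 0), ('dog', 1), ('terrier', 2)]
```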
The software underlying the web knows one more thing, Shannon encoding, which it calls intelligent caching. What is the imposing graph that keeps repeating? Can the machine mechanically perform some algebra on the ontology structures and compute a much better encoding (caching)? When you find humans and computers agreeing on structures of directed graphs that represent an idea, they have both captured it.
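A toy illustration of that algebra, under assumptions of my own: count which serialized fragments keep repeating and hand the frequent ones the shortest codes. That is the Shannon intuition behind caching in miniature; the fragments below are made up.

```python
from collections import Counter

# Frequent structure should cost fewer bits: find the repeating fragments
# and assign the shortest cache codes to the most common ones.
fragments = [
    "(person (name) (address (city)))",
    "(person (name) (address (city)))",
    "(person (name) (address (city)))",
    "(order (item) (price))",
]
freq = Counter(fragments)
codes = {}
for i, (frag, _) in enumerate(freq.most_common()):
    codes[frag] = format(i, "b")  # 0, 1, 10, 11, ... by descending frequency
encoded = [codes[f] for f in fragments]
print(codes)
print(encoded)  # the repeating graph now costs one short code per use
```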
That means the ontology machines have to talk to each other; they have to spin the output of one convolution directly off to another machine. The graph machines will have autonomous bots cruising the graph, looking for meaning.
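As a purely speculative sketch of such a bot: walk a directed graph and flag neighborhood shapes it has seen before, the kind of repeated structure two machines could agree on. The graph and the signature function are invented stand-ins, not anyone's protocol.

```python
from collections import Counter, deque

# A bot cruising a directed graph, flagging repeated local structure.
graph = {
    "a": ["b", "c"], "b": ["d"], "c": ["d"],
    "x": ["y", "z"], "y": ["w"], "z": ["w"],
    "d": [], "w": [],
}

def signature(node):
    # Shape of the one-step neighborhood: sorted out-degrees of successors.
    return tuple(sorted(len(graph[n]) for n in graph[node]))

seen = Counter()
queue = deque(["a", "x"])
visited = set()
while queue:
    node = queue.popleft()
    if node in visited:
        continue
    visited.add(node)
    sig = signature(node)
    seen[sig] += 1
    if seen[sig] > 1:
        print(f"bot: node {node!r} repeats shape {sig}")
    queue.extend(graph[node])
```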