How about that! I am making a standard, I'll implement the basic types, and that's all I want. What am I doing?
Well, the semantic triple is subject, predicate, object. That predicate decomposes into the graph-layer byte and the BSON type byte, but the graph layer manages the array/document switch, because that switch forms the basis of the graph. BSON defines the rest. The SQL holds the BSON data in the key. So I take the link, which BSON and graph share, map that into the type, and take a strlen from SQL on the key. The machine still has not touched the key! But the translator now has the BSON type; use that in a case statement to get the right count of bytes from key to BSON output stream. 40 lines of code, tops.
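The case statement above might look something like this sketch. All names here are my own invention, not from any real library; the type bytes are the standard BSON element tags, and only the fixed-size types are shown, with variable-length types flagged for separate handling.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: map a BSON element type byte to the number of
 * payload bytes to pull from the SQL key. Fixed-size types only;
 * -1 flags the length-prefixed types (string, document, array, ...)
 * which need their own handling. */
static int bson_fixed_len(uint8_t type) {
    switch (type) {
        case 0x01: return 8;   /* double */
        case 0x08: return 1;   /* boolean */
        case 0x09: return 8;   /* UTC datetime (int64) */
        case 0x10: return 4;   /* int32 */
        case 0x12: return 8;   /* int64 */
        case 0x0A: return 0;   /* null: no payload */
        default:   return -1;  /* variable length: handled elsewhere */
    }
}

/* Copy one fixed-size element payload from the key column into the
 * BSON output stream. Returns bytes written, or -1 for a
 * variable-length type. */
static int emit_element(uint8_t type, const uint8_t *key, uint8_t *out) {
    int n = bson_fixed_len(type);
    if (n < 0) return -1;
    memcpy(out, key, (size_t)n);
    return n;
}
```

The switch is the whole translator for the fixed-size cases, which is why the estimate of a few dozen lines seems plausible.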
What's missing? There should be a clear way to compute the bounded document byte count, and to keep accurate byte counts all the way through the chain. In other words, the machine has to reconstruct the entire document from SQL so it can put the byte count in the top field. No streaming. Hmmm.. I am OK with that; if we send 100 kbytes of graph, it'll fit.
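The "no streaming" consequence can be sketched as a buffer-then-back-patch step. This is my own illustration, not the author's code: BSON's top field is a little-endian int32 total length that counts itself, the elements, and the trailing zero byte, so the elements get buffered first and the count is patched in at the end.

```c
#include <stdint.h>

/* Write an int32 in little-endian order, as BSON requires. */
static void write_int32_le(uint8_t *p, int32_t v) {
    p[0] = (uint8_t)(v);
    p[1] = (uint8_t)(v >> 8);
    p[2] = (uint8_t)(v >> 16);
    p[3] = (uint8_t)(v >> 24);
}

/* Finish a buffered document whose elements occupy buf[4 .. 4+elem_len):
 * append the 0x00 terminator, then back-patch the total byte count into
 * the top field. Returns the total document length. */
static int32_t finish_document(uint8_t *buf, int32_t elem_len) {
    int32_t total = 4 + elem_len + 1;  /* length prefix + elements + terminator */
    buf[4 + elem_len] = 0x00;
    write_int32_le(buf, total);
    return total;
}
```

Since the length prefix depends on everything that follows it, the whole document has to sit in memory before the first four bytes are final, which is exactly the reconstruct-then-send trade-off described above.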