Monday, October 24, 2016

Simple traversal of compact graphs

def traverse(self, id):
    if self.count == 1: return id          # a single enclosed block: this is the end point
    i = 0
    while i < self.count:
        i = GET(id).traverse(id + i)       # GET: iterator call returning the node at index id

This is not tested. The algorithm is a complete traversal of the graph, basically visiting each location in an array of nodes. As it traverses, it accumulates the index into the array by accumulating the block counts of each node. GET is still to be decided, but it is a call to the iterator that returns the node pointer at index id. Now, the structure of this thing forces a compact graph because it uses enclosed counts, and the if statement enforces an end point. Or, to put it more bluntly, the insert and append functions on this array will have to be much more complex to make this descent so simple.
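Something like this, also untested, assuming the whole graph sits in one flat Python list and GET is just an index into that list; one deviation from the snippet above is that a leaf returns the index just past itself so the accumulation advances. The names are mine, not a settled design.

# A minimal sketch, assuming the graph is stored as one flat list of nodes
# and GET(id) is just an index lookup into that list.
NODES = []                         # the linear array of nodes (hypothetical store)

def GET(id):
    return NODES[id]               # iterator call: node pointer at index id

class Node:
    def __init__(self, count):
        self.count = count         # enclosed block count, 1 for a leaf

    def traverse(self, id):
        if self.count == 1:
            return id + 1          # end point: step past this single node
        i = id + 1                 # first enclosed node sits right after this one
        while i < id + self.count:
            i = GET(i).traverse(i)     # descend, accumulating the enclosed counts
        return i

# Example: a root enclosing two leaves -> counts [3, 1, 1]
NODES.extend([Node(3), Node(1), Node(1)])
print(NODES[0].traverse(0))        # -> 3, one complete pass over the array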

But the simplicity of it is also the code verification: it gets a complete traversal with minimal exposure of any node to variables other than the local variables linked within its enclosed sub-block. We limit the bots to this path and we get a ton of protection from malignant bots.

Making the trading bot operational

I have to separate graph theory from the operational point of view, so let's go with the operational.

Traders send a request to the trading site; the trader wants to 'rent' an operator and run the graph. The trader gets, in return, one or more graphs, possibly including the singleton graph, a chunk of cash. What kind of graph? What does the internal betting graph look like when viewed as a 6-bit graph, how much compression, give me back a 6-bit structural model of the traders. This is the act of insider revelation: you are getting info that is rare, costly, and protected by the Smart Cards.

The trader can also rent the trading bot, take some pure cash and find a bin. Once rented, that bot does not return until mark to probability, about a second later, and your bet is completed.

The big graph is the structural model of very wealthy people and institutions. We will have analysis tools; we can spot Soros when he shorts. But Soros has no choices any more, he is bound by the honesty of the smart card, and Soros's money has got to go there.

Mathematically, we have graph convolution: out = g1 convolve g2, carried over the protected bot network. We can get linearity here, everything is priced. That means the convolution operator can be complete, it covers all the extremes and prices them. Look at the return graph, skewed like hell? Someone will make a buck when that graph renormalizes.
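A toy version of the idea, assuming each graph has already been reduced to a quantized return distribution; the numbers and the crude skew check are made up for illustration, not the site's actual operator.

# A toy sketch: treat each graph as a quantized return distribution and
# convolve them; a skewed result is the signal someone can trade against.
# The bins and weights here are illustrative assumptions only.
import numpy as np

g1 = np.array([0.1, 0.2, 0.4, 0.2, 0.1])      # trader graph, as bin probabilities
g2 = np.array([0.05, 0.15, 0.3, 0.3, 0.2])    # site graph, same quantization

out = np.convolve(g1, g2)                     # the combined, fully priced graph
out = out / out.sum()                         # renormalize to a probability graph

bins = np.arange(len(out))
mean = (bins * out).sum()
skew = (((bins - mean) ** 3) * out).sum()     # crude skew measure of the return graph
print(out.round(3), "skew:", round(skew, 3))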

This is precisely what chart analysis does, it is looking for structural changes. But what we can do today is actually get the structure for you.

Watch your voting machines

A 2006 classified U.S. diplomatic cable obtained and released by WikiLeaks reveals the extent to which Smartmatic may have played a hand in rigging the 2004 Venezuelan recall election under a section titled "A Shadow of Fraud." The memo stated that "Smartmatic Corporation is a riddle both in ownership and operation, complicated by the fact that its machines have overseen several landslide (and contested) victories by President Hugo Chavez and his supporters."
"The Smartmatic machines used in Venezuela are widely suspected of, though never proven conclusively to be, susceptible to fraud," the memo continued. "The Venezuelan opposition is convinced that the Smartmatic machines robbed them of victory in the August 2004 referendum. Since then, there have been at least eight statistical analyses performed on the referendum results."
"One study obtained the data log from the CANTV network and supposedly proved that the Smartmatic machines were bi-directional and in fact showed irregularities in how they reported their results to the CNE central server during the referendum," it read.
In another section titled "At Least Corruption," the author of the memo wrote that even if "Smartmatic can escape the fraud allegation, there is still a corruption question."
Smartmatic had claimed it provided machines to Arizona, California, Colorado, Washington, D.C., Florida, Illinois, Louisiana, Michigan, Missouri, New Jersey, Nevada, Oregon, Pennsylvania, Virginia, Washington, and Wisconsin — it has since pulled that information off its website.

Pontifical peninsular politicians porting impaired pride pour out impracticalities

Guardian: Britain’s biggest banks are preparing to relocate out of the UK in the first few months of 2017 amid growing fears over the impending Brexit negotiations, while smaller banks are making plans to get out before Christmas.
The dramatic claim is made in the Observer by the chief executive of the British Bankers’ Association, Anthony Browne, who warns “the public and political debate at the moment is taking us in the wrong direction”.
A source close to the Brexit secretary, David Davis, said he and the chancellor, Philip Hammond, had last week sought to offer reassurance that they were determined to secure the status of the City of London.
However, the government’s stated intention to take control of the freedom of movement into the UK is widely recognised among officials to be a hammer blow to any chance of retaining the present terms of trade for banks, particularly given the bellicose rhetoric of major politicians on the continent.
The so-called passporting rights for members of the single market allow UK-based banks to offer financial services to companies and individuals across the EU unimpeded, yet the French president, François Hollande, is among those who have insisted in recent weeks that hard Brexit will mean “hard negotiation” and Britain will need to “pay the price” of leaving.

What about bankers?

They run the trading sites, and the bankers' association guarantees the pure cash Smart Card. In other words, they take the same stand Tim Cook took: no human tampering with bot trades, which are not observable except by contract with a Smart Card.

Bankers have a tough choice because government central bankers are none too pleased with pure cash. It is a choice for banks: either the socialist path, or come with us. The technology is arriving.

The Basic idea and how it works on the trading sites

Here is the node class, untested; it will not even compile yet. But it has three methods: add, delete, and traverse with some function. Notice it moves whole blocks around the block array.

Now, multiple bots will be traversing the graph even as the graph is being modified by any bot, including the current ones. Hence, the bots wait their turn to step through the graph, each bot leaving the graph normalized. We keep indices that get any bot back to where it was after being kicked off the priority queue. The indices are updated, but that code is not shown here.

Bets on the betting graph involve multiple trading bots who want to run the graph, but the traders soon learn to keep the queue small and watch the run-time block chain structure for an opening. Sound familiar? Try node.js, it operates very similarly. But there is a key difference: the traders take queue size seriously.
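Roughly, the turn-taking could look like this, assuming a simple queue of bots where each bot saves the index it reached before yielding; the Bot class and the step budget are my own illustration, not the site's scheduler.

# A rough sketch: bots take turns stepping through the graph, each saving the
# index it reached so it can resume after being kicked off the queue.
# Bot, STEP_BUDGET, and the deque discipline are illustrative assumptions.
from collections import deque

STEP_BUDGET = 4                       # steps a bot may take per turn

class Bot:
    def __init__(self, name):
        self.name = name
        self.index = 0                # where this bot resumes in the linear array

    def step(self, graph):
        # take a bounded number of steps, leaving the graph normalized
        self.index = min(self.index + STEP_BUDGET, len(graph))
        return self.index < len(graph)   # True if this bot still has work left

queue = deque([Bot("maker"), Bot("taker")])
graph = list(range(10))               # stand-in for the linear node array
while queue:
    bot = queue.popleft()
    if bot.step(graph):
        queue.append(bot)             # not done: back on the queue, index preserved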

Consider a trading site with an ongoing betting graph. It offers users the option of traversing the current graph and collecting a block structure of it to 6 bits of precision. So the betting site has pre-announced the betting precision, all bets quantized to 1/(2^5), I think, or about 3% precision.
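The pre-announced quantization is about this simple, assuming prices live on the unit interval and bets snap to the 1/(2^5) grid; the helper names are mine.

# A minimal sketch: snap a bet price to the pre-announced 1/(2^5) grid.
# quantize() and PRECISION_BITS are illustrative names, not the site's API.
PRECISION_BITS = 5
BIN = 1.0 / (1 << PRECISION_BITS)      # 1/32, about 3% precision

def quantize(price):
    # round to the nearest bin so every bet lands on the published grid
    return round(price / BIN) * BIN

print(quantize(0.337))                 # -> 0.34375, i.e. 11/32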

Who guarantees that the trading site is not keeping secret books to a higher precision? Two things. 1) It is essential we have the honest smart card so the code at the betting site is verifiable by all our bots. Remember, the smart card can run honest functions over the graph; no humans allowed except what is published by contract. You have a guarantee of no cheating.

Let's add detail
You want to make a trade; you have your price and the betting site. Once your trade goes on the queue, you will run the graph and find the bin that matches your trade. The trading site bot maintains market liquidity: it always has less liquidity at risk than the market can stand, or else some sub-component of the graph restructures. Who covers the losses and gains of the site market maker? The owner of the smart card verifying the site. The bot's liquidity precision may be better than the trader's, but it is published; we know beforehand what the 'currency' risk is.

class node:
    min = None                          # class attribute: lowest-frequency node seen so far

    def __init__(self, prev=None):
        self.count = 0                  # number of nodes in this enclosed block
        self.link = None                # local link further down the array
        self.nodes = []                 # the enclosed sub-block of child nodes
        self.freq = 0                   # reference frequency, used by the Huffman variant
        self.prev = prev                # enclosing parent block (assumed stored at creation)

    def delete(self):
        prev = self.prev
        prev.count = prev.count - self.count
        # shift the remaining nodes up over the deleted block
        for i in range(self.count):
            prev.nodes[i] = prev.nodes[i + self.count]

    def add(self, next):
        prev = self.prev
        prev.count = prev.count + next.count
        prev.nodes.append(next)
        if node.min is None or self.freq < node.min.freq:
            node.min = self             # track the minimum-frequency node (Huffman use)

    def traverse(self, n, fun):
        if n is None:
            return None
        for i in range(n.count):
            if fun(n.nodes[i]):         # default: descend into the enclosed sub-block
                self.traverse(n.nodes[i], fun)
            else:                       # otherwise follow the node's local link
                self.traverse(n.nodes[i].link, fun)
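For orientation, this is how the class might be exercised once it runs, with a made-up predicate that just counts visits; the wiring is an assumption, not the final API.

# Illustrative use only: build a tiny two-level graph and traverse it with a
# predicate that counts visits. The construction order is an assumption.
root = node()
root.count = 2
child_a, child_b = node(root), node(root)
root.nodes = [child_a, child_b]

visits = []
def count_visit(n):
    visits.append(n)
    return False              # never descend further in this toy example

root.traverse(root, count_visit)
print(len(visits))            # -> 2, one visit per enclosed node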

Like a byte code is really a Huffman encoding

Simply a minimum Huffman encoding of source code, defined for generic processors. And a compiler is just that: it builds the expression graph, as a block structure, expanding all the expressions in the source. Along the way, at any given node, the optimizer notices a frequency of reference and 'quantizes' that variable to an internal register. It has quite a bit of freedom since today's processors have a lot of internal memory.
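A toy version of that frequency-driven 'quantization', assuming we just count variable references in an expression and map the most frequent ones to a small set of registers; the register count and the tokenizing are invented for illustration.

# Toy sketch: count references in an expression block and 'quantize' the most
# frequently used variables to registers. Register count and tokenizer invented.
from collections import Counter

NUM_REGISTERS = 2

def assign_registers(expression):
    refs = Counter(tok for tok in expression.split() if tok.isidentifier())
    hot = [name for name, _ in refs.most_common(NUM_REGISTERS)]
    return {name: f"r{i}" for i, name in enumerate(hot)}

print(assign_registers("a = a + b * a - c + b"))   # -> {'a': 'r0', 'b': 'r1'}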

Block chain structure and our red/green smart card lights

What is going on, how does the bot know when to warn us?

When a price is difficult to quantize, it does not quite fit your normal bin sizes. Your bot is maintaining your finite block structure all the time. Its nature is to minimize the length of the block chain; in fact, that is equivalent to minimizing redundancy when per-transaction costs are small. So suddenly making a larger-than-usual purchase causes the bot to restructure the graph to include rare but large purchases. The Huffman-encoded graph becomes tilted, slanted, and it will requantize.

The bot has access to block chain structure from the trading sites; the betting graph is visible. A quick 'block structure' compare will convolve your finite graph with the trading site's and look at the resulting block structure, measuring their 'difference'. It will warn you or make recommendations based on that difference. We are gonna do graph algebra? Great stuff, an almost singularity.
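The compare could be as simple as this, assuming both block structures have been flattened into block-count histograms over the same bins; the distance measure and the warning threshold are placeholders of my own.

# Minimal sketch: compare your block-count histogram with the trading site's
# and warn when the 'difference' is large. Threshold and distance are placeholders.
def block_difference(mine, site):
    # normalize each histogram, then take the total variation distance
    m = sum(mine); s = sum(site)
    return 0.5 * sum(abs(a / m - b / s) for a, b in zip(mine, site))

my_blocks   = [12, 30, 40, 14, 4]      # your finite block structure, per bin
site_blocks = [10, 20, 25, 25, 20]     # the betting site's visible block structure

diff = block_difference(my_blocks, site_blocks)
print("warn" if diff > 0.15 else "ok", round(diff, 3))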

Compact graph structure, part 1 of 2

I use the term a lot. Compact graphs were the basis for storing JSON graphs into SQLite. It means the graphs are spanning trees without loops, and each graph holds a sequence count and is composed of a sequence of graphs.

G is a (count,G) or (0,Null)

It means: 1) the Turing structure has no loops, so a descent down the graphs leads to a fixed point; and 2) efficient memory retrieval is linear. So algorithms which maintain the graph by deleting and adding nodes are always deleting and adding graphs, and must maintain the block count. But the counts simply add and subtract, so no separate node-counting pass is needed.
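A small sketch of G = (count, G...), assuming a graph is just a node carrying its enclosed count and a sequence of sub-graphs, with the count maintained on insert and delete; the class is illustrative only.

# Illustrative sketch of G = (count, [G...]) with the enclosed count maintained
# on insert and delete, so counting never requires a full descent.
class Graph:
    def __init__(self, parent=None):
        self.count = 1              # this node plus everything enclosed below it
        self.subs = []              # the enclosed sequence of sub-graphs
        self.parent = parent

    def insert(self, g):
        # splice g in under self; every enclosing count just adds g.count
        self.subs.append(g)
        p = self
        while p is not None:
            p.count += g.count
            p = p.parent

    def delete(self, g):
        # remove g; every enclosing count just subtracts g.count
        self.subs.remove(g)
        p = self
        while p is not None:
            p.count -= g.count
            p = p.parent

root = Graph()
child = Graph(parent=root)
root.insert(child)
print(root.count)                   # -> 2: the counts add, nothing is recounted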

What else?
The length of a compact graph is a close approximation to the Huffman weighted sum, mainly because we get a near match between the array size and the number of bits required to hold the enclosed symbol set. These concepts lend themselves to code optimization and verification by examining the code as a 'block chain'. Find the characteristics of typical 'block chains', shapes commonly found in byte codes, for example, and you have a generic 'graph compare'.

Then we talked about block structure compare when we talked bot security at the smart card foundry. The foundry is doing a graph compare: how closely does the verification block structure match the issuance block structure when generating hidden encryption keys? New math. I will get graph add, delete, and link all working for some generic node, and make it so the bot can carry a function along with it as it traverses the block chain. Then I make that function the Huffman operator, and it will construct the proper graph out of a linear array of symbol nodes. From then on it is a matter of trying different functions; some functions just take stats of the block chain, whatever is held in the node value field.
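Carrying a function along the traversal looks roughly like this, assuming the block chain has been flattened into a linear array of symbol nodes; the stats collector shown is just one of the 'different functions', not the Huffman operator itself.

# Rough sketch: the bot carries a function along as it traverses a linear array
# of symbol nodes. The stats collector here is only one example of such a function.
from collections import Counter

symbols = ["buy", "buy", "sell", "buy", "hold", "sell"]   # node value fields (assumed)

def traverse_with(nodes, fun, state):
    for value in nodes:                # step straight down the linear array
        state = fun(state, value)      # the carried function does the real work
    return state

def frequency_stats(counts, value):
    counts[value] += 1                 # stats over whatever the nodes hold
    return counts

print(traverse_with(symbols, frequency_stats, Counter()))
# -> Counter({'buy': 3, 'sell': 2, 'hold': 1}), the raw material for a Huffman build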

Compact graph structure


Now, when your bot is traversing the graph, the next-node algorithm becomes:

def traverse(g):
    for i in range(g.count):
        if function(g.nodes[i]):        # default: keep stepping down the array
            traverse(g.nodes[i])
        else:                           # a condition requires following the local link
            traverse(g.nodes[i].link)

I doubt this is exact, I wrote it here on the fly. Now, all of g is a linear array, so the nodes list of any g is simply a structure pointer into the array below.

But notice the difference: we do not decide to go left or right; instead, the default is to always go down the array unless a condition requires following a link. The links should always be local, simply jumping down the array to some point within the 'count' of nodes. But this requires care when altering the graph, which the bot does on the fly.

Hence, node delete simply dumps the enclosed block of nodes and shifts everything after it up. Enclosed counts, being nested, are self-contained blocks.

Do this when you have many retrievals and fewer updates. You spend the time maintaining block counts on update or delete, but access is a zip once the counts are maintained.

In the Huffman encoder, the nodes contain symbol value and frequency, and the graph becomes organized to minimize path length. You compare the user symbol to the symbol at each node, descending the graph and accumulating the code word. The adaptive Huffman holds the typical value and its recent frequency. A comparison is not exact; a user symbol will be tagged to a node if it is within precision of the typical value.

Hence the difference between the adaptive and the complete Huffman: the adaptive needs a local linearity in the symbols, it needs a price, because its precision is finite.
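The inexact tagging, sketched, assuming each node holds a typical value and a recent frequency, and an incoming price is tagged to the first node within the published precision; the node list and the precision value are illustrative.

# Small sketch: tag an incoming price to a node when it falls within precision of
# the node's typical value, then bump that node's recent frequency.
# The node list and PRECISION value are illustrative assumptions.
PRECISION = 1.0 / 32               # the published quantization step

nodes = [                          # (typical value, recent frequency)
    {"typical": 0.25, "freq": 7},
    {"typical": 0.50, "freq": 3},
    {"typical": 0.75, "freq": 1},
]

def tag(price):
    for n in nodes:
        if abs(price - n["typical"]) <= PRECISION:   # inexact comparison
            n["freq"] += 1
            return n
    return None                    # no match: the graph will have to requantize

print(tag(0.27)["typical"])        # -> 0.25, within 1/32 of the typical value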

What about block chain?

Look, a complete block chain can be organized as a compact graph. That means it is Turing complete, and is a finite spanning tree with no loops. Hence there exists a symbol set that represents the original sequence that built the graph. That symbol sequence is something like the order of the transactions, their index, I think. Whatever value is used in the exact comparison, that sequence exists and can be derived backwards, according to my proof, mainly based on the fixed-point structure of the compact graph. How about that math!
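What 'derived backwards' could look like, assuming each node stores the index of the transaction that created it; descending the loop-free graph and sorting by that index recovers the original sequence. The structure here is invented for illustration, not the proof itself.

# Tiny sketch: recover the building sequence from a loop-free compact graph,
# assuming every node carries the index of the transaction that created it.
def collect(g, out):
    out.append((g["index"], g["value"]))
    for sub in g["subs"]:              # the descent always terminates: no loops
        collect(sub, out)
    return out

graph = {"index": 0, "value": "genesis", "subs": [
    {"index": 2, "value": "tx-b", "subs": []},
    {"index": 1, "value": "tx-a", "subs": []},
]}

ordered = sorted(collect(graph, []))   # sort by creation index: the original sequence
print([v for _, v in ordered])         # -> ['genesis', 'tx-a', 'tx-b']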

For AI, this means that if the AI bot can derive some local linearity on the nodes, then it can find the theory of the nodes: the dual structure (its approximate expression graph, its grammar) that must have created the compact graph.

I have to do this traversal method to test the Redneck debugger; it is another of its necessary tests. So I will get this basic snippet going in a day or two, post the method, and go on from there.