The poison removes the part of the brain related to human interaction, a golf-ball-sized but extremely important core of the mid-brain. They can never be cured, only controlled. Recovery rates are less than 5%.
The national cost?
Under Obamacare, the 2 million meth vampires in the nation will cost 1 trillion dollars. These disabled souls will wander the nation, being shoved from place to place as communities arm themselves against the threat. Every community in America will have to have emergency anti-meth police squads, with cattle prods, ready to move these folks down the road.
Gay Pride?
San Francisco is known as the birthplace of the national menace. Everywhere in America that gays congregate, 40% of them will be the walking dead. If we want to crush the menace, every citizen knows it is maintained wherever gays congregate. Why are gays so proud of the fact that half of them are brain dead?
The Marin County Mafia
Ever wonder why SF politicians are so bizarre, like Gavin Newsom or Pelosi? 20% of their voters are brain dead. SF is rejecting their vampire class; they have promised them 100 billion in health care which SF does not have, so they shove the problem down to Fresno, CA.
Associated costs
Meth vampires never pay rent. They cause 50% of the crime in America. They have killed entire families, engaged in mass murder, and are responsible for 25,000 dead Mexicans. They steal copper from the street lights here in Fresno, and they spread the disease, deliberately targeting young kids, catching them in a moment of confusion, then promptly amputating the most important part of the brain.
Fresno CA
The Mexican drug cartels and Pelosi's vampire squads are converging on the city. Already we have had to allocate 6.5 million in emergency funds. The county sheriff is overwhelmed with eviction warrants, and the real estate industry is shutting down many of the rentals in areas where meth vampires congregate.
Parents
If you are not aware of this threat, then your children are doomed.
Monday, April 30, 2012
Friday, April 27, 2012
The Json Join Kernel
I haven't posted anything lately. I have been working the issue of saving a fetched object for later reuse across multiple fetches during a graph join. The problem was the kernel fetching a net object thousands of times; not a problem in the lab, but it needed to be handled. So I keep Json trees around in memory, as a database, and they have their own cursor context. That left me pondering namespaces for some time, and how much I want to upgrade the local symbol table. So, mostly pondering.
No luck on getting ISP service. Fresno is too chaotic at the moment, in the middle of a statewide restructuring. We are being overrun by out-of-town meth heads, and recently jail-released meth heads. This is happening right when we are on the edge of bankruptcy. As a result, the ISP providers fear churn, and I fear it too. So the ISP providers and I have an agreement: do nothing in Fresno until the politicians sort out the welfare mess they created.
Tuesday, April 24, 2012
Software uploads soon
My json left join is nearly complete, at 230 lines of code.
My json object model:
result:{result graph, left graph, right graph}
That object model will run straight through the left join routine (20 lines) and execute forms like:
Get html output from "search words" over a search graph: html:@"search words",SearchGraph
I will post the code soon, still having network problems.
Other news: Fresno is in a battle for its life on the meth issue, being overrun by the zombies. Dangerous.
Monday, April 23, 2012
What about my Internet connection?
The ISP model is all screwed up at the moment; Comcast cannot deliver. I have cut all my connections to copper! I get more software done that way.
A json join
Result = Left join Right
When Result, Left and Right are parsed Json expression trees, then that code is 20 lines. The entire left join module, including the Ugly handlers, is less than 300 lines of code. That is all the code needed to execute complex queries over the web.
The complexity is in the underlying machines that get and put Json elements. I embody that complexity in the Mach type (changing the name from Graph). A Mach performs operations on nested stores, like Append, using:
mach->exec(Mach * mach, int MethodId, Element * data);
So that method call carries the complexity, not the engine itself. I have the adapter for sqlite3 working great.
I also have this working:
int main() { Mach mem; Element nodes[100]; mem.db = nodes; MemInit(&mem); }
Now the Json engine will treat my array of elements like any other Json expression tree. (Using the @ alternative.)
The left join starts with:
*@*,* // join the wild cards and write the result to the wild card (loop forever)
Then goes to:
parse@,console,*
The left join is the chaining point on the world's virtual json nested stores. The parse machine inherits the machine underneath it, using the inherited methods to emit parsed Json.
The Json left join is a great model; the industry will adopt it.
Friday, April 20, 2012
Json engine, up and running with Sqlite3!
What a breeze to program! The model of a generalized Json join was brilliant; it clears everything up. I have netio working, sqlite3 working, and the memory database working.
But still offline so I cannot post code!
Wednesday, April 18, 2012
My model of the NoSql Json database engine
As my readers know, I am moving from the lab beta to the production model of the open architecture semantic processor. I move ahead because Moore's Law dictates that we can execute Json expression trees right out of the database. I like Berkeley DB, but I am sticking with Sqlite3. The interface between the Json grammar and the database is clean, 30 lines of code actually. Let us talk about the new programming interface:
Here is Hello World:
#include "machine.h"
const Key helloKey = {5, "hello"};
Element helloElement = {",", 1, helloKey};
Join join = InitJoin;
int main() {
    Graph console;
    Graph mem;
    InitMem(&mem, 0);
    mem.exec(&mem, Append, &helloElement);
    InitConsole(&console, 0);
    join.result = console.cursors;
    join.right = mem.cursors;
    left_join();
}
And the Typeface equivalent:
Console@"Hello",*
The central interface object is the Graph, under the new semantics. The graph is the link between the cursor and the installed database. The graph has the exec method and a nested cursor list.
A cursor is built on the row pointer in each element:
typedef struct RowSequence { int row; int total; int offset; } RowSequence;
Then we get the cursor:
typedef struct { RowSequence rdx; Graph * graph; } Cursor;
// and the Join
typedef struct { Cursor result; Cursor left; Cursor right; } Join;
So everything runs as if by cursors crawling exoskeletal expression graphs.
The file out method:
#define EX(obj,method,e) obj->exec(obj, method, &e)
int file_out() {
    Element n;
    Graph * obj;
    Graph file;
    Cursor * c;
    obj = join.left->graph;
    EX(obj, StepFetch, n);
    InitFile(&file, n.key.bytes);
    c = join.result;
    join.result = file.cursors;
    left_join();
    join.result = c;
}
The whole principle is that stack operations have been replaced by graph crawling. Here is opening an sql table:
Graph sql;
InitSql(&sql, "TableName");
Then try:
join.left = sql.cursors;
left_join();
So based on that and on my parser, I have built a simple Json engine; well, simple is a modest description. Here it is:
int left_join() {
    while (left.cursors->row < left.cursors->total)
        while (right.cursors->row < right.cursors->total)
            do_left_operator();
}
Everything else is very small snippets of code that obey the Lazy J match, pass and collect grammar for nested stores. All the Sqlite3 code went into the sqlite3 adapter layer, and is almost working again. My Json unit comes complete with the memory database. I have also defined the 9-line Yes database, which does:
*@*,* // Agree with anything!
The Yes database is real; it makes the join act like two initialized counters.
Wednesday, April 11, 2012
NoSql report
OK, I am still offline. ATT is gone and Comcast refuses to deliver the modem we ordered. I have concluded that both companies want me to get into my unregistered truck and drive across town to pay my bill or order a modem at their little store. It is the Apple handheld wireless thing that is causing hysteria among the ISP CEOs. I might just call Comcast and cancel; buying a handheld with a good wireless plan is making more sense.
On the NoSql code, I decided we really don't need an official source code; we just need the simple interface between JSON and the underlying database engine. So, I will publish the interface I am using (when I am online). A single-page interface should be enough for the industry, as we are not really writing new software so much as providing a thin layer to utilize Moore's Law.
End of Report
Monday, April 9, 2012
I wrote software!
While being offline one might think I would be writing lots of software. No, I fight the urge with thousands of video games. But I did work the theory. I looked over the NoSql scene, factored in Moore's Law, and figured the industry was relayering the software. Layers that were previously compiled can now be interpreted, and layers interpreted can be executed from the database.
So, I split the software, burying the Sqlite3 layer deep, and it has a simple interface:
typedef struct {
    int (*exec)(int method, Element * data);
    void * db_pointer;
    void * graph_list;
} DB_INTERFACE;
// the graph_list is a nested set of DB cursors, really, set up for nested stores. Anything in the Json grammar will happen using these graph cursors. Element is the new typedef name for a graph node.
Then I define two or three required methods, like fetch next child or fetch next sibling. But the installed operators are up to the user, and the system is still set up to manage custom operators, including supporting a bind system.
So now, to switch the databases underneath, I just have the new database execute the required methods. So, with about 1/3 of a page of software, the Json parser/convolver is layer free.
I immediately wrote a database, with another third of a page of C code. My database will extract nested stores from a linear array of 100 elements! (In other words, testing the parser/convolver separately is now a breeze.)
Then, I implemented two more characters in the ugly set, the parenthesis and the dollar. The dollar is a special naming syntax that says: use relative indexing. The parenthesis says: the relative indexing base Element is this one.
Thus, I now have an execution unit that executes compiled JavaScript (minus the arithmetic and conditionals). At this point I have a nearly overflowing one page of C code. So the six weeks of pondering yields one page of C code. But, the catch: JavaScript execution is now a generalized join, a sub-method of general graph convolution. JavaScript is nothing but a little cursor that traces the JavaScript through its name/value nests until it reaches native code. In this case, the native code is the custom and required operators.
What happened to indexing?
The default is a local symbol table, then find things by scan and match. I will ponder another six weeks and find a half page of code that uses someone else's indexing method.
One more thing: I cut and pasted the Windows dll code, so the system now allows native code. The Dll methods are tied to method names, so in JavaScript it is:
method_name:[native_code]
Why is this simple? Because the software industry has layered and relayered for 30 years, and the relayering software is getting automated and modular.
Tuesday, April 3, 2012
How is the NoSql project going, you ask?
I have done only research, mainly looking closely at Javascript and BerkeleyDB.
I wrote a white paper, but it's a secret. The conclusion of the paper is that I got it exactly right: there will be a semantic processor based on Sqlite3. Otherwise, mainly I work the theory.
How is the new internet connection going?
I had to cut ties with ATT. So I signed up for Comcast, and they are generally about 6 weeks late in delivering equipment. I am using my nephew's computer. Service providers seem to be having a tough time.