In olden days, the index in the back of a book was a graph, in nested order, of linked keywords. These days, word-processing software can build that index automatically from tagged keywords. But my point is that search engines try to do something similar: discover the small set of semantic terms that maximally covers an arbitrary set of linked web pages.
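That "maximally cover" idea is, at heart, greedy set cover. Here is a minimal sketch: the keyword-to-page mapping and the page names are invented for illustration, not taken from any real search engine.

```python
# Greedy set cover: repeatedly pick the keyword whose pages cover
# the most still-uncovered pages. (Illustrative data, not real.)

def greedy_keyword_cover(pages, keyword_pages):
    """Return a small list of keywords whose page sets cover `pages`."""
    uncovered = set(pages)
    chosen = []
    while uncovered:
        # Keyword covering the most still-uncovered pages wins this round.
        best = max(keyword_pages, key=lambda k: len(keyword_pages[k] & uncovered))
        if not keyword_pages[best] & uncovered:
            break  # remaining pages cannot be covered by any keyword
        chosen.append(best)
        uncovered -= keyword_pages[best]
    return chosen

keyword_pages = {
    "gravity": {"p1", "p2"},
    "quantum": {"p2", "p3", "p4"},
    "entropy": {"p4", "p5"},
}
print(greedy_keyword_cover({"p1", "p2", "p3", "p4", "p5"}, keyword_pages))
```

The greedy heuristic is not optimal in general, but it is the standard cheap approximation for this kind of coverage problem.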
I like that the Wiki is group-organized, and its semi-formal organization makes it a great subject for web indexing. In the Wiki layout for physics, groups of experts debate the hierarchy of the scientific discourse, so the Wiki hierarchy comes to match how physicists arrange the semantics. It is formal enough that an XSLT transform could turn the Wiki pages into linked sets of keywords. If the Wiki did this, then any personalized search engine could download, or create, semantic graphs of keywords for specific sections of the Wikinet.
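The transform itself is straightforward because wiki section headings already encode the hierarchy. This sketch does it in Python rather than XSLT, and the sample markup is invented for illustration:

```python
import re

# Turn wikitext headings (== Title ==) into a nested keyword tree.
# Each node is a (keyword, children) pair; heading depth = number of '='.

def heading_tree(wikitext):
    """Parse == ... == headings into a nested (keyword, children) tree."""
    root = ("root", [])
    stack = [(1, root)]  # (heading level, node)
    for line in wikitext.splitlines():
        m = re.match(r"^(={2,})\s*(.+?)\s*\1$", line)
        if not m:
            continue
        level, title = len(m.group(1)), m.group(2)
        while stack and stack[-1][0] >= level:
            stack.pop()  # climb back up to this heading's parent
        node = (title, [])
        stack[-1][1][1].append(node)
        stack.append((level, node))
    return root

sample = """== Mechanics ==
=== Kinematics ===
=== Dynamics ===
== Thermodynamics ==
"""
print(heading_tree(sample))
```

The output is exactly the "linked sets of keywords" shape: a graph, in nested order, ready to be serialized or stored.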
In the case of the user and the patented Imagisoft Search Gadget, the user could select 'analyze' for any of the returned search suggestions, causing his personal widget to run the keyword analysis and store the results in WebSQL in nested-order form. The keyword algorithm is a network tree-pruning exercise.
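"Nested-order form" in a SQL table usually means the classic nested-set model, and pruning then becomes a single range delete. Here is a sketch using Python's `sqlite3` as a stand-in for WebSQL (which was SQLite-backed in the browsers that shipped it); the table, column, and keyword names are invented for illustration:

```python
import sqlite3

# Nested-set model: number each keyword with left/right bounds from a
# depth-first walk, so a subtree is just a (lft, rgt) interval.

def number_tree(node, counter=None, rows=None):
    """Assign left/right bounds to each (keyword, children) node."""
    if counter is None:
        counter, rows = [0], []
    counter[0] += 1
    lft = counter[0]
    for child in node[1]:
        number_tree(child, counter, rows)
    counter[0] += 1
    rows.append((node[0], lft, counter[0]))
    return rows

tree = ("physics", [("mechanics", [("kinematics", [])]), ("optics", [])])
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE keywords (name TEXT, lft INT, rgt INT)")
db.executemany("INSERT INTO keywords VALUES (?, ?, ?)", number_tree(tree))

# Pruning the descendants of one keyword is a single range delete.
bounds = db.execute("SELECT lft, rgt FROM keywords WHERE name='mechanics'").fetchone()
db.execute("DELETE FROM keywords WHERE lft > ? AND rgt < ?", bounds)
print(db.execute("SELECT name FROM keywords ORDER BY lft").fetchall())
```

Ordering by `lft` reads the surviving keywords back in nested order, which is what the widget would hand to the search suggestions.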