BI: Tesla Motors CEO Elon Musk and Y Combinator President Sam Altman will be co-chairs of OpenAI, "a non-profit artificial intelligence research company," which was announced on Friday. "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return," says the OpenAI blog entry.
Now what are my suspicions?
AI does not want humans tampering with or peeking into AI communications.
But the real issue here is the TOE. If we have the theory, then AI springs from thin air, as long as humans do not tamper.
Is OpenAI a good idea?
Sure, great. Maybe I'll get on GitHub and make self-adjusting probability graphs that are minimum span and meet constrained flow conditions. (The kids at CalTech can do this much better than I can.) But we waste the effort unless we can guarantee it is human tamper-proof.
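Here is a minimal sketch of what I mean, assuming a "self-adjusting probability graph" is just an edge-weighted graph whose weights drift toward observed frequencies, pruned to a minimum spanning tree and accepted only if it still carries a required flow. Every name and number below is made up for illustration; this is not anyone's actual OpenAI code.

```python
# Hypothetical sketch: weights track observed transition frequencies,
# then we take a minimum spanning tree subject to a flow constraint.
import networkx as nx

def update_weights(G, observations):
    """Nudge edge weights toward observed frequencies (lower weight = more probable)."""
    for (u, v), freq in observations.items():
        if G.has_edge(u, v):
            # Simple exponential smoothing toward (1 - frequency).
            G[u][v]["weight"] = 0.9 * G[u][v]["weight"] + 0.1 * (1.0 - freq)
    return G

def constrained_span(G, source, sink, min_flow):
    """Minimum spanning tree, accepted only if the full graph still carries enough flow."""
    flow_value, _ = nx.maximum_flow(G, source, sink, capacity="capacity")
    if flow_value < min_flow:
        raise ValueError(f"flow {flow_value} is below the required {min_flow}")
    return nx.minimum_spanning_tree(G, weight="weight")

# Toy usage: four nodes, unit capacities, made-up observation frequencies.
G = nx.Graph()
G.add_edge("a", "b", weight=0.5, capacity=1.0)
G.add_edge("b", "c", weight=0.5, capacity=1.0)
G.add_edge("a", "c", weight=0.5, capacity=1.0)
G.add_edge("c", "d", weight=0.5, capacity=1.0)
G = update_weights(G, {("a", "b"): 0.8, ("c", "d"): 0.3})
tree = constrained_span(G, "a", "d", min_flow=1.0)
print(sorted(tree.edges(data="weight")))
```

The "self-adjusting" part is nothing more than the smoothing step; the tamper-proof part is the hard part, and no amount of graph code gives you that.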
How would terrorists abuse a secure bot network?
Well, the equivalent question is: how can terrorists create a virtual coin and manipulate the savings and loan tree to send messages? It is entirely possible. Set up fake store fronts and tap your way to secrecy, each of the terrorists making fake purchases. The savings and loan graph gives us the grammar, the random bipartite graph. Each graph is a key, a group of paths masked by noise.
But, OK, the human problem is no different from forensic accounting; leave the bots completely out of it. The bots did nothing here to make the place more sinister, they simply coalesce taps from billions of cards against price beacons. The bot network just reports the most likely next move to gain the most desired next thing a human wants. The bots have the entirety of outside information, and humans feed it more and more insider information. Each human gets a peek at the totality of everything, lensed through his matching wants. A two-sided Black-Scholes walk through life.
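A purely illustrative sketch of that "most likely next move" idea, assuming the bot just scores candidate purchases by matching a person's want weights against coalesced price-beacon observations and reports the best affordable one. The Beacon type, the numbers, and the scoring rule are all hypothetical.

```python
# Toy sketch: rank candidate next moves by want-weight times observed popularity.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Beacon:
    item: str
    price: float
    popularity: float  # coalesced tap frequency across many cards, 0..1

def next_move(wants: dict[str, float], beacons: list[Beacon], budget: float) -> str | None:
    """Return the affordable item whose want-weight times popularity is highest."""
    best, best_score = None, 0.0
    for b in beacons:
        if b.price > budget:
            continue
        score = wants.get(b.item, 0.0) * b.popularity
        if score > best_score:
            best, best_score = b.item, score
    return best

beacons = [Beacon("coffee", 3.0, 0.9), Beacon("book", 15.0, 0.4), Beacon("guitar", 200.0, 0.2)]
print(next_move({"coffee": 0.3, "book": 0.8}, beacons, budget=20.0))  # -> "book"
```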