Now we get to the fun part. Using one neural network is really great for learning patterns; using two is really great for creating them. Welcome to the magical, terrifying world of generative adversarial networks, or GANs.
GANs are having a bit of a cultural moment. They are responsible for the first piece of AI-generated artwork sold at Christie’s, as well as the category of fake digital images known as “deepfakes.”
Their secret lies in the way two neural networks work together—or rather, against each other. You start by feeding both neural networks a whole lot of training data and give each one a separate task. The first network, known as the generator, must produce artificial outputs, like handwriting, videos, or voices, by looking at the training examples and trying to mimic them. The second, known as the discriminator, then determines whether the outputs are real by comparing each one with the same training examples.
Each time the discriminator successfully rejects the generator’s output, the generator goes back to try again. To borrow a metaphor from my colleague Martin Giles, the process “mimics the back-and-forth between a picture forger and an art detective who repeatedly try to outwit one another.” Eventually, the discriminator can’t tell the difference between the output and training examples. In other words, the mimicry is indistinguishable from reality.
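To make the forger-versus-detective loop concrete, here is a minimal sketch in plain NumPy: a one-parameter "generator" (an affine map from noise) learns to mimic samples from a toy Gaussian, while a logistic-regression "discriminator" tries to tell real samples from fakes. The toy distribution, parameter names, and learning rate are all illustrative assumptions; real GANs use deep networks on images, video, or audio, not scalars.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": samples from the true distribution the generator must mimic.
def real_samples(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator: an affine map from noise z to a fake sample (parameters g_w, g_b).
# Discriminator: logistic regression estimating P(sample is real).
g_w, g_b = 1.0, 0.0
d_w, d_b = 0.0, 0.0
lr = 0.05

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(d_w * x + d_b)))

for step in range(2000):
    # Discriminator's turn: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_samples(32)
    x_fake = g_w * rng.normal(size=32) + g_b
    p_real, p_fake = discriminate(x_real), discriminate(x_fake)
    # Gradients of the binary cross-entropy loss w.r.t. d_w and d_b.
    d_w -= lr * (np.mean((p_real - 1) * x_real) + np.mean(p_fake * x_fake))
    d_b -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator's turn: push D(fake) toward 1, i.e. try to fool the detective.
    z = rng.normal(size=32)
    p_fake = discriminate(g_w * z + g_b)
    upstream = (p_fake - 1) * d_w  # gradient flowing back through D into G
    g_w -= lr * np.mean(upstream * z)
    g_b -= lr * np.mean(upstream)

print(f"fake samples now centered near {g_b:.2f} (real data centered at 4.0)")
```

Each round, the discriminator sharpens its test and the generator shifts its output toward whatever currently passes that test; as the two settle, the fake samples drift toward the real distribution's center.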
The two bots, one generating samples and the other judging them, are both trained on the same data set. As they learn, the generator produces patterns the discriminator accepts, and the mimicry becomes new creation. The same system can work by mapping one person's face onto another's, producing a composite.
But this is what we do in S&L matching, except that we use Huffman networks and match the resulting graphs; we have a kind of graph calculus. The depositors and borrowers force each other to innovate, mutually hedging all the known sweet spots; otherwise the pit boss gets rich. We call this sphere packing.