Posted by sedna
Nvidia Makes Breakthrough In Reducing AI Training Time
Nvidia, one of the technology companies heavily invested in artificial intelligence, has revealed a breakthrough in reducing the time it takes to train AI.
In a blog post, the company discussed the rise of unsupervised learning and generative modeling thanks to the use of generative adversarial networks (GANs).
Thus far, the majority of deep learning has relied on supervised learning, which gives machines a "human-like object recognition capability." As an example, Nvidia notes that "supervised learning can do a good job telling the difference between a Corgi and a German Shepherd, and labeled images of both breeds are readily available for training."
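The labeled-data paradigm described above can be illustrated with a deliberately tiny, hypothetical toy: labeled 2-D feature vectors stand in for tagged dog photos, and a nearest-centroid rule stands in for the trained classifier. Every name, number, and the classifier itself are illustrative assumptions, not Nvidia's method.

```python
import numpy as np

# Toy supervised learning: labeled examples of two "breeds"
# (2-D feature vectors) train a nearest-centroid classifier.
rng = np.random.default_rng(1)

# Labeled training data: class 0 clusters near (0, 0), class 1 near (5, 5).
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

# "Training": compute one centroid per labeled class.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Assign x to the class whose centroid is nearest."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(predict(np.array([0.3, -0.2])))  # lands in class 0's cluster
print(predict(np.array([4.8, 5.1])))   # lands in class 1's cluster
```

The point of the toy is the dependence on labels: without the array `y`, the centroids cannot be computed at all, which is exactly the cost unsupervised approaches aim to avoid.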
However, to give machines a more "imaginative" capability, such as envisioning how a wintry scene might look in summer, a research team headed by Sifei Liu turned to unsupervised learning and generative modeling. To show this in action, Nvidia gave the example below, in which the winter and sunny scenes on the left are the inputs and the AI's imagined corresponding summer and rainy scenes appear on the right.
https://img.purch.com/nvidia-ai-nips...Rpb24tMS5qcGc=
This work was made possible by using a pair of GANs built on a "shared latent space" assumption.
“The use of GANs isn’t novel in unsupervised learning, but the NVIDIA research produced results — with shadows peeking through thick foliage under partly cloudy skies — far ahead of anything seen before,” the company explained.
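The shared-latent-space idea can be sketched as a tiny, hypothetical toy: each domain gets its own encoder and generator, but both encoders map into one common latent space, so a sample encoded from one domain can be decoded into the other. The linear "networks", dimensions, and names below are all illustrative assumptions, not Nvidia's architecture, and no training is performed; the sketch only shows the wiring that makes translation possible.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_X, DIM_Z = 16, 4   # toy "image" and latent sizes (assumptions)

# Linear stand-ins for the encoder/decoder networks of each domain.
E1 = rng.standard_normal((DIM_Z, DIM_X))   # encoder, domain 1 (e.g. winter)
E2 = rng.standard_normal((DIM_Z, DIM_X))   # encoder, domain 2 (e.g. summer)
G1 = rng.standard_normal((DIM_X, DIM_Z))   # generator/decoder, domain 1
G2 = rng.standard_normal((DIM_X, DIM_Z))   # generator/decoder, domain 2

def translate_1_to_2(x1):
    """Encode a domain-1 sample into the SHARED latent space,
    then decode that code with the domain-2 generator."""
    z = E1 @ x1          # shared latent representation
    return G2 @ z        # "imagined" domain-2 counterpart

x_winter = rng.standard_normal(DIM_X)
x_summer = translate_1_to_2(x_winter)
print(x_summer.shape)    # (16,)
```

In the real system the encoders and generators are deep networks trained adversarially so that the translated output fools a discriminator for the target domain; the key structural trick, shared here, is that one latent code is reachable from either domain.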
Because the technique requires less labeled data, along with less of the time and effort needed to create and process it, deep learning experts will be able to apply it across several areas.
“For self-driving cars alone, training data could be captured once and then simulated across a variety of virtual conditions: sunny, cloudy, snowy, rainy, nighttime, etc,” Nvidia said.
Another illustration of unsupervised image-to-image translation networks came in a picture showing how images of a cat could be translated into images of leopards, cheetahs, and lions.
https://img.purch.com/nvidia-ai-nips...Rpb24tMi5qcGc=
Nvidia also pointed to the paper "Learning Affinity via Spatial Propagation Networks," headed by Liu. The paper includes "theoretical underpinnings of the neural network’s operation along with mathematical proofs of its implementation." Nvidia highlights the speed in particular: the network, which runs on GPUs with the CUDA parallel programming model, is up to 100 times faster than previously possible.
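As a rough, hypothetical illustration of the kind of linear propagation such networks build on (not Nvidia's implementation), the following one-directional 1-D scan blends each input value with its already-propagated neighbor using a per-position affinity weight. The real network learns these affinities and sweeps a 2-D image in several directions, and it is this regular, row-wise structure that maps well onto GPU parallelism.

```python
import numpy as np

def propagate_row(x, p):
    """Left-to-right scan: h[i] = (1 - p[i]) * x[i] + p[i] * h[i-1].
    p[i] in [0, 1] is the affinity to the previous position."""
    h = np.empty_like(x, dtype=float)
    h[0] = x[0]
    for i in range(1, len(x)):
        h[i] = (1.0 - p[i]) * x[i] + p[i] * h[i - 1]
    return h

x = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # a spike to diffuse rightward
p = np.full_like(x, 0.5)                  # uniform affinity (assumption)
print(propagate_row(x, p))
```

With affinity 0.5 everywhere, the spike decays geometrically to the right (0.5, 0.25, 0.125, ...); learned, spatially varying affinities let the network stop or steer this diffusion, e.g. at object boundaries.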