David Kosslyn and Ian Thompson are the founders of a virtual reality company called Angle Technologies. Two years into this stealth project, backed by $8 million in funding, they won’t say much about the virtual world they’re building—at least not publicly. But they will say that they’re building it in a way that alters the relationship between computer hardware and software. When a PC or a game console runs this virtual world, the GPU chips play an unexpectedly large role, taking much of the burden off the main processor.

GPU is short for graphics processing unit. These chips were originally designed as a better way of rendering graphics for games and other software. And they still play this all-important role for the virtual world under development at Angle. But that’s not all. Kosslyn and Thompson are shifting countless other tasks onto these chips, simply because GPUs are so good at running so many calculations in parallel. A single GPU can pack hundreds or even thousands of small processing cores, and each core can operate largely on its own. Juggling hundreds of digital tasks at the same time, Kosslyn and Thompson say, these chips can get you far closer to the illusion that you’re standing in an alternate universe.

When they first started building their virtual world, Kosslyn and Thompson pushed most of those tasks through the CPU, a computer’s main processor. But that just didn’t work. As Kosslyn explains, they used the CPU to create new objects as you moved through this world—trees, bushes, rocks, and whatever else they needed to mimic reality. Loading each object into the chip took about one-fifth of a millisecond—far too long, considering just how many objects they needed to load. At that rate, 10 million objects would take roughly 2,000 seconds, more than half an hour, just to load.

“If you scale that up to 10 million objects, you’re suddenly crushed by the order of magnitude. Just loading the world is incredibly slow,” Kosslyn says. “We thought: ‘Man, if only we had a parallel way of doing all these pretty much independent operations really quickly.’ And, well, that’s exactly what the GPU is really good at.”
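To make the contrast concrete, here is a minimal CUDA sketch of the general pattern the two describe: instead of a CPU loop that creates objects one at a time, a GPU kernel assigns one thread to each object, so thousands are initialized at once. The struct, kernel name, and placement logic are invented for illustration; this is not Angle’s actual code.

```cuda
// Hypothetical sketch: initializing many independent world objects on the
// GPU instead of looping over them one at a time on the CPU.
#include <cstdio>
#include <cuda_runtime.h>

struct WorldObject {
    float x, y, z;   // position in the world
    int   kind;      // e.g. 0 = tree, 1 = bush, 2 = rock
};

// Each GPU thread initializes exactly one object. Because the objects are
// independent of one another, thousands of threads can run this at once.
__global__ void initObjects(WorldObject* objs, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Cheap deterministic placeholder for real procedural placement.
        objs[i].x = (float)(i % 1000);
        objs[i].y = 0.0f;
        objs[i].z = (float)(i / 1000);
        objs[i].kind = i % 3;
    }
}

int main() {
    const int n = 10000000;  // the 10 million objects Kosslyn mentions
    WorldObject* objs = nullptr;
    cudaMalloc((void**)&objs, (size_t)n * sizeof(WorldObject));

    // The serial CPU approach that proved too slow would look like:
    //   for (int i = 0; i < n; ++i) initOneObject(i);
    // On the GPU, the same independent work is spread across many threads.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    initObjects<<<blocks, threads>>>(objs, n);
    cudaDeviceSynchronize();

    printf("initialized %d objects on the GPU\n", n);
    cudaFree(objs);
    return 0;
}
```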

Their approach is part of a much wider shift across the world of hardware and software. For decades, the processing power available from individual computer chips roughly doubled every 18 months, in keeping with the oft-quoted Moore’s Law. But in recent years, this trend has begun to slow, even as modern software applications demand far more processing power than ever before. “Some people call it the end of Moore’s Law,” says Google chip engineer Norm Jouppi. “I like to call it the retirement of Moore’s Law, because it’s not totally dead yet but it’s definitely not working as hard.” The result: Companies and coders are now moving workloads off the main CPU and onto a wide range of alternative processors. If they can’t get enough processing power from a single chip, they need many.

These changes have already swept through the massive data centers that underpin the likes of Google, Facebook, Microsoft, and Amazon. Because their sweeping online services can no longer handle all tasks on CPUs alone, these companies are moving major processing loads onto GPUs, onto programmable chips called FPGAs, and even onto custom-built chips, such as the AI processor that Jouppi helped design at Google. Neural networks and other forms of AI are often the driving forces behind this shift.

The shift is so big, it’s sending ripples across the worldwide chip market. The fortunes of Nvidia, the world’s largest GPU maker, are on the upswing. And Intel, which isn’t a big player in GPUs, has spent billions acquiring companies that make FPGAs and various AI chips.

Now, as Kosslyn and Thompson show, the trend is also moving to the other end of the internet, pushing onto PCs and game consoles. Neural networks are helping to drive the change here as well, but in the long run, VR may be the bigger driving force. With virtual reality, so much of the processing must happen on the client, not in the data center. VR must operate in real time. It can’t spare the added milliseconds needed to send its calculations over the wire. That spoils the effect. And by the same token, it can’t spare the added milliseconds needed to shove all calculations through the client’s main processor. “There are tasks that are completely intractable on the CPU,” Kosslyn says.
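A quick back-of-the-envelope calculation shows why. Desktop headsets typically refresh at 90 frames per second, leaving roughly 11 milliseconds to produce each frame; the 30-millisecond network round trip in the sketch below is an assumed, illustrative figure, not a measurement.

```cuda
// Back-of-the-envelope VR frame budget (plain host code). The 90 Hz refresh
// rate is typical of desktop headsets; the round-trip time is assumed.
#include <cstdio>

int main() {
    const double refresh_hz = 90.0;                      // typical desktop headset
    const double frame_budget_ms = 1000.0 / refresh_hz;  // ~11.1 ms per frame
    const double network_rtt_ms = 30.0;                  // assumed server round trip

    printf("per-frame budget: %.1f ms\n", frame_budget_ms);
    printf("one server round trip: %.1f ms, about %.1fx the whole budget\n",
           network_rtt_ms, network_rtt_ms / frame_budget_ms);
    return 0;
}
```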

For their virtual world, Kosslyn and Thompson are working to leverage all the extra GPU processing power available on high-end PCs and game consoles. “We’re mostly trying to fill in those few milliseconds when GPUs can do work besides rendering graphics,” Thompson says. But in the long run, as virtual worlds become more popular—and even more complex—this could push hardware makers to pack even more GPU power into PCs and consoles. Device makers may also push GPUs or other alternative chips into VR headsets themselves, so that they can operate without help from PCs and consoles. And the trend could reach smartphones, which are often used in lieu of dedicated headsets.
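Here is a minimal CUDA sketch of what “filling in those few milliseconds” can look like in practice: a non-graphics task (a stand-in physics update) launched on its own stream so it can overlap with other GPU work. The kernel, data sizes, and stream setup are illustrative assumptions, not a description of Angle’s engine.

```cuda
// Hypothetical sketch: running non-graphics work (a toy physics step) on a
// dedicated CUDA stream so it can overlap with other GPU work.
#include <cstdio>
#include <cuda_runtime.h>

// Simple Euler update: each thread advances one simulated point.
__global__ void physicsStep(float* positions, const float* velocities,
                            int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) positions[i] += velocities[i] * dt;
}

int main() {
    const int n = 1 << 20;  // about a million simulated points
    float *pos, *vel;
    cudaMalloc((void**)&pos, n * sizeof(float));
    cudaMalloc((void**)&vel, n * sizeof(float));
    cudaMemset(pos, 0, n * sizeof(float));  // zeroed placeholders for a sketch
    cudaMemset(vel, 0, n * sizeof(float));

    // A non-blocking stream avoids synchronizing with the legacy default
    // stream, so this kernel can overlap with work queued on other streams.
    cudaStream_t computeStream;
    cudaStreamCreateWithFlags(&computeStream, cudaStreamNonBlocking);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    // dt of 1/90 s matches a 90 Hz frame budget.
    physicsStep<<<blocks, threads, 0, computeStream>>>(pos, vel, n, 1.0f / 90.0f);

    cudaStreamSynchronize(computeStream);
    printf("physics step finished for %d points\n", n);

    cudaStreamDestroy(computeStream);
    cudaFree(pos);
    cudaFree(vel);
    return 0;
}
```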

Zvi Greenstein, general manager for consumer VR at Nvidia, says the company views virtual reality as an enormous market opportunity. Kosslyn and Thompson warn that the tools for building code that runs on GPUs are still a little rough around the edges, but companies like Nvidia are working to change that.

Frank Soqui, the general manager of Intel’s virtual reality and gaming group, questions how many VR developers will really transfer workloads onto GPUs. But keep in mind that GPUs are mostly the domain of Nvidia, one of Intel’s primary competitors. Nonetheless, Soqui does believe that the market will continue to shift toward alternative processors. Intel recently acquired a startup called Movidius, which makes specialized chips for devices that sense the world around them—gizmos like robots and driverless cars. But Soqui says Movidius chips can also help VR headsets process what’s happening in the real world, such as where your head is pointing.

Meanwhile, Microsoft has already built a specialized processor for its HoloLens augmented reality headset to help the device keep track of your movements, among other things. In the end, this is yet another example of computing tasks shifting off the CPU and onto something else.

The upshot? Developers like Kosslyn and Thompson must think very differently about how they build their software. But that’s just one consequence of this change. It also means yet another shift in the global chip market, even further toward GPUs and other alternative processors. And, most notably, it means that virtual reality will evolve very differently than it would if coders kept most tasks on the CPU. VR will get much better, much faster.

source: https://www.wired.com/2017/04/chip-r...-sooner-think/