Image - IBM scientists at the SUNY Polytechnic Institute Colleges of Nanoscale Science and Engineering’s NanoTech Complex in Albany, NY prepare test wafers with 5nm silicon nanosheet transistors, loaded into front opening unified pods, or FOUPs, to test an industry-first process of building 5nm transistors using silicon nanosheets.

The limits of silicon have not been reached quite yet.

Today, an IBM-led group of researchers has detailed a breakthrough transistor design, one that will enable processors to continue their Moore’s Law march toward smaller, more affordable iterations. Better still? They achieved it not with carbon nanotubes or some other theoretical solution, but with an inventive new process that actually works, and should scale up to the demands of mass manufacturing within several years.

That should also, conveniently enough, be just in time to power the self-driving cars, on-board artificial intelligence, and 5G sensors that comprise the ambitions of nearly every major tech player today. Getting there was no sure thing.

5nm Or Bust

For decades, the semiconductor industry has obsessed over smallness, and for good reason. The more transistors you can squeeze into a chip, the more speed and power efficiency gains you reap, at lower cost. The famed Moore’s Law is simply the observation made by Intel co-founder Gordon Moore, in 1965, that the number of transistors on a chip had doubled every year. In 1975, Moore revised that estimate to every two years. While the industry has fallen off that pace, it still regularly finds ways to shrink.

Doing so has required no shortage of inventiveness. The last major breakthrough came in 2009, when researchers detailed a new type of transistor design called FinFET. The first manufacturing of a FinFET transistor design in 2012 gave the industry a much-needed boost, enabling processors made on a 22-nanometer process. FinFET was a revolutionary step in its own right, and the first major shift in transistor structure in decades. Its key insight was to use a 3-D structure to control electric current, rather than the 2-D “planar” system of years past.

“Fundamentally, FinFET structure is a single rectangle, with the three sides of the structure covered in gates,” says Mukesh Khare, vice president of semiconductor research for IBM Research. Think of the transistor as a switch; applying different voltages to the gate turns the transistor “on” or “off.” Having three sides surrounded by gates maximizes the amount of current flowing in the “on” state, for performance gains, and minimizes the amount of leakage in the “off” state, which improves efficiency.
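The intuition here can be sketched in a few lines of code. This is a toy illustration only, not real device physics: the function and its numbers are invented to show the qualitative point that wrapping the gate around more sides of the channel raises the “on” current and cuts the “off” leakage.

```python
def switch_currents(gated_sides):
    """Illustrative (on_current, off_leakage) for a transistor channel
    controlled on the given number of sides. The formulas are made-up
    placeholders, not a device model."""
    on_current = 1.0 * gated_sides           # drive current grows with gate coverage
    off_leakage = 1.0 / (10 ** gated_sides)  # leakage shrinks with tighter control
    return on_current, off_leakage

planar = switch_currents(1)  # old 2-D planar design: gate on one side only
finfet = switch_currents(3)  # FinFET: gate wraps three sides of the fin

print(planar)  # (1.0, 0.1)
print(finfet)  # (3.0, 0.001)
```

The nanosheet design described below pushes the same trend one step further, effectively gating a fourth side.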

But just five years later, those gains already threaten to run dry. “The problem with FinFET is it’s running out of steam,” says Dan Hutcheson, CEO of VLSI Research, which focuses on semiconductor manufacturing. While FinFET underpins today’s bleeding-edge 10nm process chips, and should be sufficient for 7nm as well, the fun stops there. “Around 5nm, in order to keep the scaling and transistor working, we need to move to a different structure,” Hutcheson says.

Enter IBM. Rather than FinFET’s vertical fin structure, the company—along with research partners GlobalFoundries and Samsung—has gone horizontal, layering silicon nanosheets in a way that effectively results in a fourth gate.

Image - A scan of IBM Research Alliance’s 5nm transistor, built using an industry-first process to stack silicon nanosheets as the device structure.

“You can imagine that FinFET is now turned sideways, and stacked on top of each other,” says Khare. For a sense of scale, in this architecture electrical signals pass through a switch that’s the width of two or three strands of DNA.

“It’s a big development,” says Hutcheson. “If I can make the transistor smaller, I get more transistors in the same area, which means I get more compute power in the same area.” In this case, that number leaps from 20 billion transistors on a fingernail-sized 7nm chip to 30 billion on a 5nm chip of the same size. IBM pegs the gains at either 40 percent better performance at the same power, or a 75 percent reduction in power at the same performance.
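The arithmetic behind those figures is simple enough to check. A back-of-the-envelope sketch, using only the numbers quoted above:

```python
# Transistor counts quoted for a fingernail-sized chip at each node.
transistors_7nm = 20_000_000_000  # 20 billion at 7nm
transistors_5nm = 30_000_000_000  # 30 billion at 5nm

density_gain = transistors_5nm / transistors_7nm  # 1.5x the transistors
extra = transistors_5nm - transistors_7nm         # 10 billion more

print(f"{density_gain:.1f}x the transistors ({extra:,} more) in the same area")
# 1.5x the transistors (10,000,000,000 more) in the same area
```

That 50 percent density jump is what IBM translates into either the 40 percent performance gain at fixed power, or the 75 percent power savings at fixed performance.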

Just in Time

The timing couldn’t be better.

Actual processors built off of this new structure aren’t expected to hit the market until 2019 at the earliest. But that roughly lines up with industry estimates for broader adoption of everything from self-driving cars to 5G, innovations that can’t scale without a functional 5nm process in place.

“The world’s sitting on this stuff, artificial intelligence, self-driving cars. They’re all highly dependent on more efficient computing power. That only comes from this type of technology,” says Hutcheson. “Without this, we stop.”

Take self-driving cars as a specific example. They may work well enough today, but they also require tens of thousands of dollars worth of chips to function, an impractical added cost for a mainstream product. A 5nm process drives those expenses way down. Think, too, of always-on IoT sensors that will collect constant streams of data in a 5G world. Or more practically, think of smartphones that can last two or three days on a charge rather than one, with roughly the same-sized battery. And that’s before you hit the categories that no one’s even thought of yet.

“The economic value that Moore’s Law generates is unquestionable. That’s where innovations such as this one come into play, to extend scaling not by traditional ways but coming up with innovative structures,” says Khare.

Widespread adoption of many of those technologies is still years away. And success in all of them will require a confluence of both technological and regulatory progress. At least when they get there, though, the tiny chips that make it all work will be right there waiting for them.

Wired