Intel Stops The Tick-Tock Clock

Although Intel has been producing chips based on the tick-tock pace for roughly a decade now, the last several ticks and tocks have not gone quite according to plan. The system began to break down after the Ivy Bridge tick. Ivy Bridge went off the beaten path a bit by bringing out a significantly improved iGPU architecture and a moderate improvement to CPU performance over the Sandy Bridge tock, which Intel referred to as a “tick+.”

via Intel Stops The Tick-Tock Clock

Consider this part 2 of a 3-part series looking at desktop computer technologies. Last week I wrote in detail about Samsung’s latest product announcements in their SSD line (specifically M.2 form factor storage). My conclusion then was that a sort of “topping out” is occurring slowly, almost spontaneously, across the different key industries that all funnel into computer manufacturing. Last week it was storage; today it’s the CPU.

The big notice was that all the tech news sites simultaneously took Intel’s latest product announcements and turned them into an interesting “story”. The timing couldn’t have been more fortuitous, as former Intel chief Andy Grove passed away almost simultaneously with the story coming out. The importance of this cannot be overstated, as Intel has controlled the narrative continuously since before their first-generation CPU was being marketed the world over. Intel’s brain trust realized a trend in manufacturing back when they left Fairchild to form Intel. Gordon Moore is credited with giving it words, but everyone in Silicon Valley could “sense” or see it themselves. The rate at which designers and manufacturers were improving their products correlated directly with the power requirements and the design-rule size of the transistors on the wafers. Each new generation of design rules made the features smaller. The byproduct was that the same devices (capacitor, transistor, gate) would use less power but could also run at a higher clock rate. Higher clocks mean faster data moving around for a slight increase in price. The price difference was due to re-equipping the manufacturing lines to handle the redesigned wafers. Other process improvements included using larger wafers to hold all the die needing to be processed. Wafers went from 1″ to 2″, 4″, 6″, 8″, and 12″, and each time a new generation of wafer was adopted, everyone retooled their production lines. Prices increased for the first generation of the new chips, but would eventually fall as R&D was plowed into making the next-generation chip.
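To put rough numbers on that trade-off, here is a back-of-the-envelope sketch of the classical Dennard-style scaling the paragraph describes. It is my own illustration, not Intel’s process data: the 0.7x linear shrink per generation and the scaling exponents are idealized textbook assumptions.

```python
# Idealized Dennard-style scaling: shrink the linear dimensions by S (< 1) and,
# in the classic model, area goes as S^2, clock speed as ~1/S, and dynamic
# power per device as ~S^2 (capacitance and voltage both drop with S).
# These exponents are textbook assumptions, not any real foundry's figures.

def scale_node(feature_nm, shrink=0.7):
    """Return rough per-device figures after one linear shrink of `shrink`x."""
    new_feature = feature_nm * shrink   # smaller design rule
    area_factor = shrink ** 2           # each device occupies ~S^2 the area
    clock_factor = 1 / shrink           # shorter gates switch ~1/S faster
    power_factor = shrink ** 2          # lower C and V: ~S^2 power per device
    return new_feature, area_factor, clock_factor, power_factor

feature = 90.0  # nm, an arbitrary starting node for illustration
for gen in range(1, 4):
    feature, area, clock, power = scale_node(feature)
    print(f"gen {gen}: {feature:5.1f} nm, "
          f"area x{area:.2f}, clock x{clock:.2f}, power/device x{power:.2f}")
```

Under those idealized assumptions, every generation buys you roughly 40% more clock and half the power per device, which is exactly why everyone kept retooling their lines to chase the next node.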

Moore’s Law, as it became known, was that every couple of years or so, components on a chip would shrink in size and run faster. Intel believed it and codified it into their schedules from 1971 onward, and there’s even a graph showing how closely they stuck to it, at least until the last year or two. That’s when bumps started to form as the design rules got closer to the minuscule 20-14nm feature sizes. For years everyone knew that, whatever mixes of chip materials and processes came along, CMOS (which didn’t really become king until the mid-to-late 1970s) would hit a wall. No amount of physics, electrical engineering, or materials engineering expertise could get around the limits of electrons moving through smaller and smaller features on a silicon chip. You can shrink the features down indefinitely, but the electrons stay the same size and become less predictable than they would be at larger design rules. Losses of energy through the gates or the oxide layers required more tricks (FinFET designs for gates, silicon-on-insulator for the oxide). At one point, switching to copper interconnects was the only way to keep things going as features got smaller. Lots of R&D went into finding more reliable light sources for exposing the photolithography masks used to etch features into the silicon wafers. There was talk of extreme UV or X-ray light sources, phase-shift masks, a lot of science, but none of it could keep the train rolling forever.
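For a rough sense of how quickly a steady shrink runs into that 20-14nm region, here is a small Python sketch of my own (not from the article): it starts from Intel’s first ~10-micron process in 1971 and assumes an idealized 0.7x linear shrink every two years.

```python
import math

# Illustrative projection only: the 1971 ~10 micron starting point is historical,
# but the fixed 0.7x shrink every two years is a simplifying assumption, not the
# actual node-by-node roadmap.
start_nm = 10_000.0     # Intel's first process, roughly 10 microns (1971)
target_nm = 14.0        # the feature-size region where the wall appears
shrink = 0.7            # assumed: classic ~0.7x linear shrink per node
years_per_node = 2      # assumed: one node every two years (idealized cadence)

nodes = math.ceil(math.log(target_nm / start_nm) / math.log(shrink))
print(f"{nodes} shrinks to go from {start_nm / 1000:.0f} um to ~{target_nm:.0f} nm")
print(f"At {years_per_node} years per node, that lands around {1971 + nodes * years_per_node}.")
```

The idealized two-year cadence reaches the 14nm region in the late 2000s, yet real 14nm parts didn’t ship until around 2014. That gap between the ideal schedule and reality is exactly the kind of bump described above.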

So now even Intel’s vaunted CPU improvement schedule is leveling out too, not unlike the wall Samsung is hitting with its NAND chips in SSDs. Things are slowing down and hitting an even keel. Improvements in chips will be spread out over ever-widening time periods across a larger family of products that are more similar to the previous generation. You may not see a 10% improvement in any aspect of a new chip on its release: no 10% power reduction, speed improvement, or feature-size shrinkage here. We’ll be lucky to get 2-3% changes in any one of those critical design aspects we always associated with Moore’s Law. The best we can hope for is gradual price reductions and possibly greater reliability over time, maybe. But 5GHz clocks, 80-core CPUs, and 1-nanometer feature sizes are NOT going to happen. In all the articles written last week, the only room for improvement stated categorically was to dump CMOS (complementary metal-oxide semiconductor) and instead go for a more expensive, exotic material (indium gallium arsenide) as the suggested logical way forward. These compound semiconductors have been used for decades in military work, in high-performance gear and radiation-hardened chips. Until now, there was no justification to commercialize these technologies for the consumer market. But how is Intel going to pay its bills unless it can charge more for each new generation of CPU? It will be interesting, but I suspect, as I wrote last week, that we’re seeing a long slow flattening and plateau of the desktop computer as we know it. Storage and CPUs have hit their peaks. I dare say we’re close to that, too, in the one last remaining technology in the desktop computer: the GPU (stay tuned for that next week).
