Resentment, Jealousy, Feuds: A Look at Intel’s Founding Team – Michael S. Malone – Harvard Business Review

Michael S. Malone is a U.S. author, a former editor of Forbes magazine, and host of a talk show on PBS. (Photo credit: Wikipedia)

Just when you think you understand the trio (as I thought I did up until my final interview with Grove) you learn something new that turns everything upside-down. The Intel Trinity must be considered one of the most successful teams in business history, yet it seems to violate all the laws of successful teams.

via Resentment, Jealousy, Feuds: A Look at Intel’s Founding Team – Michael S. Malone – Harvard Business Review.

Agreed. This is a topic near and dear to my heart, as I've read a number of the stories the tech press has published over the years: from Tracy Kidder's The Soul of a New Machine, to Fred Brooks's The Mythical Man-Month, to Steven Levy's Insanely Great; the story of Xerox PARC as told in Dealers of Lightning; the ARPANET project as told in Where Wizards Stay Up Late. And moving somewhat along those lines, Stewart Brand's The Media Lab and Howard Rheingold's Virtual Reality. All of these are, at some level, studies in organizational theory in the high-technology field.

And one thing you commonly find is a single charismatic individual who joins up at some point (early or late, it doesn't matter) and brings in a flood of followers and talent: the kick in the pants that really gets momentum going. The problem with a startup like Intel, or its predecessor Fairchild Semiconductor, is that there is more than one charismatic individual. And keeping that organization stitched together, even just loosely, is probably the biggest challenge of all. So I'll be curious to read this book by Michael Malone and see how it compares to the other books in my anthology of organizational theory in high tech. It should be a good, worthwhile read.

 


The Memory Revolution | Sven Andersson | EE Times

A 256Kx4 dynamic RAM chip on an early PC memory card. (Photo by Ian Wilson) (Photo credit: Wikipedia)

In almost every kind of electronic equipment we buy today, there is memory in the form of SRAM and/or flash memory. Following Moore's law, memories have doubled in size every second year. When Intel introduced the 1103 1Kbit dynamic RAM in 1971, it cost $20. Today, we can buy a 4Gbit SDRAM for the same price.

via The Memory Revolution | Sven Andersson | EE Times

A look back from an Ericsson engineer surveying the use of solid-state, chip-based memory in electronic devices. It is always interesting to see how these things started and evolved over time. Advances in RAM design and manufacturing are the quintessential example of Moore's Law, even more so than the advances in processors over the same period. Yes, CPUs are cool and very much a foundation upon which everything else rests (especially dynamic RAM storage). But remember: Intel didn't start out making microprocessors. It started out as a dynamic RAM chip company, at a time when DRAM was just entering the market. That business is how even Gordon Moore came to know the rate at which change was possible in silicon-based semiconductor manufacturing.
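The quoted numbers line up surprisingly well with that two-year doubling claim. A quick back-of-the-envelope check (a sketch; I'm assuming roughly 2013 as the article's "today"):

```python
# Check the "doubling every second year" claim using the article's data:
# a 1 Kbit DRAM for $20 in 1971 versus a 4 Gbit SDRAM for $20 today.
import math

bits_1971 = 1 * 1024          # Intel 1103: 1 Kbit
bits_today = 4 * 1024**3      # 4 Gbit
years = 2013 - 1971           # assumed span, ~42 years

doublings = math.log2(bits_today / bits_1971)      # 22 doublings
print(f"{doublings:.0f} doublings in {years} years")
print(f"about {years / doublings:.1f} years per doubling at constant price")
# -> roughly 1.9 years per doubling, i.e. almost exactly "every second year".
```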

Now we’re looking at mobile smartphone processors and System on Chip (SoC) advancing the state of the art. Desktop and server CPUs are making incremental gains but the smartphone is really trailblazing in showing what’s possible. We went from combining the CPU with the memory (so-called 3D memory) and now graphics accelerators (GPU) are in the mix. Multiple cores and soon fully 64bit clean cpu designs are entering the market (in the form of the latest model iPhones). It’s not just a memory revolution, but it is definitely a driver in the market when we migrated from magnetic core memory (state of the art in 1951-52 while developed at MIT) to the Dynamic RAM chip (state of the art in 1968-69). That drive to develop the DRAM brought all other silicon based processes along with it and all the boats were raised. So here’s to the DRAM chip that helped spur the revolution. Without those shoulders, the giants of today wouldn’t be able to stand.


AnandTech – The Intel Ivy Bridge Core i7 3770K Review

Similarly disappointing for everyone who isn't Intel, it's been more than a year after Sandy Bridge's launch and none of the GPU vendors have been able to put forth a better solution than Quick Sync. If you're constantly transcoding movies to get them onto your smartphone or tablet, you need Ivy Bridge. In less than 7 minutes, and with no impact to CPU usage, I was able to transcode a complete 130 minute 1080p video to an iPad friendly format—that's over 15x real time.

via AnandTech – The Intel Ivy Bridge Core i7 3770K Review.

Quick Sync, for anyone who doesn't follow Intel's technology white papers and CPU releases, is a special feature of Sandy Bridge-era Intel CPUs. Its lineage at Intel goes back to the Clarkdale series with embedded graphics (the first round of the 32nm design rule). It can do things like simply speed up the decoding of a video stream saved in any of a number of popular formats (VC-1, H.264, MPEG-4, etc.). Now it's marketed to anyone trying to speed up transcoding video from one format to another. The first Sandy Bridge CPUs using the hardware encoding portion of Quick Sync showed incredible speeds compared to the GPU-accelerated encoders of that era. And things have been kicked up a further notch in the embedded graphics of the Ivy Bridge series CPUs.

In the quote at the beginning of this article, I included a summary from the AnandTech review of the Intel Core i7 3770K that gives a sense of the magnitude of the improvement. The full 130-minute Blu-ray movie was converted at better than 15 times real time, meaning that for every minute of video coming off the disc, Quick Sync needs only about 4 seconds to transcode it! That is major progress for anyone who has followed this niche of desktop computing. Having spent time capturing, editing, and exporting video, I will admit transcoding between formats is a lengthy process that eats a lot of CPU resources. Offloading all that burden to the embedded graphics controller does away with the traditional impediment: the machine slowing to a crawl while you walk away and let it work.
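To make that arithmetic concrete, a quick illustrative script (the numbers come from the review itself):

```python
# Real-time-factor arithmetic from the AnandTech Quick Sync test.
video_minutes = 130        # source runtime in the review
measured_minutes = 7       # "in less than 7 minutes"

print(f"implied speedup: {video_minutes / measured_minutes:.1f}x")  # ~18.6x

# Even at a round 15x real time, each minute of source video needs only
# 60 / 15 = 4 seconds of transcoding:
print(f"at 15x: {60 / 15:.0f} seconds per minute of video")
```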

Now transcoding is trivial; it costs nearly nothing in terms of CPU load. Anything faster than real time means you don't have to walk away from your computer (or at least not for very long), and here we are at a full 15x real time for a full-length movie. The time spent is so short you would never stop to ask, "Will this transcode slow down the computer?" It won't. In fact you can continue doing all your other work, be productive, have fun, and carry on just as if you hadn't asked your computer to do what (up until now) was the most complicated, time-consuming chore you could possibly ask of it.

Knowing this application of the embedded graphics is so useful on desktop computers makes me wonder about scientific computing. What could Intel provide in the way of performance increases for simulations and computation in a supercomputer cluster? Seeing how hybrid supercomputers mixing nVidia Tesla GPU co-processors with Intel CPUs have steadily marched up the list of the Top 500 Supercomputers makes me think Intel could leverage Quick Sync further... much further. Unfortunately this performance boost depends solely on a few vendors of proprietary transcoding software. Open-source developers have no opening into the Quick Sync technology, no way to write a library that redirects a video stream into the Quick Sync acceleration pipeline. When somebody does accomplish that feat, it may be shortly afterward that you see Linux compute clusters attempt to use Quick Sync as an embedded algorithm accelerator too.
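For what it's worth, open tooling did eventually get that opening: later ffmpeg builds can target Quick Sync through the h264_qsv encoder, a development that postdates this post. A minimal sketch of driving such a transcode from Python, assuming an ffmpeg binary built with QSV support (file names here are hypothetical):

```python
# Sketch: drive a Quick Sync transcode via ffmpeg's QSV encoder.
# Assumes an ffmpeg build with Quick Sync (QSV) support on Intel hardware.
import subprocess

def transcode_qsv(src: str, dst: str, bitrate: str = "5M") -> None:
    """Transcode src to H.264 on the Quick Sync hardware block."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,            # input file (software decode here)
            "-c:v", "h264_qsv",   # encode on the Quick Sync fixed-function unit
            "-b:v", bitrate,      # target video bitrate
            "-c:a", "copy",       # pass audio through untouched
            dst,
        ],
        check=True,
    )

transcode_qsv("movie-1080p.mkv", "movie-ipad.mp4")  # hypothetical names
```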

Timeline of Intel processor codenames, including released, future and canceled processors. (Photo credit: Wikipedia)

AMD Snatches New-Age Server Maker From Under Intel | Wired Enterprise | Wired.com


Chip designer and chief Intel rival AMD has signed an agreement to acquire SeaMicro, a Silicon Valley startup that seeks to save power and space by building servers from hundreds of low-power processors.

via AMD Snatches New-Age Server Maker From Under Intel | Wired Enterprise | Wired.com.

It was bound to happen eventually, I guess. SeaMicro has been acquired by AMD. We'll see what happens as a result, since SeaMicro is a customer of Intel's Atom chips and, most recently, of Xeon server chips as well. I have no idea where this is going or what AMD intends to do, but hopefully it won't scare off any current or near-future customers.

SeaMicro's competitive advantage has been, and will continue to be, the development work it performed on the custom ASIC chip used in all its systems. That piece of intellectual property was in essence the reason AMD decided to acquire SeaMicro, and it should give AMD an engineering advantage in systems it brings to market for large-scale data centers.

While this is all pretty cool technology, I think that SeaMicro’s best move was to design its ASIC so that it could take virtually any common CPU. In fact, SeaMicro’s last big announcement introduced its SM10000-EX option, which uses low-power, quad-core Xeon processors to more than double compute performance while still keeping the high density, low-power characteristics of its siblings.

via SeaMicro acquisition: A game-changer for AMD • The Register.

So there you have it: Wired and The Register are both reporting the transaction pretty positively. On the surface it looks like a win for AMD, which can design new server products and get them to market quickly using the SeaMicro ASIC as a key ingredient. SeaMicro can keep servicing its current customers, and AMD can eventually up-sell or upgrade them as needed to keep the ball rolling. And with AMD's Fusion architecture marrying GPUs with CPU cores, who knows what cool new servers might be possible? But as usual the naysayers, the spreaders of Fear, Uncertainty and Doubt, have questioned the value of SeaMicro and its original product, the SM10000.

Diane Bryant, the general manager of Intel's data center and connected systems group, was asked at a press conference for the launch of new Xeon processors about SeaMicro's attempt to interest Intel in buying the company, and had this to say: "We looked at the fabric and we told them thereafter that we weren't even interested in the fabric." To Intel there is nothing special enough in SeaMicro to warrant buying the company. Furthermore, Bryant told Wired.com:

“…Intel has its own fabric plans. It just isn’t ready to talk about them yet. “We believe we have a compelling solution; we believe we have a great road map,” she said. “We just didn’t feel that the solution that SeaMicro was offering was superior.”

This is a move straight out of Microsoft's marketing department circa 1992, when it would pre-announce a product that never shipped and was barely developed beyond the prototype stage. If Intel were really working on this as a new product offering, you would have seen an announcement by now rather than a vague, tangential reference that reads more like a parting shot than a strategic direction. So I will be watching intently in the coming months and years to see what, if any, Intel "fabric technology" makes its way from the research lab to the development lab to a shipping product. Don't be surprised, however, if this is Intel attempting to undermine AMD's decision to purchase SeaMicro. Likewise, Forbes.com later reported, via a SeaMicro representative, that the company had never tried to encourage Intel to acquire it. It is anyone's guess who is correct and being 100% honest in their recollections. I am still betting on SeaMicro's long-term strategy of low-power, ultra-dense, massively parallel servers. It is an idea whose time has come.


ARM Pitches Tri-gate Transistors for 20nm and Beyond


. . . 20 nm may represent an inflection point in which it will be necessary to transition from a metal-oxide semiconductor field-effect transistor (MOSFET) to Fin-Shaped Field Effect Transistors (FinFET) or 3D transistors, which Intel refers to as tri-gate designs that are set to debut with the company's 22 nm Ivy Bridge product generation.

via ARM Pitches Tri-gate Transistors for 20nm and Beyond.

Three-dimensional transistors are in the news again. Previously Intel announced it was adopting the new design for its next-generation, next-smaller design rule in the Ivy Bridge generation of CPUs. Now ARM is working to integrate similar technology into its CPU cores as well. No doubt the need to lower the Thermal Design Power while maintaining clock speed is driving this move to refine and narrow the design rules for the ARM architecture. Knowing Intel is still the top research and development outfit in silicon semiconductors would give pause to anyone competing with it directly, but ARM is king of the low-power semiconductor, and keeping pace with Intel's design rules is an absolute necessity.

I don't know how quickly ARM will be able to get a licensee to jump on board and adopt the new design. Hopefully a large operation like Samsung can take it on and get the technology into its design, development, and production lines at a chip fabrication facility as soon as possible. Likewise, contract manufacturers like Taiwan Semiconductor Manufacturing Company (TSMC) should try to get it into their facilities quickly too. That way the cell-phone and tablet markets can benefit as well, since their shipping products use a lot of ARM-licensed CPU cores and similar intellectual property. My interest is not so much in the competition between Intel and ARM for low-power computing as in the overall performance of any single ARM design once it has been in production for a while and optimized, the way Apple designs its custom CPUs around ARM-licensed cores. The single most outstanding achievement of Apple's design and production of the iPad is its 10-hour battery life, which to date has not been beaten, even by other manufacturers and products that also license ARM intellectual property. So if the ARM design is good and can be validated and prototyped with useful yields quickly, Apple will no doubt be among the first to benefit, and by way of Apple, so (hopefully) will the consumer.

Schematic view (L) and SEM view (R) of Intel tri-gate transistors. (Image via Wikipedia)

AnandTech – Intel and Micron IMFT Announce World's First 128Gb 20nm MLC NAND

There's a point of diminishing returns for Flash memory, where further shrinking the chips makes them less and less durable over time. That led me to believe there is a size/durability "plateau" that most Flash memory manufacturers will soon reach. However, Intel's deep reserve of silicon semiconductor research is helping lead the charge to the next generation of denser, smaller Flash memory, and Micron is partnering with Intel to help.

NAND Flash memory circuit. (Image via Wikipedia)

The big question is endurance, however we won't see a reduction in write cycles this time around. IMFT's 20nm client-grade compute NAND used in consumer SSDs is designed for 3K – 5K write cycles, identical to its 25nm process.

via AnandTech – Intel and Micron IMFT Announce World's First 128Gb 20nm MLC NAND.

If true, this will help considerably in driving down the cost of Flash memory chips while holding the line on the wear and the performance drop seen over the lifetime of a chip. Stories I had read previously indicated that Flash memory might not continue to evolve on the current generation of silicon manufacturing technology. Performance drops as memory cells wear out, and memory cells were wearing out faster and faster as the wires and transistors on the Flash chip got smaller and narrower.

The reason is that memory cells must be erased in order to free them up, and each write and each erase takes a toll on the cell. Single-level cells (SLC) are the most robust and can go through many thousands, even millions, of write/erase cycles before wearing out. However, the cost per megabyte of SLC makes it a premium-priced, enterprise-level product for corporate customers, generally speaking. Multi-level cells (MLC), which store two bits per cell, are much more cost effective, but the structure of the cells makes them less durable than SLC. And as the wires connecting them get thinner and narrower, the number of write/erase cycles they can endure without failing drops significantly. Enterprise customers in the past would not purchase products specifically because of this limitation of the MLC design.
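To put those 3K – 5K cycle ratings in perspective, here is a rough endurance calculation (a sketch; the capacity and workload figures are my own illustrative assumptions):

```python
# Rough drive-endurance arithmetic: with ideal wear-leveling, total writable
# data is roughly capacity x rated program/erase cycles. Numbers illustrative.
capacity_gb = 256           # assumed consumer SSD capacity
pe_cycles = 3_000           # low end of IMFT's 3K-5K rating for 20nm MLC
daily_writes_gb = 20        # assumed heavy desktop workload

total_writes_tb = capacity_gb * pe_cycles / 1024
lifetime_years = (capacity_gb * pe_cycles) / daily_writes_gb / 365

print(f"~{total_writes_tb:.0f} TB of total writes")                 # ~750 TB
print(f"~{lifetime_years:.0f} years at {daily_writes_gb} GB/day")   # ~105 years
# Which is why 3K cycles, spread evenly, is plenty for client drives.
```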

As companies like Intel and Samsung made Flash memory chips smaller and less expensive to manufacture, the durability of the chips kept dropping. The question everyone asked: is there a point of diminishing returns where smaller design rules and thinner wires make the chips simply too fragile? The solution for most manufacturers is to add spare memory cells, "over-provisioning", so that when a cell fails a spare can be unlocked and the whole chip kept in service. Over-provisioning is the not-so-secret trick by which most Solid State Drives (SSDs) have handled the write/erase problem of MLC flash. But even then, how much do you over-provision? Another technique, wear-leveling, has the memory controller distribute writes and erases across ALL the cells available to it; a statistical scheme ensures that every cell suffers equally, absorbing the same amount of wear and tear. It is a difficult balancing act for the manufacturers of Flash memory, and for the storage vendors who consume those chips, to build products that perform adequately, do not fail unexpectedly, and do not cost too much for laptop and desktop makers to offer their customers.
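The wear-leveling idea is simple enough to show in a few lines. A toy sketch (my own illustration, not any vendor's actual controller logic):

```python
# Toy wear-leveling sketch: always write to the least-worn block so erase
# cycles spread evenly instead of burning out frequently rewritten blocks.
# Illustrative only; real controllers also remap logical addresses, handle
# static data, and track bad blocks.

class WearLeveler:
    def __init__(self, num_blocks: int, max_cycles: int = 3000):
        self.erase_counts = [0] * num_blocks   # per-block program/erase tally
        self.max_cycles = max_cycles

    def pick_block(self) -> int:
        """Choose the block with the fewest erases, keeping wear uniform."""
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        if self.erase_counts[block] >= self.max_cycles:
            raise RuntimeError("all blocks worn out")
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(num_blocks=8)
for _ in range(100):
    wl.pick_block()
print(wl.erase_counts)   # evenly spread: [13, 13, 13, 13, 12, 12, 12, 12]
```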

If Intel and Micron can successfully address the fragility of Flash chips as the wiring and design rules shrink, we will start to see larger memories in more mobile devices. I predict you will see iPhones and Samsung Android smartphones with upwards of 128GB of Flash storage. Similarly, tablets and ultra-mobile laptops will start to offer larger and larger SSDs. Costs should stay about where they are now in comparison to current shipping products; we'll just have more products to choose from, say 1TB SSDs instead of the more typical high-end 512GB SSDs we see today. Prices might also come down, but that is bound to take a little longer, until all the other Flash memory manufacturers catch up.

Wiring of a Flash memory cell. (Image via Wikipedia)

Intel Responds to Calxeda/HP ARM Server News (Wired.com)

Wired.com isn't the best at following the cloud data-center industry. And at least in part they want to keep their advertisers happy, so they will publish a Fear, Uncertainty and Doubt-raising response straight from an Intel PR engineer. Happily, the Intel folks aren't even fully aware of what people are doing with their SeaMicro and Quanta SQ-2 boxes, and they continue to beat the drum for virtualized servers on multi-core, highly clocked chips. That's the old-school thinking about what a compute cloud can be. The new school says: put the cloud in a single box, let the clock run slower, use less power, and everybody wins. Read on:

Now, you're probably thinking, isn't Xeon the exact opposite of the kind of extreme low-power computing envisioned by HP with Project Moonshot? Surely this is just crazy talk from Intel? Maybe, but Walcyzk raised some valid points that are worth airing.

via Cloudline | Blog | Intel Responds to Calxeda/HP ARM Server News: Xeon Still Wins for Big Data.

Structure of the TILE64 processor, Tilera's mesh network processor. (Image via Wikipedia)

So Intel gets an interview with a Condé Nast writer for a sub-blog of Wired.com. I doubt many purchasers or data-center architects consult Cloudline at Wired.com. All the same, I saw through the many thinly veiled bits of hand-waving and old saws from Intel saying, in effect, "Yes, this exists, but we're already addressing it with our existing product lines..." So I wrote a comment on that very article, particularly regarding a throwaway line about the "future" of the data center and the direction the data-center and cloud-computing market is headed. The moderator never published the comment. In essence, I raised the question: whither Tilera? And the Quanta SM-2 server based on the Tilera chip?

Aren't they exactly what the author, John Stokes, describes as a network of cores on a chip? And given the scale of Tilera's product plans and the fact that the company is concentrating not just on network gear but on actual compute clouds too, I'd say both Stokes and Walcyzk are asking the wrong questions and directing our attention in the wrong direction. This is not a PR battle but a flat-out technology battle. You cannot win it with words and white papers; it takes benchmarks, deployments, and case histories. Technical merit and superior technology will differentiate the players in the cloud-in-a-box race. That hasn't been how things worked in the past, as Intel battled AMD in the desktop consumer market. In the data center, Fear, Uncertainty and Doubt is the only weapon Intel has.

And I'll quote directly from John Stokes's article here, describing EXACTLY the kind of product Tilera has already been shipping:

“Instead of Xeon with virtualization, I could easily see a many-core Atom or ARM cluster-on-a-chip emerging as the best way to tackle batch-oriented Big Data workloads. Until then, though, it’s clear that Intel isn’t going to roll over and let ARM just take over one of the hottest emerging markets for compute power.”

The key phrase here is cluster on a chip: in essence, exactly what Tilera has strived to achieve with its TILE64-based architecture. To review from previous blog entries on this website following the announcements and timelines published by Tilera: