Category: computers

Interesting pre-announced products that may or may not ship, and may or may not have an impact on desktop/network computing

  • SeaMicro Announces SM10000 Server with 512 Atom CPUs

    From where I stand, the SM10000 looks like the kind of product that, if you could benefit from it, you’ve already been waiting for. In other words, you’ve probably been asking for something like the SM10000 for quite a while. SeaMicro is simply granting your wish.

    via SeaMicro Announces SM10000 Server with 512 Atom CPUs and Low Power Consumption – AnandTech :: Your Source for Hardware Analysis and News.

    This announcement has been making the rounds this Monday, June 14th: Wired.com, AnandTech, Slashdot, everywhere. It is a press release full court press. But it is an interesting product on paper for anyone analyzing datasets across large numbers of CPUs, whether for regressions or large scale simulations. And at its core it is virtual machines, with virtual peripherals (memory, disk, networking). I don’t know how you benchmark something like this, but it is impressive in its low power consumption and size: it takes up only 10U of a 42U rack, and it fits 512 CPUs into that 10U space.

    Imagine 324 of these plugged in and racked up

    This takes me back to the days of RLX Technologies, when blade servers were so new nobody knew what they were good for. The top-of-the-line RLX setup packed 324 CPUs into a 42U rack, and each blade had a Transmeta Crusoe processor designed to run at a lower clock speed and far more efficiently from a thermal standpoint. When managed by the RLX chassis hardware and software and paired with an F5 Networks BIG-IP load balancer, the whole thing was an elegant design. However, the advantage of using Transmeta’s CPU was lost on a lot of people, including technology journalists who bashed it as too low performance for most IT shops and data centers. Nobody had considered the total cost of ownership, including cooling and electricity. In those days, clock speed was the only measure of a server’s usefulness.

    Enter Google into the data center market, and the whole scale changes. Google didn’t care about clock speed nearly as much as lowering the total overall cost of its huge data centers. Even the technology journalists began to understand the cost savings of lowering the clock speed a few hundred megahertz and packing servers more densely into a fixed-size data center. Movements in High Performance Computing also led to large scale installations of commodity servers bound together into one massively parallel supercomputer. More space was needed for physical machines racked up in the data centers, and everyone could see the only ways to build out were to build more data centers, build bigger data centers, or pack more servers into the existing footprint. Manufacturers like Compaq got into the blade server market, along with IBM and Hewlett-Packard. Everyone engineered their own proprietary interfaces and architectures, but all of them focused on the top-of-the-line server CPUs from Intel. As a result, the heat dissipation was enormous and the densities of these blade centers were pretty low (possibly 14 CPUs in a 4U rack mount).

    Blue Gene super computer has high density motherboards
    Look at all those CPUs on one motherboard!

    IBM began to experiment with lower clocked PowerPC chips in a massively parallel supercomputer called Blue Gene. In my opinion this started to change people’s beliefs about what direction data center architectures could go. The density of the ‘drawers’ in the Blue Gene server cabinets is pretty high: a lot more CPUs, power supplies, storage and RAM in each unit than in a comparable base level commodity server from Dell or HP (previously the most common building block for massively parallel supercomputers). Given these trends it’s very promising to see what SeaMicro has done with its first product. I’m not saying this is a supercomputer in a 10U box, but there are plenty of workloads that would fit within the scope of this server’s capabilities; see the quick density comparison below. And what’s cooler is the virtual abstraction of all the hardware, from the RAM to the networking to the storage. It’s like the golden age of IBM machine partitioning and Virtual Machines, but on an Intel architecture. Depending on how quickly they can ramp up production and market their goods, SeaMicro might be a game changer or it might be a takeover target for the likes of HP or IBM.
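
    To put those density claims side by side, here is a quick back-of-the-envelope sketch in Python. It uses only the figures quoted in this post (512 CPUs in 10U for the SM10000, 324 CPUs in a 42U rack for RLX, and my rough guess of about 14 CPUs in a 4U blade chassis), so treat it as an illustration rather than a benchmark.

    ```python
    # Rough CPU-density comparison using only the numbers cited above.
    RACK_UNITS = 42  # a standard full-height rack

    systems = {
        "SeaMicro SM10000 (Atom)":     (512, 10),  # 512 CPUs in 10U
        "RLX rack (Transmeta Crusoe)": (324, 42),  # 324 CPUs in a full 42U rack
        "Typical Intel blade chassis": (14, 4),    # my rough estimate from above
    }

    for name, (cpus, units) in systems.items():
        per_u = cpus / units
        print(f"{name:30s} {per_u:5.1f} CPUs/U, ~{per_u * RACK_UNITS:6.0f} CPUs per 42U rack")
    ```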

  • AppleInsider | Inside the iPad: Apples A4 processor

    Another report, appearing in The New York Times in February, stated that Apple, Nvidia and Qualcomm were all working to develop their own ARM-based chips before noting that “it can cost these companies about $1 billion to create a smartphone chip from scratch.” Developing an SoC based on licensed ARM designs is not “creating a chip from scratch,” and does not cost $1 billion, but the article set off a flurry of reports that said Apple has spent $1 billion on the A4.

    via AppleInsider | Inside the iPad: Apples A4 processor.

    Thank you, AppleInsider, for trying to set the record straight. I doubted the veracity of the NYTimes article when I saw that $1 billion figure thrown around (it seems more like the price of an Intel chip development project, which usually is from scratch). And knowing now from the article (which links to a PA Semi historical account) that PA Semi designed a laptop version of a dual core G5 chip leads me to believe power savings is something they would be brilliant at engineering solutions for (the G5 was a heat monster, meaning its electrical power use was large). P.A. Semi set out to make the G5 power efficient enough to fit into a laptop, and they did it, but Apple had already migrated to Intel chips for its laptops.

    Intrinsity + P.A. Semiconductor + Apple = A4. Learning that Intrinsity is an ARM developer knits a nice neat picture of a team of chip designers, QA folks and validation folks who would all team up to make the A4 a resounding success. No truer mark of accomplishment can be shown for this effort than Walt Mossberg and David Pogue both reporting, in their iPad reviews yesterday, over 10 hours of run time. Kudos to Apple: you may not have made a unique chip, but you sure as hell made a well optimized one. Score, score, score.

  • Which way the wind blows: Flash Memory in the Data Center

    STEC ZeusIOPS solid state disk (SSD)
    This hard drive with a Fibre Channel interface launched the flash revolution in the datacenter

    First let’s just take a quick look backwards to see what was considered state of the art a year ago. A company called STEC was making Flash-based hard drives and selling them to big players in the enterprise storage market like IBM and NetApp. I depend solely on The Register for this information, as you can read here: STEC becalmed as Fusion-io streaks ahead

    According to The Register, STEC flooded the market, and the customers buying these Fibre Channel based Flash drives (Solid State Disk Drives – SSDs) were suddenly left with a glut of product. The gains in storage array performance followed, but supply exceeded demand and EMC is stuck with a raft of last year’s product that it hasn’t marked up and re-sold to its current customers. Which created an opening for a similar but sexier product: Fusion-io and its PCIe based Flash drive. Why sexy?

    A Fibre Channel interface has long been the accepted performance standard in the Enterprise Storage market; you need at minimum the theoretical 6GB/sec of FC interfaces to compete. But for those in the middle levels of the Enterprise who don’t own the heavy iron of giant multi-terabyte storage arrays, there was/is now an entry point through the magic of the PCIe 2.0 interface. Any given PC, whether a server or not, will have open PCIe slots into which a Fusion-io SSD card can be installed.

    Fusion-io Duo PCIe Flash cache card
    This is Fusion-io's entry into the Flash cache competition

    That lower threshold (though not necessarily a lower price) has made Fusion-io the new darling for anyone wanting to add SSD throughput to their servers and storage systems. And now everyone wants Fusion-io, not the re-branded STEC Fibre Channel SSDs everyone was buying a year ago.

    Anyone who has studied history knows that in the chain of human relations there’s always another competitor out there who wants to sit on your head. Enter LSI and Seagate with a new product for the wealthy, well-heeled purchasing agent at your local data center: LSI and Seagate take on Fusion-io with flash

    Rather than create a better/smarter Fibre Channel SSD, LSI and Seagate are assembling a card that plugs into the PCIe slot of a storage array or server to act as a high speed cache in front of the slower spinning disks. The Register refers to three form factors in the market now: RamSan, STEC and Fusion-io. Because Fusion-io seems to have moved into the market at the right time and is selling like hot cakes, LSI/Seagate are targeting that particular form factor with their SSS6200.

    LSI's PCIe Flash hard drive card
    This is LSI's entry into the Flash hard drive market

    STEC is also going to create a product with a PCIe interface, and Micron is going to design one too. LSI’s product will not ship until the end of the year. In terms of performance, the speeds being targeted are comparable between the Fusion-io Duo and the LSI SSS6200 (both using single level cell memory). So let the price war begin! Once we finally get some competition in the market, I would hope the entry level price of Fusion-io (~$35,000) finally erodes a bit. It is a premium product right now, intended to help some folks do some heavy lifting.

    My hope for the future is that we could see something comparable (though much less expensive and scaled down) available on desktop machines. I don’t care if it’s built into a spinning SATA hard drive (say as a very large, high speed cache) or some kind of card plugging into a bus on the motherboard (like Intel’s failed Turbo Memory cache). If a high speed flash cache could become part of the standard desktop PC architecture, sitting in front of monstrous single hard drives (2TB or higher nowadays), we might get faster response from our OS of choice, and possibly better optimization of reads/writes to these fast but incredibly dense and possibly more error prone HDDs. I say this after reading about the big push by Western Digital to move from smaller blocks of data to the 4K block.

    Much wailing and gnashing of teeth has accompanied WD’s recent move to address the overhead of error correction and the Cyclic Redundancy Check (CRC) on hard drives. Because 2-terabyte drives have so many 512-byte blocks, more and more time and space is taken up by error checking as data is read from and written to the drive. A larger block of 4,096 bytes instead of 512 makes the whole scheme roughly four times less wasteful and possibly more reliable, even if some space is wasted on small text files or web pages (see the sketch below for the arithmetic). I understand the implication completely, and even more so, old-timers like Steve Gibson at GRC.com understand the danger of ever larger single hard drives. The potential for catastrophic loss of data grows as more data blocks need to be audited, which can become numerically overwhelming to even the fastest CPU and SATA bus. I think I remember Steve Gibson expressing doubts as to how large hard drives could theoretically become.
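
    To make that overhead arithmetic concrete, here is a small Python sketch. The ECC byte counts (50 bytes per 512-byte sector, 100 bytes per 4K sector) are assumptions I am using purely for illustration, not Western Digital’s published figures, but they show why folding eight 512-byte sectors into one 4K sector is roughly four times less wasteful.

    ```python
    # Illustrative only: the ECC byte counts below are assumed, not vendor figures.
    SECTOR_SMALL, ECC_SMALL = 512, 50      # legacy sector + assumed ECC bytes
    SECTOR_LARGE, ECC_LARGE = 4096, 100    # Advanced Format sector + assumed ECC bytes

    def ecc_overhead(user_bytes, ecc_bytes):
        """Fraction of on-platter space spent on ECC instead of user data."""
        return ecc_bytes / (user_bytes + ecc_bytes)

    # Eight legacy sectors hold the same user data as one 4K sector.
    legacy_ecc_per_4k = 8 * ECC_SMALL   # 400 assumed ECC bytes
    af_ecc_per_4k = ECC_LARGE           # 100 assumed ECC bytes

    print(f"legacy 512-byte sectors: {ecc_overhead(SECTOR_SMALL, ECC_SMALL):.1%} ECC overhead")
    print(f"4K Advanced Format     : {ecc_overhead(SECTOR_LARGE, ECC_LARGE):.1%} ECC overhead")
    print(f"ECC bytes per 4K of user data: {legacy_ecc_per_4k} vs {af_ecc_per_4k} "
          f"(~{legacy_ecc_per_4k / af_ecc_per_4k:.0f}x reduction)")
    ```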

    Steve Gibson's SpinRite 6
    Steve Gibson's data recovery product SpinRite

    As the creator of the SpinRite data recovery utility, he knows fundamentally the limits of the Parallel ATA interface design. Despite advances in speed, error correction hasn’t changed and neither has the quality of the magnetic medium on the spinning disks. One thing that has changed is the physical size of the blocks of data: they have gotten dramatically smaller with each jump in disk capacity. The smaller the block of data, the more error correcting must be done, and the more error correcting, the more space is needed to store the error-correcting information. Gibson himself observes that something as random as a cosmic ray can flip bits within a block of data at the incredibly small scales found on a 2TB disk.

    So my hope for the future is a new look at the current state-of-the-art motherboard, chipset and I/O bus architecture. Let’s find a middle level, a safe area to store the data we’re working on, one that doesn’t spontaneously degrade and isn’t too susceptible to random errors (i.e. cosmic rays). Let the flash caches flow, let’s get better throughput, and let’s put spinning disks into the class of reliable but slower backing stores for our SSDs.

  • AppleInsider | Custom Apple A4 iPad chip estimated to be $1 billion investment

    In bypassing a traditional chip maker like Intel and creating its own custom ARM-based processor for the iPad, Apple has likely incurred an investment of about $1 billion, a new report suggests.

    via AppleInsider | Custom Apple A4 iPad chip estimated to be $1 billion investment.

    After reading the NYTimes article linked to within this article, I can only conclude that “it costs $1 billion to create a custom chip” is a very generalized statement. The exact quote from the NYTimes article’s author, Ashlee Vance, is: “Even without the direct investment of a factory, it can cost these companies about $1 billion to create a smartphone chip from scratch.”

    Given that is one third the full price of building a chip fabrication plant, why so expensive? What is the breakdown of those costs? Apple did invest money in P.A. Semiconductor to get some chip building expertise (they primarily designed chips that were fabricated at overseas contract manufacturing plants). Given that Qualcomm has created the Snapdragon CPU using similar CPU cores from ARM Holdings Inc., they must have $1 billion to throw around too? Qualcomm was once dominant in the cell phone market, licensing its CDMA technology to the likes of Verizon, but its financial success is nothing like the old days. So how does Qualcomm come up with $1 billion to develop the Snapdragon CPU for smartphones? Does that seem possible?

    Qualcomm and Apple are licensing the biggest building blocks and core intellectual property from ARM; all they need to do is place, route and verify the design. Where does the $1 billion figure come into it? Is it the engineers? Is it the masks for exposing the silicon wafers? I argue now, as I did in my first posting about the Apple A4 chip, that the chip is an adaptation of intellectual property, a license to a CPU design provided by ARM. It’s not literally created from ‘scratch’ starting with no base design or using completely new proprietary intellectual property from Apple. This is why I am confused. Maybe ‘from scratch’ means different things to different people.

  • Google Chrome bookmark sync

    I used to do this with a plug-in called Google Browser Sync on Mozilla back in the day. Since then, there’s a Firefox plug-in for Delicious that would help keep things synced up with that bookmark sharing site. But that’s not really what I wanted. I wanted Google Browser Sync, and now I finally have it again, cross platform.

    At long last, the Mac and PC versions of the Google Chrome web browser can save bookmarks to Google Docs and sync all the changes/additions/deletions to that single central file. I’m so happy I went through and did a huge house cleaning on all my accumulated bookmarks. Soon I will follow up to find out which ones are dead and get everything ship-shape once again. It’s sad when the utility of a program like Browser Sync is taken away; I assume the decision was based on arbitrary measures of popularity and success. Google’s retiring of Browser Sync gave some developers a competitive edge for a while, but I wanted Browser Sync no matter who did the final software development. And now, finally, I think I have it again.

    Why is bookmark syncing useful? The time I’ve spent finding good sources of info on the web is wasted if all I ever do is Google searches. The worst part is that every Google search is an opportunity for Google to serve me AdWords related to my search terms. What I really want is the website that has a particularly interesting article or photo gallery. Keeping bookmarks that point directly to those websites bypasses Google as the middleman. Better yet, I have a link I can share with friends who need a well vetted, curated source of info. This is how it should be, and luckily, with Chrome, I now have it.

  • Apple A4 SOC unveiled – It’s an ARM CPU and the GPU! – Bright Side Of News*

    Getting back to Apple A4, Steve Jobs incorrectly addressed Apple A4 as a CPU. We’re not sure was this to keep the mainstream press enthused, but A4 is not a CPU. Or we should say, it’s not just a CPU. Nor did PA Semi/Apple had anything to do with the creation of the CPU component.

    via Apple A4 SOC unveiled – It’s an ARM CPU and the GPU! – Bright Side Of News*.

    Apple's press release image of the A4 SoC

    Interesting info on the Apple A4 System on Chip, which is being used by the recently announced iPad tablet computer. The world of mobile, low power processors is dominated by the designs of ARM Holdings Inc., and according to this article ARM is providing the graphics processor intellectual property too. So in the commodity CPU/GPU and System on Chip (SoC) market, ARM is the only way to go: you buy the license, lay out the chip with all the core components you’ve licensed, and shop that around to a chip foundry. Samsung has a lot of expertise fabricating these made-to-order chips using the ARM designs, but apparently another competitor, Global Foundries, is shrinking its design rules (meaning lower power and higher clock speeds) and may become the foundry of choice. Unfortunately, outfits like iFixit can only figure out what chips and components go into an electronics device; they cannot reverse engineer the components inside the A4, and anyone else who did spill the beans on the A4’s exact layout and components would probably be sued by Apple. But because everyone is working from the same set of Lego blocks for the CPUs and GPUs and forming them into full Systems on a Chip, some similarities are going to occur.

    The heart of the new Apple A4 System on Chip

    One thing pointed out in this article is the broad adoption of the same clock speed for all these ARM derived SoCs: 1GHz across the board, despite differences in manufacturers and devices. The reason is that everyone is using the same ARM CPU cores, and they are designed to run optimally at that 1GHz clock rate. So the more things change (faster and faster time to market for more earth shaking designs), the more they stay the same (people adopt commodity CPU designs and converge in performance). It would take a big investment for Apple and P.A. Semiconductor to really, TRULY differentiate themselves with a unique, different and proprietary CPU of any type. They just don’t have the time, though they may have the money. So when Jobs tells you something is exclusive to Apple, that may be true for the industrial design. But for the CPU/GPU/SoC, … Don’t Believe the Hype surrounding the Apple A4.

    Also check out AppleInsider’s coverage of this same topic.

    Update: http://www.nytimes.com/2010/02/02/technology/business-computing/02chip.html

    The NYTimes weighs in on the Apple A4 chip and what it means for the iPad maintaining its competitive advantage. The NYTimes gives Samsung more credit than Apple because Samsung manufactures the chip. What they will not speculate on is ARM Holdings Inc.’s sale of licenses to its Cortex-A9 to Apple. They do hint that the Nvidia Tegra CPU is going to compete directly against Apple’s iPad and its A4. However, as Steve Jobs has pointed out more than once, “Great Products Ship.” Anyone else in the market who has licensed the Cortex-A9 from ARM had better get going: you’ve got 60 or 90 days, depending on your sales/marketing projections, to compete directly with the iPad.

  • Some people are finding Google Wave useful

    Posterous Logo

    I use google wave every single day. I start off the day by checking gmail. Then I look at a few news sites to see if anything of interest happened. Then I open google wave: because thats where my business lives. Thats how I run a complicated network of collaborators, make hundreds of decisions every day and organise the various sites that made me $14.000 in december.
    On how Google Wave surprisingly changed my life – This is so Meta.

    I’m glad some people are making use of Google Wave. After the first big spurt of interest and sending invites out to people, interest tapered off quickly. I would log in and see no activity whatsoever. No one was coming back to see what people had posted. So like everyone else, I stopped coming back too.

    Compare this also to the Facebook ebb and flow. I notice NYTimes.com occasionally slagging Facebook with an editorial in their Tech News section. Usually the slagging is conducted by someone I would classify as a pseudo technology enthusiast (the kind that doesn’t back up their files, then subsequently writes an article to complain about it). Between iPhone upgrades and write-ups of the latest free web service, they occasionally rip Facebook in order to get some controversy going.

    But as I’ve seen, Facebook has a rhythm of lulls followed by periods of intense participation. Sometimes it’s lonely; people don’t post or read for months and months. It makes me wonder what interrupts their lives long enough that they stop reading or writing posts. I would assume Google Wave might suffer the same kind of ebb and flow even when used for ‘business’ purposes.

    So the question is, does anyone besides this lone individual on Posterous use Google Wave on a daily basis for work purposes?

    Google Wave logo

  • Intel linked with HPC boost buy • The Register

    Comment With Intel sending its “Larrabee” graphics co-processor out to pasture late last year – before it even reached the market – it is natural to assume that the chip maker is looking for something to boost the performance of high performance compute clusters and the supercomputer workloads they run. Nvidia has its Tesla co-processors and its CUDA environment. Advanced Micro Devices has its FireStream co-processors and the OpenCL environment it has helped create. And Intel has been relegated to a secondary role.

    via Intel linked with HPC boost buy • The Register.

    Intel’s long term graphics accelerator project, code-named “Larrabee,” lost a lot of money to time delays, and an unfortunate side effect is that Intel is now forced to reuse the processor as a component in High Performance Computing (so-called supercomputers). The competition has been providing hooks into their CPUs and motherboards for auxiliary processors or co-processors for a number of years. AMD notably created a CPU socket with open specs that FPGAs could slide into. Field Programmable Gate Arrays are big, reconfigurable chips with all kinds of ways to rewire the circuits inside of them, so huge optimizations can be made in hardware that were previously done in machine code/assembler by the compilers for a particular CPU. Moving from a high level programming language to an optimized hardware implementation of an algorithm can speed a calculation up by several orders of magnitude (1,000 times in some examples), and AMD has had a number of wins in small niches of the High Performance Computing market.

    But not all algorithms are created equal, and not all of them lend themselves to implementation in hardware (FPGA or its cousin the ASIC). So co-processors are a very limited market for any manufacturer trying to sell into HPC, and Intel isn’t going to garner a lot of extra sales by throwing development versions of Larrabee out to HPC developers. Another strike is the dependence on the PCI Express bus for communication with the Larrabee chipset. While PCI Express is more than fast enough for graphics processing, an HPC setup would prefer a CPU socket adjacent to the general purpose CPUs; the way AMD has designed their motherboards, all sockets sit on the same board and can communicate directly with one another instead of going over the PCI Express bus. Thus, Intel loses again trying to market Larrabee to HPC. One can only hope that other secret code-named projects, like the 80-core CPU, see the light of day soon enough to make a difference, rather than suffer the opportunity costs of a very delayed launch like Larrabee’s.

  • EDS mainframe goes titsup, crashes RBS cheque system • The Register

    HP managers are reaping the harvest of their deep cost-cutting at EDS, in the form of a massive mainframe failure that crippled some very large clients, including the taxpayer-owned bank RBS.

    via EDS mainframe goes titsup, crashes RBS cheque system • The Register.

    Royal Bank of Scotland
    Royal Bank of Scotland had a big datacenter outage

    The Royal Bank of Scotland is a national bank and a big player in the European banking market. In datacenter speak, ‘five nines’ of availability is a guarantee the computer will stay up and running 99.999% of the time, which works out to about 5.26 minutes of allowed downtime PER YEAR. This Royal Bank of Scotland computer was down 12 hours, which translates to roughly 99.86% availability (see the quick calculation below). I think HP and EDS owe some people money for breaking the terms of their contract. It just proves outsourcing is not a cure-all for cost savings. You as the customer don’t know when they are going to start dropping head count to inflate the value of their stock on Wall Street. And when the economy soured, they dropped head count like you wouldn’t believe. What does that mean for outstanding contracts to provide datacenter services? It means all bets are off; you get whatever they are willing to give you. If you are employed to make and manage contracts like this for your company, be forewarned: your outsourcing company can fire everyone at the drop of a hat.
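
    Here is a quick sanity check of those availability numbers in Python, assuming the outage is measured against a full one-year window:

    ```python
    # Sanity-check the availability figures quoted above.
    MINUTES_PER_YEAR = 365.25 * 24 * 60   # ~525,960 minutes

    def allowed_downtime(availability):
        """Minutes of downtime per year permitted at a given availability."""
        return MINUTES_PER_YEAR * (1 - availability)

    def achieved_availability(downtime_minutes):
        """Availability implied by an outage, measured over one year."""
        return 1 - downtime_minutes / MINUTES_PER_YEAR

    print(f"Five nines allows   : {allowed_downtime(0.99999):.2f} minutes of downtime/year")
    print(f"A 12-hour outage is : {achieved_availability(12 * 60):.3%} availability")
    ```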

  • Acrossair on the iPhone

    It looks like iPhone OS 3.1 is going to do nothing more than open up the video feed from the camera so that you can overlay data on top of that video. In essence, the Augmented Reality app is using your iPhone’s video as a “desktop” picture and placing items on top of it. Acrossair’s iPhone app, Nearest Tube, uses the OpenGL libraries to skew and distort that data as you point the camera in different directions, thus providing a little more of a 3D perspective than something like Layar, which I have talked about previously on this blog. Chetan Damani, one of the founders of Acrossair, also points out that going forward, any company making AR type apps will need to utilize existing location information and pre-load all the data they want to display. So the nirvana of just-in-time downloads of location data to overlay on your iPhone video image is not here,… and may not be for a while. What will differentiate the software producers, though, is the relevancy and accuracy of their location information. So there will be room for competition for quite some time.

    He went on to say that it’s pretty simple to do AR applications using the new 3.1 APIs, due out in September. ” It’s a pretty straightforward API. There’s no complexity in there. All it does is it just switches on the video feed at the background. That’s the only API that’s published. All we’re doing is using that video feed at the back. It just displays the video feed as if it’s a live camera feed.

    via Augmenting Reality with the iPhone – O’Reilly Broadcast.