Image of a dismantled Seagate ST-225 hard disk. 5¼″ MFM hard disk with a stepper actuator. Technical data: Capacity: 21.4 MB, Speed: 3600 rpm, Average Seek Time: 65 ms, Heads: 4 (Photo credit: Wikipedia)
Seagate subsidiary LaCie has launched a set of external storage boxes using a 5TB Seagate hard drive – even though disk maker Seagate hasn’t officially launched a 5TB part.
There isn’t a whole lot of activity when it comes to new designs and advances in spinning magnetic hard drives these days. The capacity wars have plateaued around 4TB or so. The next big threshold to cross is either shingled recording or HAMR (which uses a laser to heat the surface just prior to a write being committed to the disk). Due to the technical advances required, and a slightly smaller field of manufacturers adopting them (there aren’t as many as there were a while ago), the speed at which higher density devices hit the market has slowed. We saw 1TB and 2TB show up quickly one after the other, but the 3TB and 4TB drives followed much more slowly, and usually they were priced at the premium end of the market. Now Seagate has stitched together a 5TB drive and LaCie is rushing it into a number of its own desktop and pro-sumer level products.
The assumption for now is that Seagate has adopted the shingled recording method (which overlaps adjacent tracks, like roof shingles, to increase the density). We’ll see how well that design decision performs over the coming months as the early adopters and fanbois who need each and every last terabyte of storage they can get for their game roms, warez and film/music collections put it through its paces.
As NAND flash is supplemented over the next few years by new technologies with improved durability and the same performance as system memory, “we’ll be able to start thinking about building systems where memory and storage are combined into one entity,” he said. “This is the megachange to computer architecture that SNIA is looking at now and preparing the industry for when these new technologies happen.”
More good news on the Ultradimm, non-volatile DIMM front: a group is forming to begin setting standards for a new form factor. To date SanDisk is the only company known to have architected and manufactured a shipping non-volatile DIMM memory product, and then only under contract to IBM for the X6 Intel-based server line. SanDisk is not shipping this, or under contract to make it for anyone else, by all reports, but that’s not keeping its competitors from getting new products into heavy sample and QA testing. We might begin seeing a rush of different products, with varying interconnects and form factors, all of which claim to plug in to a typical RAM DIMM slot on an Intel based motherboard. But as the article on the IBM Ultradimm indicates, this isn’t a simple 1:1 swap of DIMMs for Ultradimms. You need heavy lifting and revisions at the firmware/BIOS level to take advantage of the Ultradimms populating your DIMM slots on the motherboard. This is not easy, nor is it cheap, and as far as OS support goes, you may need to see if your OS of choice will also help speed the plow by doing caching, loading and storing of memory differently once it’s become “aware” of the Ultradimms on the motherboard.
Without the OS and firmware support you would be wasting your valuable money and time trying to get a real boost out of off-the-shelf Ultradimms in your own randomly chosen Intel based servers. IBM’s X6 line is just hitting the market and has been sampled by some heavy hitting real-time financial trading data centers to double-check the claims made about speed and performance. IBM has used this period to really make sure the product makes a difference worth whatever premium they plan on charging for the Ultradimm on customized orders for the X6. But knowing that further down the line a group is at least attempting to organize and set standards means this can become a competitive market for a new memory form factor, and EVERYONE may eventually be able to buy something like an Ultradimm if they need it for their data center server farm. It’s too early to tell where this will lead, but re-using the JEDEC DIMM connection interface is a good start. If Intel wanted to help accelerate this, its onboard memory controllers could also become less DRAM-specific and more generalized as a memory controller for anything plugged into the DIMM slots on the motherboard. That might prove the final step in really opening the market to a wave of Ultradimm designers and manufacturers. Keep an eye on Intel and see where its chipset architecture, and more specifically its memory controller roadmaps, lead for future support of NVDIMM or similar technologies.
Nice writeup from Anandtech regarding the press release from LSI about its new 3rd generation flash memory controllers. The 3000 series takes over from the 2200 and 1200 series that preceded it back when the era of SSDs was just beginning to dawn (remember those heady days of 32GB SSD drives?). Like the frontier days of old, things are starting to consolidate and find an equilibrium of price vs. performance. Commodity pricing rules the day, but SSDs, much less PCIe flash interfaces, are only just creeping into the high end of the market in Apple laptops and soon Apple desktops (apologies to the iMac, which has already adopted the PCIe interface for its flash drives, but the Mac Pro is still waiting in the wings).
Things continue to improve in terms of future-proofing the interfaces. Between SATA and PCIe there was little done to force a migration to one interface or the other, as each market had its own peculiarities. SATA SSDs were for the price-conscious consumer level market, and PCIe was pretty much only for the enterprise. You had to pick and choose your controller very wisely in order to maximize the return on a new device design. LSI did some heavy lifting, according to Anandtech, by refactoring and redesigning the whole controller, thus allowing a manufacturer to buy one controller and use it either way: as a SATA SSD controller or as a PCIe flash memory controller. The quoted speeds for each interface indicate this is true at the theoretical throughput end of the scale. LSI reports the PCIe throughput is not too far off the theoretical max (in the ~1.45GB/sec range). Not bad for a chip that can also be used as a SATA SSD controller at ~500MB/sec throughput. This is going to make designers, and hopefully consumers, happy as well.
On a more technical note, as written about in earlier articles mentioning the great Peak Flash memory density/price limit, LSI is fully aware of the memory architectures and the failure and error rates they accumulate over time.
Starting with this website tutorial I’m attempting to create a working config file that will allow me to perform new Windows 7 Professional installs without having to interact or click any buttons.
Seems pretty useful so far, as Sergey provides an example autounattend file that I’m using as a template for my own. I particularly like his RunOnce registry additions. They make it so much more useful than simply being an answer file for the base OS install. True, it is annoying that questions come up through successive reboots during the specialize pass on a fresh Windows 7 install. But this autounattend file does a whole lot of default presetting behind the scenes, and that’s what I want when I’m trying to create a brand new WIM image for work. I’m going to borrow those most definitely.
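To give a flavor of the idea (these are my own made-up entries, not Sergey’s actual ones): the answer file can drop values into the RunOnce key so they fire exactly once at the next logon and then get cleaned up by Windows automatically.

:: Hypothetical RunOnce entries an answer file might create during setup.
:: Windows runs each value once at the next logon, then deletes it.
reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce" /v DisableHibernate /t REG_SZ /d "powercfg -h off" /f
reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce" /v FirstBootScript /t REG_SZ /d "cmd /c C:\Setup\firstboot.cmd" /f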
I also discovered an interesting sub-section devoted to joining a new computer to a Domain. Ever heard of djoin.exe?
Very interesting stuff: you can join the computer without first having to log in to the domain controller and create a new account in the correct OU (which is what I do currently), and save a little time putting the computer on the Domain. Sweet. I’ma hafta check this out further and get the syntax down just so… Looks like there’s also a switch to ‘reuse’ an existing account, which would be really handy for computers that I rebuild and add back using the same machine name. That would save time too. Looks like it might be Win7/Server 2008 specific and may not be available widely where I work. We have not moved our Domains to Server 2008 as far as I know.
djoin /provision /domain &lt;domain to be joined&gt; /machine &lt;machine name&gt; /savefile blob.txt
You also want to be able to specify the path in AD where the computer account is going to be created. That requires knowing the full syntax of the LDAP:// path (the OU’s distinguished name) in AD.
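Putting those pieces together, the whole round trip would look roughly like this (the domain, machine name and OU below are placeholders I made up, so the syntax still needs verifying against our own domain):

:: Provision the account into a specific OU from a machine that can reach a domain controller.
:: /reuse re-purposes an existing computer account with the same name.
djoin /provision /domain corp.example.com /machine PC-OPTI960-01 /machineou "OU=Desktops,DC=corp,DC=example,DC=com" /reuse /savefile blob.txt

:: On the new computer (no domain connectivity required yet), consume the blob:
djoin /requestODJ /loadfile blob.txt /windowspath %SystemRoot% /localos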
Random thoughts just now: I could create a generic WIM, then for each Dell ‘make/model’ add a single folder containing the Windows driver CAB file for that model and append it to the original WIM. Each folder could have DPInst copied into it and run as a SynchronousCommand during the OOBE pass each time the WIM is applied with ImageX. I just need to remember which image index number to use for each model’s set of drivers. But the description field for each of those appended driver setups could be descriptive enough to make it user friendly. Or we could opt to just include the 960 drivers as a base set covering most bases, then point to the CAB files over \\fileshare\j\deviceDrivers\ and let DPInst recurse its way down the central store of drivers to do the cleanup phase. Something like the sketch below.
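Roughly what I have in mind (the paths, image names and DPInst switches here are my placeholders and would need double-checking):

:: On the reference machine, copy the Dell CAB drivers plus DPInst into C:\Drivers\Optiplex960,
:: then append that volume as a new image index in the generic WIM:
imagex /append C: D:\images\Win7-generic.wim "Win7 Pro + Optiplex 960 drivers" /verify

:: During the OOBE pass a SynchronousCommand in the answer file could run DPInst silently
:: against that folder (or against the central \\fileshare\j\deviceDrivers\ store):
C:\Drivers\Optiplex960\DPInst.exe /S /SA /PATH C:\Drivers\Optiplex960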
OK, got a good autounattend.xml formulated. Should auto-activate and register the license key, no problem-o. Can’t wait to try it out tomorrow when I get home on the test computer I’ve got set up. It’s an Optiplex 960 and I’m going to persist all the device drivers after I run sysprep /generalize /shutdown /oobe and capture the WIM file. Got a ton of customizing yet to do on the Admin profile before it gets copied to the Default Profile in the sysprep step. So maybe this time round I’ll get it just right.
One big thing I have to remember is to set IE 8 to pass all logon information for the Trusted Sites zone within the security settings. If I get that embedded into the thing once and for all, I’ll have a halfway decent image that mirrors what we’re using now in Ghost. The next step, once this initial setup from a Win7 setup disk is perfected, is to tweak the Administrator’s profile and set CopyProfile=true when I run Sysprep /generalize /oobe /unattend:unattend.xml (that answer file is another attempt to filter which settings get kept and what is auto-run before the final OOBE phase of Windows setup). That will be the last step in the process.
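For my own reference, the full sequence I’m aiming for looks something like this (the file names and paths are mine, and the CopyProfile/PersistAllDeviceInstalls settings live in the answer file, not on the command line):

:: On the reference Optiplex 960, after customizing the Administrator profile:
C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\System32\sysprep\unattend.xml

:: Boot into WinPE and capture the generalized volume with ImageX:
imagex /capture C: D:\images\Win7-Opti960.wim "Win7 Pro - Optiplex 960" /compress fast /verify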
I am now at a point in my daily work where I can begin posting to my blog once again. It’s not so much that I’m catching up, but more like I don’t care as much about falling behind. Look forward to more Desktop related posts, as that is now my full-time responsibility where I work.
AMD, and NVIDIA before it, has been trying to convince us of the usefulness of its GPUs for general purpose applications for years now. For a while it seemed as if video transcoding would be the killer application for GPUs, that was until Intel’s Quick Sync showed up last year.
There’s a lot to talk about when it comes to accelerated video transcoding, really. Not the least of which is HandBrake’s general dominance for anyone doing small scale size reductions of their DVD collections for transport on mobile devices. We owe it all to the open source x264 encoder and all the programmers who have contributed to it over the years, standing on one another’s shoulders and allowing us to effortlessly encode or transcode gigabytes of video down to manageable sizes. But Intel has attempted to rock the boat by inserting itself into the fray, tooling its QuickSync technology to accelerate the compression and decompression of video frames. However, it is a proprietary path pursued by a few small scale software vendors. And it prompts the question: when is open source going to benefit from the proprietary Intel QuickSync technology? Maybe it’s going to take a long time. Maybe it won’t happen at all. Lucky for the HandBrake users in the audience, some attempt is being made now to re-engineer x264 to take advantage of any OpenCL compliant hardware on a given computer.
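For context, this is the kind of all-CPU x264 pipeline the OpenCL work is hoping to accelerate. A rough HandBrake command-line example with made-up file names (exact flags vary by HandBrake version):

:: A typical software (x264) transcode using the HandBrake command-line interface.
:: Everything here runs on the CPU cores; no GPU or QuickSync involvement at all.
HandBrakeCLI -i D:\rips\my_movie -o D:\converted\my_movie.m4v -e x264 -q 20 -E faac -B 160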
Paul Otellini, CEO of Intel (Photo credit: Wikipedia)
During Intel’s annual investor day on Thursday, CEO Paul Otellini outlined the company’s plan to leverage its multi-billion-dollar chip fabrication plants, thousands of developers and industry sway to catch up in the lucrative mobile device sector, reports Forbes.
But what you are seeing is a form of Fear, Uncertainty and Doubt (FUD) being spread about to sow the seeds of mobile Intel processor sales. The doubt is not as obvious as questioning the performance of ARM chips, or the ability of manufacturers like Samsung to meet their volume targets and reject rates for each new mobile chip. No, it’s more subtle than that, and only noticeable to people who know details like what design rule Intel is currently using versus the one used by Samsung or TSMC (Taiwan Semiconductor Manufacturing Corp.). Intel is just now releasing its next-gen 22nm chips while companies like Samsung are still trying to recoup their investment in 45nm and 32nm production lines. Apple is just now beginning to sample some 32nm chips from Samsung in iPad 2 and Apple TV products; its current flagship iPad and iPhone models both use a 45nm chip produced by Samsung. Intel is trying to say that the older generation technology, while good, doesn’t have the weight of Intel’s massive investment in next generation chip technology behind it. The new chips will be smaller, more energy efficient and less expensive: all the things needed to make a higher profit on the consumer devices using them. However, Intel doesn’t do ARM chips; it has Atom, and that is the one thing that has hampered any big design wins in cellphone or tablet designs to date. At any given design rule, ARM chips almost always use less power than a comparably sized Atom chip from Intel. So whether it’s really an attempt to spread FUD can easily be debated one way or another. But the message is clear: Intel is trying to fight back against ARM. Why? Let’s turn back the clock to March of this year and a previous article also appearing in Apple Insider:
This article is referenced in the original article quoted at the top of the page. And it points out why Intel is trying to get Apple to take notice of its own mobile chip commitments. Apple designs its own chips and contracts the manufacturing out to a foundry. To date Samsung has been the sole source of the A-series processors used in iPhone/iPod/iPad devices, while Apple is trying to get TSMC up to speed as a second source. Meanwhile sales of the Apple devices continue to grow handsomely in spite of these supply limits. More important to Intel is that blistering growth in spite of being on older foundry technology and design rules. Intel has a technological and investment advantage over Samsung now. It does not, however, have a chip that is BETTER than Apple’s in-house designed ARM chip. That’s why the underlying message for Intel is that it has to make its Atom chip so much better than an A4, A5 or A5X at ANY design rule that Apple cannot ignore Intel’s superior design and manufacturing capability. Apple will still use Intel chips, but not in its flagship products until Intel achieves that much greater level of technical capability and sophistication in its mobile microprocessors.
Intel is planning a two-pronged attack on the smartphone and tablet markets, with dual Atom lines going down to 14 nanometers and Android providing the special sauce to spur sales.
Lastly, Iain Thomson from The Register weighs in, looking at what the underlying message from Intel really is. It’s all about the future of microprocessors for the consumer market. However, the emphasis in this article is that Android OS devices, whether they be phones or tablets or netbooks, will be the way to compete AGAINST Apple. But again it’s not Apple as such; it’s the microprocessor Apple is using in its best selling devices that scares Intel the most. Intel has since its inception been geared towards the ‘mainstream’ market, selling into enterprises and the consumer space for years. It has milked the desktop PC revolution that it more or less helped create, starting with its forays into integrated microprocessor chips and chipsets. It reminds me a little of the old steel plants that existed in the U.S. during the 1970s, as Japan was building NEW steel plants that used a much more energy efficient design and a steel making technology that created a higher quality product. Less expensive, higher quality steel was only possible by building brand new steel plants, but the old-line U.S. plants couldn’t justify the expense and so just wrapped up and shut down operations all over the place. Intel, while it is able to make that type of investment in newer technology, is still not able to create the energy saving mobile processor that will outperform an ARM core CPU.
Profile shown on Thefacebook in 2005 (Photo credit: Wikipedia)
Codenamed “Knox,” Facebook’s storage prototype holds 30 hard drives in two separate trays, and it fits into a nearly 8-foot-tall data center rack, also designed by Facebook. The trick is that even if Knox sits at the top of the rack — above your head — you can easily add and remove drives. You can slide each tray out of the rack, and then, as if it were a laptop display, you can rotate the tray downwards, so that you’re staring straight into those 15 drives.
Nice article about Facebook’s own data center design and engineering efforts. I think their approach is going to advance the state of the art way more than Apple’s, Google’s or Amazon’s protected and secretive data center efforts. Although those companies have money and resources to plow into custom engineered bits for their data centers, Facebook can at least show off what it’s learned in the time it has scaled up to a huge number of daily users. Not the least of those lessons is expressed by their hard drive rack design, a tool-less masterpiece.
This article emphasizes the physical aspects of the racks in which the hard drives are kept. It’s a tool-less design not unlike what I talked about in this article from a month ago. HP has adopted a tool-less design for its all-in-one (AIO) engineering workstation; see Introducing the HP Z1 Workstation. The video link demonstrates the idea of a tool-less design for what is arguably not the easiest device to design without the use of proprietary connectors, fasteners, etc. I use my personal experience of attempting to upgrade my 27″ iMac as the foil for what is presented in the HP promo video. If Apple adopted a tool-less design for its iMacs, there’s no telling what kind of aftermarket might spring up for the hobbyist or even the casually interested Mac owner.
I don’t know how much of Facebook’s decision-making around its data center designs is driven by the tool-less methodology. But I can honestly say that any large outfit like Facebook or HP attempting to go tool-less in some way is a step in the right direction. Companies like O’Reilly’s Make: magazine and iFixit.org are readily providing a path for anyone willing to put in the work to learn how to fix the things they own. Also throw into that mix less technology-focused, more home-maintenance style outfits like Repair Clinic; while not as sexy technologically, I can vouch for their ability to teach me how to fix a fan in my fridge.
Borrowing the phrase, “If you can’t fix it, you don’t own it,” let me say I wholeheartedly agree. And also borrowing from the old Apple commercial: here’s to the crazy ones, because they change things. They have no respect for the status quo. So let’s stop throwing away those devices, appliances and automobiles, and let’s start first by fixing some things.
NoSQL database supplier Couchbase says it is tweaking its key-value storage server to hook into Fusion-io’s PCIe flash ioMemory products – caching the hottest data in RAM and storing lukewarm info in flash. Couchbase will use the ioMemory SDK to bypass the host operating system’s IO subsystems and buffers to drill straight into the flash cache.
Can you hear it? It’s starting to happen. Can you feel it? The biggest single meme of the last two years, Big Data/NoSQL, is mashing up with PCIe SSDs and in-memory databases. What does it mean? One can only guess, but the performance gains to be had from using a product like Couchbase to overcome the limits of a traditional tables-and-rows SQL database will be amplified when optimized and paired up with PCIe SSD data stores. I’m imagining something like a 10X boost in data reads/writes on the Couchbase back end, and something more like realtime performance from what might previously have been treated like a data mart/data warehouse. If the move to use the ioMemory SDK and directFS technology with Couchbase is successful, you are going to see some interesting benchmarks and white papers about the performance gains.
What is Violin Memory Inc. doing in this market segment of tiered database caches? Violin is teaming with SAP to create a tiered cache for the HANA in-memory database from SAP. The SSD SAN array provided by Violin could be multi-tasked to do other duties (providing a cache to any machine on the SAN network). However, this product most likely would be a dedicated caching store to speed up all operations of a RAM based HANA installation, speeding up online transaction processing and parallel queries on realtime data. No doubt SAP users could stand to gain a lot if they are already invested heavily in the SAP universe of products. But for the more enterprising, entrepreneurial types, I think Fusion-io and Couchbase could help get a legacy-free group of developers up and running with equal performance and scale. Whichever one you pick is likely to do the job once it’s been purchased, installed and is up and running in a QA environment.
Similarly disappointing for everyone who isn’t Intel, it’s been more than a year after Sandy Bridge’s launch and none of the GPU vendors have been able to put forth a better solution than Quick Sync. If you’re constantly transcoding movies to get them onto your smartphone or tablet, you need Ivy Bridge. In less than 7 minutes, and with no impact to CPU usage, I was able to transcode a complete 130 minute 1080p video to an iPad friendly format—that’s over 15x real time.
QuickSync, for anyone who doesn’t follow Intel’s own technology white papers and CPU releases, is a special feature of Sandy Bridge era Intel CPUs. Its roots at Intel go back as far as the Clarkdale series with embedded graphics (the first round of the 32nm design rule), which could already speed up the decoding of video streams saved in a number of popular formats: VC-1, H.264, MP4, etc. Now it’s marketed to anyone trying to speed up the transcoding of video from one format to another. The first Sandy Bridge CPUs using the hardware encoding portion of QuickSync showed incredible speeds compared to the GPU-accelerated encoders of that era. However, things have been kicked up a further notch in the embedded graphics of the Intel Ivy Bridge series CPUs.
In the quote at the beginning of this article, I included a summary from the Anandtech review of the Intel Core i7 3770 which gives a better sense of the magnitude of the improvement. The full 130 minute 1080p movie was converted at over 15 times real time, meaning for every minute of video coming off the disk, QuickSync is able to transcode it in about 4 seconds! That is major progress for anyone who has followed this niche of desktop computing. Having spent time capturing, editing and exporting video, I will admit transcoding between formats is a lengthy process that uses up a lot of CPU resources. Offloading all that burden to the embedded graphics controller totally changes the traditional experience of slowing the computer to a crawl and having to walk away and let it work.
Now transcoding is trivial; it costs nothing in terms of CPU load. Any time it can run faster than realtime means you don’t have to walk away from your computer (or at least not for very long), and at 10X faster than real time that’s doubly true. Now we are fully at 15X realtime for a full length movie. The time spent is so short you wouldn’t ever have a second thought about “Will this transcode slow down the computer?” It won’t; in fact, you can continue doing all your other work, be productive, have fun and continue on your way just as if you hadn’t asked your computer to do the most complicated, time consuming chore that (up until now) you could possibly ask it to do.
Knowing this application of the embedded graphics is so useful for desktop computers makes me wonder about scientific computing. What could Intel provide in terms of performance increases for simulations and computation in a super-computer cluster? Seeing how hybrid super computers using nVidia Tesla GPU co-processors mixed with Intel CPUs have slowly marched up the list of the Top 500 Supercomputers makes me think Intel could leverage QuickSync further… much further. Unfortunately this performance boost is solely dependent on a few vendors of proprietary transcoding software. Open source developers do not have an opening into the QuickSync tech that would let them write a library to redirect a video stream into the QuickSync acceleration pipeline. When somebody does accomplish that feat, it may not be long before you see some Linux compute clusters attempt to use QuickSync as an embedded algorithm accelerator too.
Timeline of Intel processor codenames including released, future and canceled processors. (Photo credit: Wikipedia)