Carpet Bomberz Inc.

Focusing on desktop and data center news and analysis

Archive for the ‘computers’ Category

DDR4 Heir-Apparent Makes Progress | EE Times


The first DDR4 memory module, manufactured by Samsung and announced in January 2011. (Photo credit: Wikipedia)

The current paradigm has become increasingly complex, said Black, and HMC is a significant shift. It uses a vertical conduit called through-silicon via (TSV) that electrically connects a stack of individual chips to combine high-performance logic with DRAM die. Essentially, the memory modules are structured like a cube instead of being placed flat on a motherboard. This allows the technology to deliver 15 times the performance of DDR3 at only 30% of the power consumption.

via DDR4 Heir-Apparent Makes Progress | EE Times.

Even though DDR4 memory modules have only just started to appear in quantity, people are resistant to change. And the need for speed, whether it's SSDs stymied by SATA throughput or CPUs married to DDR4 RAM modules, is still pretty constant. But many manufacturers and analysts wonder aloud, "isn't this speed good enough?" That is true to an extent: the current OSes and chipset/motherboard manufacturers are perfectly happy cranking out product supporting the current state of the art. But no one wants to be the first to keep pushing the ball of compute speed down the field. At least this industry group is attempting to get a plan in place for the generation that follows DDR4. With any luck the spec will continue to evolve and sampled products will be sent 'round for everyone to review.

Given the changes and advances in storage and CPUs (PCIe SSDs and 15-core Xeons), eventually a wall will be hit in compute per watt or raw I/O. Desktops will eventually benefit from any speed increases, but it will take time, and we won't get there on 10% improvements with each generation of hardware. Prices will need to come down before any of the mainstream consumer goods manufacturers adopt these technologies. But as previous articles have stated, the "time to idle" measurement (which laptop and mobile device makers strive to optimize) might be reason enough for tablet or laptop manufacturers to push the state of the art and adopt these technologies faster than desktops.


Written by Eric Likness

March 27, 2014 at 3:00 pm

AnandTech | Testing SATA Express And Why We Need Faster SSDs


PCIe and PCI slots compared (Photo credit: Wikipedia)

Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn’t 100% efficient and based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link, but remember that SATA 6Gbps isn’t 100% either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based but we should start to see some PCIe 3.0 drives next year. We don’t have efficiency numbers for 3.0 yet but I would expect to see nearly twice the bandwidth of 2.0, making +1GB/s a norm.

via AnandTech | Testing SATA Express And Why We Need Faster SSDs.

As I've watched the SSD market slowly grow and bloom, it does seem as though the rate at which big changes occur has slowed. The SATA controllers on the drives themselves were kicked up a notch as the transition from 3Gbps to 6Gbps SATA gave us consistent ~500MB/sec read/write speeds. And things have stayed there ever since due to the inherent limit of SATA 6Gbps. I had been watching developments in PCIe-based SSDs very closely, but prices were always artificially high because the market for those devices was the data center. Proof positive: Fusion-io catered mostly to two big purchasers of its product, Facebook and Apple, and its prices kept it firmly at the enterprise level, around $15K for one PCIe slot device (at any size/density of storage).
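Just to sanity-check the figures quoted above (my own back-of-the-envelope arithmetic, not AnandTech's): PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so each lane carries 500MB/s of payload and an x2 link tops out at 1000MB/s before protocol overhead. A quick sketch in C:

/* Back-of-the-envelope PCIe 2.0 x2 vs SATA 6Gbps throughput, using the
 * efficiency figures quoted in the AnandTech excerpt above. */
#include <stdio.h>

int main(void) {
    double gts_per_lane = 5.0;        /* PCIe 2.0: 5 gigatransfers/s per lane */
    double encoding     = 8.0 / 10.0; /* 8b/10b line coding                   */
    double lanes        = 2.0;
    double raw_mb_s     = gts_per_lane * 1000.0 / 8.0 * encoding * lanes; /* 1000 MB/s */

    double pcie_efficiency = 0.78;    /* ~78% observed protocol efficiency    */
    double sata_real_max   = 515.0;   /* typical real-world SATA 6Gbps ceiling */

    double pcie_real = raw_mb_s * pcie_efficiency;  /* ~780 MB/s */
    printf("PCIe 2.0 x2: %.0f MB/s theoretical, ~%.0f MB/s real-world\n", raw_mb_s, pcie_real);
    printf("SATA 6Gbps:  ~%.0f MB/s real-world\n", sata_real_max);
    return 0;
}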

Apple has come to the rescue in every sense of the word by adopting PCIe SSDs as the base-level storage for its portable computers. Starting in Summer 2013 Apple began shipping MacBook Air and then MacBook Pro laptops with PCIe SSDs. The last step was to fully adopt PCIe flash in the desktop Mac Pro (which has been slow to hit the market). The performance of the PCIe SSD in the Mac Pro is the highest of any consumer-level product shipping today. As the Mac gains market share among all computers being shipped, Mac buyers are gaining more speed from their SSDs as well.

So what further plans are in the works for the REST of the industry? SATA Express seems to be the way forward for the 90% of the market still buying Windows PCs, and it's a new standard being put forth by the SATA-IO standards committee. With any luck the enthusiast motherboard manufacturers will adopt it as fast as it passes through the committees, and we'll see an AnandTech or Tom's Hardware review doing a real benchmark and analysis of how well it matches up against the previous generation of hardware.


Written by Eric Likness

March 20, 2014 at 3:00 pm

SanDisk Crams 128GB on microSD Card: A World First


A 512 MB Kingston microSD card next to a Patriot SD adapter (left) and miniSD adapter (middle). (Photo credit: Wikipedia)

This week during Mobile World Congress 2014, SanDisk introduced the world’s highest capacity microSDXC memory card, weighing a hefty 128 GB. That’s a huge leap in storage compared to the 128 MB microSD card launched 10 years ago.

via SanDisk Crams 128GB on microSD Card: A World First.

Amazing to think how small the form factor and how large the storage size has gotten with microSD-format memory cards. I remember the introduction of SDXC and the jump from 32GB to 64GB full-size SD cards. It didn't take long after that before SDXC shrunk down to the microSD format. Given the options to expand the memory on certain devices (notably Apple is absent from this group), a card this size is going to allow a lot longer timeline for the storage of pictures, music and video on our handheld devices. Prior to this, you would have needed a much larger M.2 or mSATA storage card to achieve this level of capacity, and a tablet or a netbook to plug those larger cards into.

Now you can have 128GB at your disposal just by dropping $200 at Amazon. Once you've installed it in your Samsung Galaxy you've got what amounts to a complete upgrade to a much more expensive phone (especially if that phone was an iPhone). I also think an SDXC microSD card would lend itself to moving a large amount of data in a device like one of these hollowed-out nickels: http://www.amazon.com/2gb-MicroSD-Bundle-Mint-Nickel/dp/B0036VLT28

My interest in this would be taking a cell phone overseas and going through U.S. Customs and Immigration, where it's been shown in the past they will hold onto devices for further screening. If I knew I could keep 128GB of storage hidden in a metal coin that passed through the baggage X-ray without issue, I would feel a greater sense of security. A card this size holds practically as much as the hard drives in my home computer and work laptops. It's really a fundamental change in the portability of a large quantity of personal data outside the series of tubes called the Interwebs. Knowing that stash could be kept away from prying eyes or the casual security of hosting providers would certainly give me more peace of mind.


Written by Eric Likness

March 10, 2014 at 3:00 pm

Posted in computers, flash memory, mobile, SSD


AMD Launches First ARM-based Server CPU | EE Times


Image representing AMD (via CrunchBase)

In addition, AMD is planning to contribute to the Open Compute Project with a new micro-server design that utilizes the Opteron A-series, along with other architecture specifications for motherboards that Facebook helped developed called “Group Hug,” an agnostic server board design that can support traditional x86 processors, as well as ARM chips.

via AMD Launches First ARM-based Server CPU | EE Times.

Kudos to Facebook for continuing to support the Open Compute Project, which it spearheaded some years back to encourage more widespread expertise and knowledge of large-scale data centers. This new charge is to allow a pick-and-choose, best-of-breed kind of design whereby a CPU is not a fixed quantity but can be chosen or changed like a hard drive or RAM module, with the motherboard firmware remaining more or less consistent regardless of the CPU chosen. This would allow mass customization based solely on the best CPU for a given job (HTTP, DNS, compute, storage, etc.). Spare capacity might be allowed to erode a little, so that a general-purpose CPU could be scheduled somewhat more aggressively while some of its former, less efficient services are migrated to more specialized mobile-class CPUs on another cluster, each CPU handling the set of protocols and services it inherently does best. This flies further in the face of always choosing general-purpose CPUs and letting software do most of the heavy lifting once the programming is completed.


Written by Eric Likness

March 6, 2014 at 3:00 pm

10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times


OpenCL logo (Photo credit: Wikipedia)

OpenCL is a breakthrough precisely because it enables developers to accelerate the real-time execution of their algorithms quickly and easily — particularly those that lend themselves to the considerable parallel processing capabilities of FPGAs (which yield superior compute densities and far better performance/Watt than CPU- and GPU-based solutions)

via 10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times.

There's still a lot of untapped energy available in the OpenCL programming tools. Apple is still the single largest manufacturer that has adopted OpenCL across a large number of its products (OS and application software). And I know from reading about supercomputing on GPUs that some large-scale hybrid CPU/GPU computers have ranked highly worldwide (the Chinese Tianhe being the first and biggest example). This article from EE Times encourages anyone with a background in C programming to give it a shot and see which algorithms could stand to be accelerated using the resources already on the motherboard. But being EE Times, they are also touting the benefits of adding FPGAs to the mix.
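For anyone wanting that first taste, here's a minimal sketch of what an OpenCL program looks like: a trivial vector-add kernel plus the host-side boilerplate that compiles and launches it. The kernel and variable names are just my illustration (nothing from the EE Times piece), and error checking is stripped for brevity.

/* Minimal OpenCL vector-add sketch (illustrative names, error checks omitted). */
#include <stdio.h>
#include <CL/cl.h>   /* on Mac OS X the header is <OpenCL/opencl.h> */

static const char *kSrc =
    "__kernel void vec_add(__global const float *a,\n"
    "                      __global const float *b,\n"
    "                      __global float *c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void) {
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Copy the input arrays to the device, leave room for the result. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(a), a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(b), b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);

    /* Build the kernel from source at runtime, for whatever device was found. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vec_add", NULL);

    clSetKernelArg(k, 0, sizeof(da), &da);
    clSetKernelArg(k, 1, sizeof(db), &db);
    clSetKernelArg(k, 2, sizeof(dc), &dc);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

    printf("c[10] = %f (expect 30.0)\n", c[10]);
    return 0;
}

The same host code runs the kernel unchanged whether the device behind it is a CPU, a GPU, or an FPGA whose vendor ships an OpenCL compiler, which is exactly the portability argument being made here.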

To date the low-hanging fruit for desktop PC makers and their peripheral designers has been to reuse the GPU as a massively parallel co-processor where it makes sense. But as the EE Times writer emphasizes, FPGAs can be equal citizens too and might provide even more flexible acceleration. Interest in the FPGA as a co-processor for desktop and enterprise data center motherboards was brought to the fore by AMD back in 2006 with its Torrenza initiative and open CPU socket. The hope back then was that giving a secondary specialty processor (at the time, an FPGA) a seat on the motherboard might open a market no one had addressed up to that point. So depending on your needs and what extra processors are available on your motherboard, OpenCL might be generic enough going forward to get a boost from ALL the available co-processors.

Whether or not we see benefits on the consumer-level desktop depends heavily on OS-level support for OpenCL. To date the biggest adopter of OpenCL has been Apple, which needed an OS-level acceleration API for video-intensive apps like editing suites; eventually Adobe recompiled some of its Creative Suite apps to take advantage of OpenCL on Mac OS X. On the PC side, Microsoft has always had DirectX as its API for accelerating multimedia apps (for playback and editing) and is less motivated to incorporate OpenCL at the OS level. But that's not to say a third-party developer who saw a benefit to OpenCL over DirectX couldn't create their own plumbing and libraries and ship a runtime package that used OpenCL to support their apps, or license it to anyone who wanted it as part of a larger installer (say for a game or a multimedia authoring suite).

For the data center this makes way more sense than for the desktop, as DirectX isn't seen as a scientific computing API or a means of using a GPU as a numeric accelerator. In this context, OpenCL might be a nice, open, easy-to-adopt library for people working on compute farms with massive numbers of both general-purpose CPUs and GPUs handing off parts of a calculation to one another over the PCI bus or across CPU sockets on a motherboard. Everyone's needs are going to vary, widely in some cases, but OpenCL could make that variation easier to address by providing a common library that can touch all the co-processors available when a computation needs to be sped up. So keep an eye on OpenCL as a competitor to any GPGPU-style API and library put out by nVidia, AMD or Intel. OpenCL might help people bridge the differences between those manufacturers too.
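As a rough idea of what "touching all the co-processors" looks like from the programmer's side, here is a small sketch (again my own illustration, not tied to any vendor's toolkit) that walks every OpenCL platform and device exposed on a machine; on a hybrid box that enumeration would pick up the CPUs, the GPUs and any accelerator boards whose vendors ship an OpenCL driver.

/* Enumerate every OpenCL platform and device visible on this machine. */
#include <stdio.h>
#include <CL/cl.h>   /* <OpenCL/opencl.h> on Mac OS X */

int main(void) {
    cl_platform_id plats[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, plats, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        char pname[256] = {0};
        clGetPlatformInfo(plats[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);
        printf("Platform %u: %s\n", p, pname);

        cl_device_id devs[16];
        cl_uint ndev = 0;
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 16, devs, &ndev);

        for (cl_uint d = 0; d < ndev; d++) {
            char dname[256] = {0};
            cl_device_type type = 0;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
            printf("  Device %u: %s (%s)\n", d, dname,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                   (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "accelerator/other");
        }
    }
    return 0;
}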


Written by Eric Likness

March 3, 2014 at 3:00 pm

Posted in computers, fpga, gpu

Tagged with , ,

Seagate's LaCie touts a 25TB (not a typo) box o' disks for your DESK • The Register


A dismantled Seagate ST-225 hard disk: 5¼″ MFM drive with a stepper actuator; 21.4 MB capacity, 3600 rpm, 65 ms average seek time, 4 heads. (Photo credit: Wikipedia)

Seagate subsidiary LaCie has launched a set of external storage boxes using a 5TB Seagate hard drive – even though disk maker Seagate hasn’t officially launched a 5TB part.

via Seagate's LaCie touts a 25TB (not a typo) box o' disks for your DESK • The Register.

There isn't a whole lot of activity in new designs and advances in spinning magnetic hard drives these days. The capacity wars have plateaued around 4TB or so. The next big threshold to cross is either shingled recording or HAMR (which uses a laser to heat the surface just prior to a write being committed to the disk). Given the technical advances required and a shrinking field of manufacturers (there aren't as many as there were a while ago), the pace at which higher-density drives hit the market has slowed. We saw 1TB and 2TB show up quickly one after the other, but the 3TB and 4TB drives followed much more slowly, and usually at the premium end of the market. Now Seagate has stitched together a 5TB drive and LaCie is rushing it into a number of its own desktop and prosumer products.

The assumption for now is that Seagate has adopted shingled magnetic recording, which overlaps adjacent tracks like roof shingles to increase density. We'll see how well that design decision performs over the coming months as the early adopters and fanbois who need every last terabyte of storage for their game ROMs, warez and film/music collections put it through its paces.


Written by Eric Likness

February 27, 2014 at 3:00 pm

Group Forms to Drive NVDIMM Adoption | EE Times


Flash memory (Photo credit: Wikipedia)

As NAND flash is supplemented over the next few years by new technologies with improved durability and the same performance as system memory, “we’ll be able to start thinking about building systems where memory and storage are combined into one entity,” he said. “This is the megachange to computer architecture that SNIA is looking at now and preparing the industry for when these new technologies happen.”

via Group Forms to Drive NVDIMM Adoption | EE Times.

More good news on the non-volatile DIMM front: a group is forming to begin setting standards for the new form factor. To date SanDisk is the only company known to have architected and manufactured a shipping non-volatile DIMM product (the ULLtraDIMM), and then only under contract to IBM for the X6 Intel-based server line. By all reports SanDisk is not shipping this or under contract to make it for anyone else, but that's not keeping its competitors from getting products into heavy sampling and QA testing. We might begin seeing a rush of different products, with varying interconnects and form factors, all of which claim to plug into a typical RAM DIMM slot on an Intel-based motherboard. But as the article on the IBM ULLtraDIMM indicates, this isn't a simple 1:1 swap of DRAM DIMMs for flash DIMMs. Heavy lifting and revisions are needed at the firmware/BIOS level to take advantage of the modules populating your DIMM slots. That is neither easy nor cheap, and as far as OS support goes, you may need to see whether your OS of choice will also speed the plow by caching, loading and storing memory differently once it has become "aware" of the non-volatile DIMMs on the motherboard.

Without the OS and firmware support you would be wasting valuable money and time trying to get a real boost from off-the-shelf modules in your own randomly chosen Intel-based servers. IBM's X6 line is just hitting the market and has been sampled by some heavy-hitting real-time financial trading data centers to double-check the claims made about speed and performance. IBM has used this period to make sure the product delivers a difference worth whatever premium it plans to charge for the ULLtraDIMM on customized X6 orders. But knowing a group is at least attempting to organize and set standards further down the line means this can become a competitive market for a new memory form factor, and EVERYONE may eventually be able to buy something like it for their data center server farm. It's too early to tell where this will lead, but re-using the JEDEC DIMM connection interface is a good start. If Intel wanted to help accelerate this, its onboard memory controllers could become less DRAM-specific and more generalized, handling anything plugged into the DIMM slots on the motherboard. That might prove the final step in really opening the market to a wave of NVDIMM designers and manufacturers. Keep an eye on Intel and see where its chipset architecture, and more specifically its memory controller road map, leads for future support of NVDIMM or similar technologies.

 


Written by Eric Likness

February 20, 2014 at 3:00 pm

Anandtech – New LSI series of Flash Memory Controllers


FPU LSI R3010 (Photo credit: Wikipedia)

May the SandForce be with you

Nice writeup from AnandTech regarding the press release from LSI about its new third-generation flash memory controllers. The 3000 series takes over from the 2200 and 1200 series that preceded it, which launched as the era of SSDs was just beginning to dawn (remember those heady days of 32GB SSD drives?). Like the frontier days of old, things are starting to consolidate and find an equilibrium of price vs. performance. Commodity pricing rules the day, but SSDs, much less PCIe flash interfaces, are only now creeping into the high end of the market in Apple laptops and soon Apple desktops (apologies to the iMac, which has already adopted the PCIe interface for its flash drives, while the Mac Pro is still waiting in the wings).

Things continue to improve in terms of future-proofing the interfaces. Between SATA and PCIe, little was done to force a migration to one or the other, as each market had its own peculiarities: SATA SSDs were for the price-conscious consumer market, and PCIe was pretty much only for the enterprise. You had to pick and choose your controller very wisely to maximize the return on a new device design. According to AnandTech, LSI did some heavy lifting by redesigning the whole controller so a manufacturer can buy one part and use it either way, as a SATA SSD controller or as a PCIe flash controller. The quoted speeds of each interface bear this out at the theoretical-throughput end of the scale: LSI reports the PCIe throughput is not too far off the theoretical max (in the ~1.45GB/sec range). Not bad for a chip that can also be used as a SATA SSD controller at ~500MB/sec. This is going to make designers, and hopefully consumers, happy as well.

On a more technical note, as written about in earlier articles mentioning the great peak-flash memory density/price limit, LSI is fully aware of current NAND architectures and the failure and error rates they accumulate over time.

Written by Eric Likness

December 5, 2013 at 3:00 pm

Attempting to create an autounattend.xml file for work


Image representing Windows (via CrunchBase)

 

Starting with this website tutorial, I'm attempting to create a working config file that will allow me to perform new Windows 7 Professional installs without having to interact or click any buttons.

 

http://sergeyv.com/blog/archive/2009/12/17/unattended-install-of-windows-7.aspx

 

Seems pretty useful so far, as Sergey provides an example autounattend file that I'm using as a template for my own. I particularly like his RunOnce registry additions; they make it so much more useful than simply being an answer file for the base OS install. True, it is annoying that questions come up through successive reboots during the specialize pass on a fresh Windows 7 install. But this autounattend file does a whole lot of default presetting behind the scenes, and that's what I want when I'm trying to create a brand new WIM image for work. I'm going to borrow those most definitely.

 

I also discovered an interesting sub-section devoted to joining a new computer to a Domain. Ever heard of djoin.exe?

 

http://technet.microsoft.com/en-us/library/Dd392267.aspx

 

Very interesting stuff: you can join the computer to the domain without first having to log in to the domain controller and create a new account in the correct OU (which is what I do currently), saving a little time putting the computer on the domain. Sweet. I'ma hafta check this out further and get the syntax down just so... Looks like there's also a switch to 'reuse' an existing account, which would be really handy for computers that I rebuild and add back using the same machine name. That would save time too. Looks like it might be Win7/Server 2008 specific and may not be available widely where I work. We have not moved our domains to Server 2008 as far as I know.

 

djoin /provision /domain <domain to be joined> /machine <machine name> /savefile blob.txt

 

http://technet.microsoft.com/en-us/library/dd391977(v=WS.10).aspx (What's new in Active Directory Domain Services in Windows Server 2008 R2: Offline Domain Provisioning)

 

Also, you want to be able to specify the path in AD where the computer account is going to be created. That requires knowing the full syntax of the LDAP:// path in AD.

 

http://serverfault.com/questions/22866/how-can-i-determine-my-user-accounts-ou-in-a-windows-domain

 

There’s also a script you can download and run to get similar info that is Win 2000 era AD compliant: http://www.joeware.net/freetools/tools/adfind/index.htm

 

Random thoughts just now: I could create a generic WIM with a single folder added each time and appended to the original WIM, containing the Windows driver CAB file for each Dell make/model. Each folder could then have DPInst copied into it and run as a synchronous command during the OOBE pass each time the WIM is applied with ImageX. I just need to remember which image index to use for each model's set of drivers, though the description field for each of those appended driver setups could be descriptive enough to make it user friendly. Or we could opt to include just the Optiplex 960 drivers as a base set covering most bases and then point to the CAB files over \\fileshare\j\deviceDrivers\ and let DPInst recurse its way down the central store of drivers during the cleanup phase.

 

OK, got a good autounattend.xml formulated. It should auto-activate and register the license key no problem-o. Can't wait to try it out tomorrow when I get home on the test computer I've got set up. It's an Optiplex 960 and I'm going to persist all the device drivers after I run sysprep /generalize /shutdown /oobe and capture the WIM file. Got a ton of customizing yet to do on the Admin profile before it gets copied to the Default Profile during the sysprep step. So maybe this time around I'll get it just right.

 

One big thing I have to remember is to set IE 8 to pass all logon information for the Trusted Sites zone within the security settings. If I get that embedded into the image once and for all, I'll have a halfway decent image that mirrors what we're using now in Ghost. The next step, once this initial setup from a Win7 setup disk is perfected, is to tweak the Administrator's profile and then set CopyProfile=true when I run sysprep /generalize /oobe /unattend:unattend.xml (that unattend file is another attempt to filter which settings get kept and what is auto-run before the final OOBE phase of Windows setup). That will be the last step in the process.

 

 

Written by Eric Likness

December 21, 2012 at 5:50 pm

End of the hiatus


I am now at a point in my daily work where I can begin posting to my blog once again. It's not so much that I'm catching up, but more that I don't care as much about falling behind. Look forward to more desktop-related posts, as that is now my full-time responsibility where I work.

Posted from WordPress for Windows Phone

Written by Eric Likness

December 15, 2012 at 10:53 am

Posted in computers, support, technology

