An image of a circuit with 17 memristors captured by an atomic force microscope. Each memristor is composed of two layers of titanium dioxide connected by wire; as electrical current is applied to one layer, the small-signal resistance of the other layer is changed, which may in turn be used as a method to register data. (Photo credit: Wikipedia)

HP makes memory from a once-theoretical circuit
If Hewlett-Packard (HPQ) founders Bill Hewlett and Dave Packard are spinning in their graves, they may be due for a break. Their namesake company is cooking up some awfully ambitious industrial-strength computing technology that, if and when it’s released, could replace a data center’s worth of equipment with a single refrigerator-size machine.
Memristor makes an appearance again as a potential memory technology for future computers. To date, flash memory has shown it can keep scaling for a while yet, so what benefit could there possibly be in adopting Memristor? For starters, you might be able to put a good deal of it on the same die as the CPU. Similar to Intel’s most recent i-Series CPUs with embedded graphics DRAM on the CPU, you could instead put an even larger amount of Memristor memory there. Memristor is denser than DRAM and keeps its contents even after power is taken away from the circuit. Intel’s eDRAM scales up to 128MB on die; imagine how much Memristor memory might fit in the same space. The article states Memristor is 64-128 times denser than DRAM, and I wonder if that also holds true against Intel’s embedded DRAM. Even if it’s only 10x denser than eDRAM, you could still fit 10 x 128MB of Memristor memory embedded within a 4-core CPU socket. With that much memory on the package, the speed of memory access would be determined solely by the on-chip bus speeds. No PCI or DRAM memory controller bus needed. Keep it all on die as much as possible and your speeds would scream along.
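To put rough numbers on that guess, here’s a minimal back-of-the-envelope sketch. It assumes the 128MB eDRAM figure mentioned above and treats the density multipliers (the article’s 64-128x versus DRAM, plus my more conservative 10x-versus-eDRAM hunch) as inputs rather than facts:

```python
# Back-of-the-envelope numbers only: the 128 MB eDRAM ceiling comes from the
# paragraph above, the 64-128x figures from the article, and the 10x-vs-eDRAM
# multiplier is my own conservative guess.
EDRAM_ON_DIE_MB = 128

for label, density_factor in [("conservative 10x vs eDRAM", 10),
                              ("article low end, 64x vs DRAM", 64),
                              ("article high end, 128x vs DRAM", 128)]:
    capacity_mb = EDRAM_ON_DIE_MB * density_factor
    print(f"{label}: ~{capacity_mb} MB (~{capacity_mb / 1024:.1f} GB) of Memristor in the same footprint")
```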
There are big downsides to adopting Memristor, however. One drawback is how a CPU clears memory on power down when all of that memory is non-volatile: the CPU now has to explicitly erase things on reset/shutdown before it reboots. That will take some architecture changes on both the hardware and software side, and the article further states that even how programming languages use memory would be affected. Long term, the promise of Memristor is great, but the heavy lifting needed to accommodate the new technology hasn’t been done yet. In an effort to speed the plow on this evolution in hardware and software, HP is enlisting the Open Source community. The hope is that some standards and best practices can slowly be hashed out for how Memristor is accessed, written to and flushed by the OS, schedulers and apps. One possible early adopter and potential big win would be the large data center owners and Cloud operators.
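Just to make the explicit-erase problem concrete, here’s a hypothetical sketch (not HP’s design) of what scrubbing on shutdown could look like. A memory-mapped file stands in for a Memristor-backed region, and the file name and sizes are made up for illustration:

```python
# Hypothetical sketch only: a memory-mapped file stands in for a
# memristor-backed region of main memory. With non-volatile memory a reboot
# no longer wipes RAM for free, so the runtime has to scrub it explicitly.
import atexit
import mmap
import os

PATH = "persistent_region.bin"   # made-up stand-in for a persistent range
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)
region = mmap.mmap(fd, SIZE)

def scrub_on_shutdown():
    # Explicitly zero and flush the region; DRAM gets this "for free"
    # simply by losing power.
    region[:] = b"\x00" * SIZE
    region.flush()
    region.close()
    os.close(fd)

atexit.register(scrub_on_shutdown)

region[:16] = b"do not persist!!"   # 16 bytes that must not survive a reset
```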
In-memory caches and databases are the bread and butter of the big hitters in Cloud Computing. Memristor might be adapted to this end as a virtual disk made up of memory cells onto which a transaction log is written, or it could be presented to the OS as a raw disk of sorts, only much faster. By the time a Cloud provider’s architects really optimized their infrastructure for Memristor, there’s no telling how flat the memory hierarchy could become. Today it’s a huge chain of higher and higher speed caches attached to spinning drives at the base of the pyramid. Given Memristor’s higher density and a physical location closer to the CPU core, one might eliminate a storage tier altogether for online analytical systems, and spinning drives might be relegated to the role of tape replacements for less-accessed, less-hot data. HP’s hope is to deliver a computer optimized for Memristor (called “The Machine” in this article) by 2019, where cache, memory and storage are no longer so tightly defined and compartmentalized. With any luck this will be a shipping product and will perform at the level they’re predicting.
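As a thought experiment on the transaction-log idea above, here’s a minimal sketch that appends records straight into a byte-addressable region instead of issuing block I/O. Again a memory-mapped file stands in for Memristor memory; the record framing and file name are my own inventions, not anything HP or a Cloud vendor has specified:

```python
# Thought-experiment sketch: append a transaction log straight into a
# byte-addressable persistent region instead of issuing block I/O. The
# memory-mapped file, file name, and record framing are all my inventions.
import mmap
import os
import struct

PATH = "txlog.bin"
SIZE = 1 << 20   # 1 MB region standing in for Memristor memory

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)
log = mmap.mmap(fd, SIZE)

def append(offset: int, payload: bytes) -> int:
    """Write one length-prefixed record and return the next free offset."""
    record = struct.pack(">I", len(payload)) + payload
    log[offset:offset + len(record)] = record
    log.flush()   # on real persistent memory this would be a cache-line flush/fence
    return offset + len(record)

pos = 0
pos = append(pos, b"BEGIN tx 42")
pos = append(pos, b"debit account 7 by 100")
pos = append(pos, b"COMMIT tx 42")
print(f"log now occupies {pos} bytes of the persistent region")
```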
I manage iTunes U at the University where I work. I’m glad to know Apple keeps working on it, adding features and fixing bugs. It now seems even more useful as a lightweight Course Management System. Had they tried doing this from the earliest days, it’s likely they could have won over some of the loyal fan base that eventually bought into Blackboard, Angel, Canvas, etc.
Although Intel’s SSD DC P3700 is clearly targeted at the enterprise, the drive will be priced quite aggressively at $3/GB. Furthermore, Intel will be using the same controller and firmware architecture in two other, lower cost derivatives (P3500/P3600). In light of Intel’s positioning of the P3xxx family, a number of you asked for us to run the drive through our standard client SSD workload. We didn’t have the time to do that before Computex, but it was the first thing I did upon my return. If you aren’t familiar with the P3700 I’d recommend reading the initial review, but otherwise let’s look at how it performs as a client drive.
This is Part 2 of the full review Anandtech did on the Intel P3700 PCIe/NVMe card. It’s reassuring to know that, as Anandtech reports, Intel has more than just the top-end P3700 coming to market; other price points will compete for the non-enterprise workloads too. $3/GB puts it at the top of the desktop peripheral price range, even for a fanboy gamer. But for data center workloads, and the prices that crowd pays, this is going to be an easy choice. As Anandtech concludes, Intel’s P3700 is built not just for speed (peak I/O) but for consistency across all queue depths, file sizes and block sizes. If you’re attempting to budget a capital improvement in your data center and want to quote the gains you’ll see, these benchmarks are proof enough that you’ll get back every penny you spend. No need to throw an evaluation unit into your test rig or testing lab and benchmark it yourself.
As for the lower end models, you might be able to dip your toe in at the $600 price point, though not at the same performance level. That buys an average-to-smallish 400GB PCIe card, the Intel SSD DC P3500. Still, the overall design and engineering is derived in part from the move from a plain PCIe interface to one that harnesses more data lanes on the PCIe bus and presents itself to the system via the NVMe (NVMHCI) drive interface. That’s what you’re getting for that price. If you’re very sensitive to price, do not purchase this product line; Samsung has you more than adequately covered under the old-regime SSD-over-SATA drive technology, and even then the performance is nothing to sneeze at. But do know things are in flux with the new higher-performance drive interfaces manufacturers will be marketing and selling to you soon. Remember, this is roughly the order in which things improve, from lowest to highest I/O: SATA SSD, then SATA Express, then M.2 riding on PCIe, and finally NVMe over PCIe.
And the incremental differences in the middle are small enough that you’ll only really see benefits if the price is lower for a slightly faster interface (say SATA SSD vs. SATA Express: choose based on the price being dead equal, not on performance alone). Knowing what all these things are, what they mean, and how that translates into your computer’s I/O performance will help you choose wisely over the next year or two.
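For a rough sense of scale behind that ordering, here’s a small sketch of the theoretical ceilings. The per-lane PCIe rates are the published ones, but the lane counts per interface are typical configurations I’ve assumed, and real drives land well below these numbers:

```python
# Rough theoretical ceilings for the ordering above. Per-lane rates are the
# published PCIe figures (2.0: ~500 MB/s/lane after 8b/10b; 3.0: ~985 MB/s/lane
# after 128b/130b); the lane counts per interface are typical configurations,
# not anything fixed by the specs.
interfaces = {
    "SATA 6 Gb/s SSD":                600,        # ~600 MB/s after 8b/10b encoding
    "SATA Express (PCIe 2.0 x2)":     2 * 500,
    "M.2 (PCIe 3.0 x4)":              4 * 985,
    "NVMe add-in card (PCIe 3.0 x4)": 4 * 985,
}

for name, mb_per_s in interfaces.items():
    print(f"{name:<32} ~{mb_per_s} MB/s theoretical ceiling")
```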
Gmail API: even just the name harkens back to the 1991 era when Microsoft first bought out Consumers Software (http://en.wikipedia.org/wiki/Consumers_Software) and spun up its own mail server, Microsoft Mail for PC Networks (http://en.wikipedia.org/wiki/Microsoft_Mail). The underlying architecture was the almighty MAPI, which eventually got all the corporate buyers and accounts hooked on Exchange mail with MS Outlook clients. And the world has never been the same since. Let that be both an interesting factoid and a cautionary tale. The first taste is ALWAYS free.
Fully agree. After the number of moves I went through prior to moving in with my girlfriend, I had pared down quite a bit. All my “stuff” could fit into my car, which is exactly as it should be. If it can fit in your car, then you truly own it; it is YOURS.
As we clear out the house in order to move west, we’re processing a vast accumulation of things. This morning I hauled another dozen boxes of books from the attic, nearly all of which we’ll donate to the library. Why did I haul them up there in the first place? We brought them from our previous house, fourteen years ago. I could have spared myself a bunch of trips up and down the stairs by taking them directly to the library back then. But in 2000 we were only in the dawn of the era of dematerialization. You couldn’t count on being able to find a book online, search inside it, have a used copy shipped to you in a couple of days for a couple of dollars.
Now I am both shocked and liberated to realize how few things matter to me. I joke that all I really…
We don’t see infrequent blips of CPU architecture releases from Intel, we get a regular, 2-year tick-tock cadence. It’s time for Intel’s NSG to be given the resources necessary to do the same. I long for the day when we don’t just see these SSD releases limited to the enterprise and corporate client segments, but spread across all markets – from mobile to consumer PC client and of course up to the enterprise as well.
Big news in the SSD/flash memory world at Computex in Taipei, Taiwan. Intel has entered the fray alongside Samsung and SandForce, issuing a fully NVMe-compliant set of drives running on PCIe cards. Throughputs are amazing, and the prices are surprisingly competitive: you can enter the market for as low as $600 for a 400GB PCIe card running as an NVMe-compliant drive. On Windows Server 2012 R2 and Windows 8.1 you get native support for NVMe drives. This is going to get really interesting, especially considering all the markets and tiers of consumers within them. On the budget side is the SATA Express interface, an attempt to factor out some of the slowness inherent in SSDs attached to SATA bus interfaces. Then there’s M.2, the smaller form factor PCIe-based drive interface being adopted by manufacturers making light, small form factor tablets and laptops; that’s a big jump past SATA altogether and comes with a speed bump of its own, since it communicates directly with the PCIe bus. Last and most impressive of all are the NVMe devices announced by Intel, with yet a further speed bump as they address multiple data lanes on PCI Express. Some concern trolls in the gaming community are quick to point out that those data lanes are being lost to I/O when they’re already maxing them out with their 3D graphics boards.
The route forward, it seems, would be Intel motherboard designs with a PCIe 3.0 interface carrying the equivalent data lanes of two full-speed 16x graphics cards, but devoting that second 16x link to I/O instead; or maybe a 1.5x arrangement: one full 16x link plus two more 8x links, one for regular I/O and one as a dedicated 8x NVMe interface (some rough math on those splits follows below). It’s going to require some re-engineering and BIOS updating, no doubt, to get all the speed out of all the devices simultaneously.

That’s why I would also like to remind readers of the Flash-DIMM phenomenon sitting out on the edges, in the high-speed, high-frequency trading houses of the NYC metro area. We haven’t seen nor heard much since the original product announcement from IBM for the X6-series servers and the Flash-DIMM options on that product line. SMART Storage Systems (the prime designer/manufacturer of Flash-DIMMs) has since been bought out by SanDisk, and again, no word on that product line now. Same is true for the Lenovo takeover of IBM’s Intel server product line (of which the X6-series is the jewel in the crown). Mergers and acquisitions have veiled and blunted some of these revolutionary product announcements, but I hope Flash-DIMMs eventually see the light of day, gain full BIOS support, and make it into the desktop computer market. As good as NVMe is going forward, I think we also need a mix of Flash-DIMM to see the full speed of the multi-core x86 Intel chips.
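Going back to the lane-splitting idea above, here’s some quick, hypothetical arithmetic on those two layouts, using the published PCIe 3.0 rate of roughly 985 MB/s per lane per direction. The layouts themselves are my speculation, not anything Intel has announced:

```python
# Quick arithmetic on the two speculative lane splits above, using the
# published PCIe 3.0 rate of ~985 MB/s per lane per direction.
MB_PER_LANE = 985   # PCIe 3.0, after 128b/130b encoding

layouts = {
    "two x16 links, second x16 devoted to storage I/O": {"graphics": 16, "storage": 16},
    "one x16 for graphics + x8 general I/O + x8 NVMe":  {"graphics": 16, "storage": 8 + 8},
}

for name, lanes in layouts.items():
    storage_gb = lanes["storage"] * MB_PER_LANE / 1000
    print(f"{name}: {sum(lanes.values())} lanes total, ~{storage_gb:.1f} GB/s available to storage")
```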
Combining the Kinect’s realtime 3D mapping with the Oculus Rift 3D goggles. This is pretty darned amazing, and it’s going to have huge applications in the Augmented Reality world. Can you imagine? Just do a 3D scan of the space, let the Kinect do the mapping, pull the whole model into the Oculus Rift environment, and then start marking up and overlaying whatever objects, info and data you want within that model. The longer it takes for the Oculus Rift to be released and marketed, the more potential applications people seem to invent for it.
Even though SATA Express isn’t meant to be a long-term architectural improvement over plain SATA or pure PCIe drive interfaces, it is a step toward getting more speed out of existing flash-based SSDs. Lower cost is what drive manufacturers are counting on to attract chronic upgraders to adopt the new bus-bridging technology. It doesn’t have quite the same design or longer-term speed gains as the newer M.2 spec, so be forewarned: SATA Express may fall by the wayside. That’s especially true if people come to understand the tech specs better and see through the marketing language aimed at the home-brew computer DIY types. Those folks are very cost sensitive and unwilling to purchase based on specs alone, much less on the long-term viability of a particular drive interface technology.
So if you are somewhat baffled as to why SSDs seem to max out at ~500MB/s read and write speeds, rest assured SATA Express might help. It’s cheap enough, but be extra careful that both your drive and motherboard fully support the new interface at the BIOS level; that way you’ll actually see the speed gains you thought you’d get by trading up. Longer term, you might want to take a closer look at the most recent versions of the NVMe interface (PCIe 3.0, with up to 8 PCIe lanes being used simultaneously). NVMe is likely to be the longer-term winner among SSD interfaces, but the costs to date put it in the data center class of purchase, not necessarily the desktop class. However, Intel is releasing product with NVMe interfaces later this year at prices possibly as low as ~$700, though the premium per gigabyte of storage is still pretty high. Always use your best judgement and look at how long you plan on using the computer as-is. If you know you’re going to be swapping at regular intervals, I say fine, choose SATA Express today; you’ll likely jump at the right time for the next best I/O drive interface as it hits the market. However, if you’re thinking you’re going to stand pat and hold onto the current desktop as long as possible, take a long hard look at M.2 and the NVMe products coming out shortly.
If you are a LMS administrator, I want to personally thank you. I know that it is a job that rarely gets the recognition it deserves. I know that it is the first place someone goes when something goes wrong and yet, rarely appreciated when something goes right.
I feel your pain, I really do.
That said, you shouldn’t feel alone. Forget the anguish, because at this very instant someone out there is starting their first day as an LMS administrator. They may have been hired because of previous experience.
They may have been moved into the role from another role at the company. Heck, they may be doing another role and been told you have to do this, because of downsizing.
Regardless, they are entering the position either having the knowledge to succeed quickly or having nearly no knowledge and expecting the same.
Cavium will try to drive ARM SoCs into mainstream servers, challenging Intel’s Xeon x86 with a family of 28 nm devices using up to 48 2.5 GHz custom 64-bit ARM cores
Another entry into the massively multi-core, low-power server race. Since the fading of competitors like Calxeda and SeaMicro, there haven’t been a lot of announcements or shipping products promising to be the low-power vendor of choice. Each time an inventor or entrepreneur stepped up with a lower-power or more-cores device, Intel would blunt the advantage by running a benchmark and claiming that shutting cores off saves more power than using an inherently low-power design. The race today, as framed by Intel, is the race to sleep, and that’s the benchmark by which they measure their own progress in the low-power, massively multi-core CPU market. Now, however, Cavium is stepping up with an ARM-based CPU with 48 cores. So let’s find out what we can about this new chip from this EE Times article.
It appears the manufacturing partner for this new product is Gigabyte, who are creating a 2-socket motherboard for the 48-core ARM-based CPU. The 48-core CPU is ARMv8-based and addresses 64 bits, so large amounts of RAM can be used with this architecture (a failing of past products from previous manufacturers attempting ARM-based servers). Cavium already has network processors in the market using MIPS-based CPUs, and this new ARM-based architecture tries to leverage a lot of their expertise in the network processor market. Architecturally, the motherboard interfaces and protocols are still in place, with the CPU swap being the most noticeable difference. To date, Cavium is primarily known as a network processor manufacturer, but this move could push them into large-scale data cloud type applications, with a tight binding to network operations supplied by their existing network processor products. Dates are still a little hazy, with the end of the calendar year being the most likely time for a product to be developed, tested, manufactured and shipped.
I’m so happy to see the pressure being kept up in this one niche of computing. I still think ARM-based CPUs with massive numbers of cores are a new growth area. Similarly, the move to 64 bits takes away one of the last impediments most buyers pointed out when folks like Calxeda tried to market their wares into the data centers. Bit by bit, each attempt by each startup and design outfit gets a little closer to a competitive product that might yet go up against the mighty Intel Xeon multi-core CPU.