SUMMARY: Microsoft has been experimenting with its own custom chips to make its data centers more efficient. These chips aren’t built around ARM cores, but rather FPGAs from Altera.
FPGAs for the win, at least for eliminating unnecessary Xeon CPUs doing online analytic processing for the Bing search service. Microsoft says it can process the same amount of data with half the number of CPUs by offloading some of the heavy lifting from general-purpose CPUs to FPGAs programmed and tuned for the Microsoft algorithms that deliver the best search results. For Microsoft the cost of the data center wins out, and if you can drop half the Xeons in a data center you just cut your per-transaction costs nearly in half. That is quite an accomplishment in these days of radical incrementalism in data center ops and DevOps. The field-programmable gate array is known as a niche, discipline-specific hardware solution. But when flashed and programmed properly, and reconfigured as workloads and needs change, it can do some magical heavy lifting from a computing standpoint.
Specifically, I’m thinking of really repetitive loops or recursive algorithms that take forever to unwind and deliver a final result; those are things best done in hardware rather than software. For search engines that might be the process used to determine the authority of a page in the rankings (like Google’s PageRank). And knowing you can further tune the hardware to fit the algorithm means you’ll spend less time attempting to do that heavy lifting on a general-purpose CPU with really fast C/C++ code. In Microsoft’s plan that means fewer CPUs are needed to do the same amount of work. Better yet, if you come up with a better algorithm for your daily batch processes, you can roll out a new hardware circuit design across the compute cluster over time (and not have to do a pull-and-replace of large sections of the cluster). It will be interesting to see whether Microsoft reports any efficiency numbers in a final report; as of now this seems somewhat theoretical, though it appears to have been tested at least in a production test bed of some sort using real data.
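To make that concrete, here’s a minimal sketch (purely illustrative, not Bing’s or Google’s actual code) of the kind of loop-heavy ranking computation that benefits from being baked into hardware: a plain power-iteration PageRank over a tiny made-up link graph.

```python
# Illustrative sketch: power-iteration PageRank, the sort of repetitive
# numeric loop that an FPGA can grind through far more efficiently than a
# general-purpose CPU. The link graph below is made up for the example.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page starts each round with the "random surfer" baseline...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # ...then receives a share of rank from every page linking to it.
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # prints "c": it collects the most link authority
```

The inner loops are fixed, simple arithmetic repeated millions of times over a huge graph, which is exactly the shape of work a specially programmed FPGA eats for breakfast.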
Yes, supercapacitors might be the key to electric vehicles, that’s true. They are used now in various capacities: as backup power for different kinds of electronic equipment, and in some industrial settings as backup for distribution equipment. I think a company pursuing this should also consider the products and work done by American Superconductor in Massachusetts (NYSE: AMSC). Superconducting wire, paired with electric motors wound with the same wire and a bank of supercapacitors, could potentially be a killer app of these combined technologies. It doesn’t matter what the power source is (fuel cell vs. plug-in); the whole drive train could be electric and high performance as well.
Originally posted on Gigaom:
A couple years ago Tesla CEO Elon Musk offhandedly said that he thought it could be capacitors — rather than batteries — that might be the energy storage tech to deliver an important breakthrough for electric transportation. Tesla cars, of course, use lithium ion batteries for storing energy and providing power for their vehicles, but Musk is an engineer by nature, and he likes what ultracaps offer for electric cars: short bursts of high energy and very long lasting life cycles.
Capacitors are energy storage devices like batteries, but they store energy in an electric field, instead of through a chemical reaction the way a battery does. A basic capacitor consists of two metal plates, or conductors, separated by an insulator, such as air or a film made of plastic or ceramic. During charging, electrons accumulate on one conductor and depart from the other.

(Image caption: A bus using ultracapacitor tech from Maxwell…)
View original 465 more words
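The energy a capacitor stores follows E = ½CV², which makes for a quick back-of-the-envelope check on why ultracaps excel at power rather than total energy. The cell figures below are illustrative assumptions in the ballpark of a large commercial ultracapacitor cell, not any vendor’s spec:

```python
# Back-of-the-envelope sketch: energy stored in a capacitor, E = 1/2 * C * V^2.
# The 3000 F / 2.7 V figures are illustrative assumptions roughly in the range
# of a large commercial ultracapacitor cell.

def capacitor_energy_joules(capacitance_farads, voltage_volts):
    return 0.5 * capacitance_farads * voltage_volts ** 2

energy_j = capacitor_energy_joules(3000, 2.7)
energy_wh = energy_j / 3600  # joules -> watt-hours
print(round(energy_wh, 1))   # prints 3.0
```

Roughly 3 watt-hours per large cell: tiny energy compared to a lithium-ion cell, but it can be delivered (and re-absorbed) in short, high-power bursts over a very long cycle life, which is exactly the trade-off the excerpt describes.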
If Hewlett-Packard (HPQ) founders Bill Hewlett and Dave Packard are spinning in their graves, they may be due for a break. Their namesake company is cooking up some awfully ambitious industrial-strength computing technology that, if and when it’s released, could replace a data center’s worth of equipment with a single refrigerator-size machine.
Memristor makes an appearance again as a potential memory technology for future computers. To date, flash memory has shown it can keep scaling for a while yet, so what benefit could there be in adopting memristors? For starters, you might be able to put a good deal of memristor memory on the same die as the CPU. Similar to Intel’s most recent i-series CPUs with embedded graphics DRAM on the package, you could instead embed an even larger amount of memristor memory: memristors are denser than DRAM and stay resident even after power is removed from the circuit. Intel’s eDRAM scales up to 128MB on die; imagine how much memristor memory might fit in the same space. The article states memristor is 64-128 times denser than DRAM. I wonder whether that also holds true against Intel’s embedded DRAM. Even if it’s only 10x denser than eDRAM, you could still fit 10 x 128MB, roughly 1.25GB, of memristor memory within a 4-core CPU package. With that much memory on the die, access speed would be determined solely by the on-chip bus speeds. No PCI or DRAM memory controller bus needed. Keep it all on die as much as possible and your speeds would scream along.
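The napkin math above can be spelled out; the density multipliers are just the article’s claimed figures plus my conservative 10x guess, not measured numbers:

```python
# Hypothetical on-die capacity if memristor cells replaced Intel's 128MB of
# eDRAM at various density multiples. The multipliers are assumptions: the
# article's claimed 64-128x vs. DRAM, plus a conservative 10x guess vs. eDRAM.
EDRAM_MB = 128  # Intel's current on-package eDRAM ceiling

for density_multiple in (10, 64, 128):
    capacity_gb = EDRAM_MB * density_multiple / 1024
    print(f"{density_multiple}x denser -> {capacity_gb:g} GB on die")
```

Even the pessimistic 10x case puts over a gigabyte of non-volatile memory inside the CPU package, which is what makes the flattened-hierarchy argument below plausible.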
There are big downsides to adopting memristor, however. One drawback: when all the memory is non-volatile, how does a CPU reset memory on power down? The CPU now has to explicitly erase things on reset/shutdown before it reboots. That will take architecture changes on both the hardware and software sides. The article further states that even how programming languages use memory would be affected. Long term the promise of memristor is great, but the heavy lifting needed to accommodate the new technology hasn’t been done yet. In an effort to speed the plow on this evolution in hardware and software, HP is enlisting the open source community. The hope is that standards and best practices can slowly be hashed out for how memristor is accessed, written to and flushed by the OS, schedulers and apps. One possible early adopter, and a potential big win, would be the large data center owners and cloud operators.
In-memory caches and databases are the bread and butter of the big hitters in cloud computing. Memristor might be adapted to this end as a virtual disk made up of memory cells on which a transaction log is written, or it could be treated by the OS as a raw disk of sorts, only much faster. By the time a cloud provider’s architects had really optimized their infrastructure for memristor, there’s no telling how flat the memory hierarchy could become. Today it’s a huge chain of progressively faster caches layered above spinning drives at the base of the pyramid. Given memristor’s higher density and physical location closer to the CPU core, one might eliminate a storage tier altogether for online analytical systems. Spinning drives might be relegated to the role of tape replacements for colder, less frequently accessed data. HP’s hope is to deliver a computer optimized for memristor (called “The Machine” in this article) by 2019, where cache, memory and storage are no longer so tightly defined and compartmentalized. With any luck this will be a shipping product that performs at the level they’re predicting.
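As a present-day analogy (a sketch only; memristor hardware would make this model native rather than emulated through the disk stack), a memory-mapped file already lets software treat storage as byte-addressable memory, much like the raw-disk-of-memory-cells idea above:

```python
# Illustrative sketch: approximating "storage you address like memory" with a
# memory-mapped file. Persistent memory such as memristor would make this
# native, with no page cache or driver stack beneath it. The file name and
# record format here are made up for the example.
import mmap
import os

path = "txlog.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # reserve one page for a tiny transaction log

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as buf:
        record = b"SET balance=100;"
        buf[0:len(record)] = record  # byte-addressable write, like a store to RAM
        buf.flush()                  # push the page to the backing store

with open(path, "rb") as f:
    print(f.read(16))  # prints b'SET balance=100;'
os.remove(path)
```

With real non-volatile memory on the memory bus, that `flush()` boundary (and the whole block-device detour) is what disappears, which is why the OS, schedulers and apps all need rethinking.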
I manage iTunes U at the university where I work. I’m glad to know Apple keeps working on it, adding features and fixing bugs. It now seems even more useful as a lightweight course management system. Had they tried doing this from the earliest days, they likely could have won over some of the loyal fan base that eventually bought into Blackboard, Angel, Canvas, etc.
Originally posted on TechCrunch:
Apple has updated iTunes U with a bunch of new features, cranking the version number up to 2 and introducing improved discussion features and the ability to create and update courses directly from the iPad app, which previously has been mostly a user-facing client for consuming content.
The update, live now in the App Store, gives the universal app new powers for students, letting them ask questions on the course, posts and assignments more easily in private courses, and allows other students to participate in discussion directly by asking follow-on or supplemental questions, or by answering questions posed by other students. Push notifications also now alert users about new questions and responses to discussion in process.
For teachers in particular, it’s a big update, since it now allows them to set up courses directly from their iPads. They can provide course outlines, create assignments, put out class materials and track…
View original 138 more words
Although Intel’s SSD DC P3700 is clearly targeted at the enterprise, the drive will be priced quite aggressively at $3/GB. Furthermore, Intel will be using the same controller and firmware architecture in two other, lower cost derivatives (P3500/P3600). In light of Intel’s positioning of the P3xxx family, a number of you asked for us to run the drive through our standard client SSD workload. We didn’t have the time to do that before Computex, but it was the first thing I did upon my return. If you aren’t familiar with the P3700 I’d recommend reading the initial review, but otherwise let’s look at how it performs as a client drive.
This is Part 2 of Anandtech’s full review of the Intel P3700 PCIe/NVMe card. It’s reassuring to know that, as Anandtech reports, Intel has more than just the top-end P3700 coming to market; other price points will be competing for the non-enterprise workload types too. $3/GB puts it at the top end of desktop peripheral pricing, even for a fanboy gamer. But for data center workloads, and the prices that crowd pays, this is going to be an easy choice. Intel’s P3700, as Anandtech concludes, is built not just for speed (peak I/O) but for consistency across all queue depths, file sizes and block sizes. If you’re budgeting a capital improvement in your data center and want to quote the increases you’ll see, these benchmarks will be proof enough that you’ll get back every penny you spend. No need to throw an evaluation unit into your test rig or testing lab and benchmark it yourself.
As for the lower-end models, you might be able to dip your toe in, though not at the same performance level, at the $600 price point. That buys an average-to-smallish 400GB PCIe card, the Intel SSD DC P3500. Still, the overall design and engineering derive in part from the move from a straight PCIe interface to one that harnesses more data lanes on the PCIe bus and connects to the host via the NVMe (formerly NVMHCI) drive interface. That’s what you’re getting for that price. If you’re very price-sensitive, do not purchase this product line; Samsung has you more than adequately covered under the old-regime SATA SSD technology, and even then the performance is nothing to sneeze at. But do know that things are in flux, with new higher-performance drive interfaces that manufacturers will be marketing and selling to you soon. Roughly, this is the order in which things are improving, from highest I/O down:
NVMe/NVMHCI > PCIe SSD > M.2 > SATA Express (SATAe) > SATA SSD
And the incremental differences in the middle are small enough that you will only really see benefits if the price is lower for a slightly faster interface (say SATA SSD vs. SATA Express: when prices are dead equal, choose on price and convenience, not performance alone). Knowing what all these things do, or even just what they mean and how they equate to your computer’s I/O performance, will help you choose wisely over the next year or two.
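That ranking can be expressed as rough numbers. The bandwidth and latency figures below are ballpark assumptions for circa-2014 hardware (they vary by PCIe generation and lane count) and are meant only to show the relative order, not to serve as specs:

```python
# The post's interface ranking, expressed as rough illustrative numbers.
# Bandwidth (GB/s) and protocol-overhead latency (microseconds) are ballpark
# assumptions, not measured specs; NVMe's big win over AHCI-style PCIe SSDs
# is lower protocol overhead and deeper queues, not raw lane count.
interfaces = [
    # (name, peak bandwidth GB/s, rough protocol overhead in microseconds)
    ("SATA SSD",              0.6, 30),
    ("SATA Express (SATAe)",  1.0, 30),
    ("M.2 (PCIe x2)",         2.0, 30),
    ("PCIe SSD (AHCI)",       3.2, 30),
    ("NVMe/NVMHCI PCIe SSD",  3.2,  3),
]

# Sort fastest-first: more bandwidth, then less protocol overhead.
ranked = sorted(interfaces, key=lambda i: (-i[1], i[2]))
for name, gbps, lat_us in ranked:
    print(f"{name:24s} ~{gbps} GB/s, ~{lat_us} us overhead")
```

Note how the middle entries cluster together, which is exactly why price should be the tiebreaker there.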
Gmail API: even just the name harkens back to the 1991 era when Microsoft first bought out Consumers Software (http://en.wikipedia.org/wiki/Consumers_Software) and spun up its own mail server, Microsoft Mail for PC Networks (http://en.wikipedia.org/wiki/Microsoft_Mail). The underlying architecture was the almighty MAPI, which eventually got all the corporate buyers and accounts hooked on Exchange mail with MS Outlook clients. And the world has never been the same since. Let that be both an interesting factoid and a cautionary tale: the first taste is ALWAYS free.
Originally posted on TechCrunch:
Yesterday, at Google’s I/O developer conference, the company announced a new way for developers to build apps that integrate with Gmail, via its brand-new Gmail API. Designed to allow programmatic access to messages, threads, labels and drafts, the API was initially misunderstood by some as Google’s attempt to “kill off IMAP,” an older email protocol that offers email access, retrieval and storage.
That confusion seemed to come about largely because of the wording in one highly trafficked Wall St. Journal article, which originally said that the new API would “replace IMAP, a common but complex way for applications to communicate with most email services.” (The article has since been updated with new language that says “instead of” as opposed to “replace.”)
Google’s developer’s documentation also backs this up: the new Gmail API will not be killing off IMAP – at least, not yet – but it will make Gmail…
View original 300 more words
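For the curious, here’s a hedged sketch of what that programmatic access looks like at the data level. The message shape below mirrors the Gmail API’s documented resource format (bodies arrive base64url-encoded inside a `payload` structure), but the message itself is made up, and fetching real mail additionally requires OAuth credentials plus the google-api-python-client library, which this sketch deliberately avoids:

```python
# Sketch of handling the Gmail API's message resource format. The resource
# shape mirrors the documented API response, but this sample message is made
# up; real use requires OAuth credentials and google-api-python-client.
import base64

def decode_body(message):
    """Extract and decode the plain-text body from a Gmail API message resource."""
    data = message["payload"]["body"]["data"]
    return base64.urlsafe_b64decode(data).decode("utf-8")

# A made-up message, shaped like the API's response:
msg = {
    "id": "abc123",
    "labelIds": ["INBOX", "UNREAD"],
    "payload": {
        "mimeType": "text/plain",
        "body": {"data": base64.urlsafe_b64encode(b"The first taste is free.").decode()},
    },
}
print(decode_body(msg))  # prints: The first taste is free.
```

Compare that to speaking IMAP: no folder selection, no FETCH syntax, just JSON and labels, which is precisely the kind of convenience that gets developers hooked on one vendor’s mail stack.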
Fully agree: after the number of moves I went through prior to moving in with my girlfriend, I had pared down quite a bit. All my “stuff” could fit into my car, which is exactly as it should be. If it can fit in your car, then you truly own it; it is YOURS.
Originally posted on Jon Udell:
As we clear out the house in order to move west, we’re processing a vast accumulation of things. This morning I hauled another dozen boxes of books from the attic, nearly all of which we’ll donate to the library. Why did I haul them up there in the first place? We brought them from our previous house, fourteen years ago. I could have spared myself a bunch of trips up and down the stairs by taking them directly to the library back then. But in 2000 we were only in the dawn of the era of dematerialization. You couldn’t count on being able to find a book online, search inside it, have a used copy shipped to you in a couple of days for a couple of dollars.
Now I am both shocked and liberated to realize how few things matter to me. I joke that all I really…
View original 108 more words