The looming introduction of a 64-bit ARM-based server core (production 64-bit ARM server chips are expected from a variety of vendors later this year) also changes the economics of developing a server chip. While Moorhead believes building your own core is a multihundred-million-dollar process, Andrew Feldman, the corporate vice president and general manager […]
Since the rise of Amazon, first as an online retailer and then as a data center innovator, people have always been surprised by its success. It came to dominate two fields only loosely related to one another. But it's the second wave, Amazon as a data cloud service provider, that interests me the most. It's one thing to provide a service; it's quite another to provide raw infrastructure that you are willing to loan out on an hourly basis. To me that is a really new New Thing and deserves some attention. Here now is Scientific Computing by the hour. Read on:
Amazon runs data centers that it both uses for its own commerce website and shares out to anyone willing to pay hourly rates for access to the Amazon data cloud. Part of that whole constellation of services is fault-tolerant data storage (think a farm of hard drives, all in racks) that will automatically detect problems and switch over to a different location without human intervention. Well, that didn't happen during an outage on Amazon Web Services back in April.