Carpet Bomberz Inc.

Scouring the technology news sites every day

Archive for the ‘computers’ Category

Cavium Thunder Rattles Xeon | EE Times

Cavium Booth (Photo credit: Interop Events)

Cavium will try to drive ARM SoCs into mainstream servers, challenging Intel’s Xeon x86 with a family of 28 nm devices using up to 48 2.5 GHz custom 64-bit ARM cores

via Cavium Thunder Rattles Xeon | EE Times.

Another entry in the massively multi-core, low-power server race. Since the fading of competitors like Calxeda and SeaMicro there haven't been many announcements or shipping products promising to be the low-power vendor of choice. Each time an inventor or entrepreneur stepped up with a lower-power or higher-core-count device, Intel would blunt the advantage by running a benchmark and claiming that shutting cores off saves more power than using an inherently low-power design. The race today, as framed by Intel, is the race to sleep, and that's the benchmark by which it measures its own progress in the low-power, massively multi-core CPU market. Now, however, Cavium is stepping up with an ARM-based CPU with 48 cores. So let's find out what we can about this new chip from the EE Times article.

It appears the manufacturing partner for this new product is Gigabyte, which is creating a 2-socket motherboard for the 48-core ARM-based CPU. The CPU is ARMv8-based and 64-bit, so large amounts of RAM can be addressed with this architecture (a failing of past products from previous manufacturers attempting ARM-based servers). Cavium already has network processors in the market using MIPS-based CPUs, and this new ARM-based architecture leverages a lot of its expertise in the network processor market. Architecturally the motherboard interfaces and protocols are still in place, with the CPU swap being the most noticeable difference. To date Cavium has been known primarily as a network processor manufacturer, but this move could push it into large-scale cloud data applications, with a tight binding to the network operations supplied by its existing network processor products. Dates are still a little hazy, with the end of the calendar year being the most likely time a product will have been developed, tested, manufactured and shipped.

I'm so happy to see the pressure being kept up in this one niche of computing. I still think ARM-based CPUs with massive numbers of cores are a new growth area. Similarly, the move to 64 bits takes away one of the last impediments most buyers pointed out when folks like Calxeda tried to market their wares into the data center. Bit by bit, each attempt by each startup and each design outfit gets a little closer to a competitive product that might yet go up against the mighty multi-core Intel Xeon.

Written by Eric Likness

June 16, 2014 at 3:00 pm

Posted in cloud, computers


Testing, Testing: How Google And Amazon Can Help Make Websites Rock Solid – ReadWrite

Diagram showing an overview of cloud computing

It’s not unprecedented: Google already offers a testing suite for Android apps, though that’s focused on making sure they run well on smartphones and tablets, not testing the cloud-based services they connect to. If Google added testing services for the websites and services those apps connect to, it would have an end-to-end lock on developing for both the Web and mobile.

via Testing, Testing: How Google And Amazon Can Help Make Websites Rock Solid – ReadWrite.

Load testing websites and web apps is a market whose time has come. Where I work, we have a project group with a guy who manages an installation of Silk as a load-testing tool. Behind that is a little farm of old Latitude E6400s that he manages from the Silk console, pointing them at whichever app is in development/QA/testing before it goes into production. Knowing there's potential for a cloud-based tool for this makes me very, very interested.

As outsourcing goes, the Software as a Service (SaaS), Platform as a Service (PaaS) and even Infrastructure as a Service (IaaS) categories are great as raw materials. But if there were just an app that I could log in to, spin up some VMs, install my load-test tool of choice, and then manage them from my desktop, I would feel like I had accomplished something. Failing that, even just a toolkit for load testing with whatever tool du jour is already available (nothing is perfect that way) would be cool too. Better yet, if the tool were updated whenever I needed to conduct a round of testing, it would account for things like the Heartbleed bug in a timely fashion. That's the kind of benefit a cloud-based, centrally managed, centrally updated load-test service could provide.
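To make concrete what the do-it-yourself version looks like (and why a hosted service is appealing), here's a minimal sketch of the kind of load generator you'd run from those spun-up VMs. The target URL, worker count and request count are made up for illustration, and a real tool like Silk or JMeter adds the ramp-up profiles, assertions and reporting this leaves out:

```python
# Minimal DIY load generator -- a sketch, not a substitute for Silk/JMeter.
# The target URL, worker count, and request count below are hypothetical.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "https://staging.example.com/health"   # hypothetical endpoint
WORKERS = 50                                    # concurrent "virtual users"
REQUESTS = 1000                                 # total requests to issue

def hit(_):
    """Issue one GET and return (latency_seconds, status_or_error)."""
    start = time.time()
    try:
        with urlopen(TARGET, timeout=10) as resp:
            return time.time() - start, resp.status
    except Exception as exc:
        return time.time() - start, str(exc)

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

latencies = sorted(lat for lat, _ in results)
errors = sum(1 for _, status in results if status != 200)
print(f"median latency: {statistics.median(latencies):.3f}s")
print(f"95th percentile: {latencies[int(len(latencies) * 0.95)]:.3f}s")
print(f"errors: {errors}/{REQUESTS}")
```

Run from a handful of cloud VMs at once, even something this crude will stress a staging box; the value of a hosted service is in coordinating those VMs, scaling the worker pool on demand and keeping the tooling patched.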

And now that Microsoft has announced a partnership with Salesforce around its Azure cloud platform, things get even more interesting. Not only could you develop using an existing toolkit like Salesforce.com, but you could host it on more than one cloud platform (AWS or Azure) as your needs change. And I would hope this would include unit testing, load testing and the whole sweet suite of security auditing one would expect for a web app (thereby helping prevent vulnerabilities like the OpenSSL Heartbleed bug).


Written by Eric Likness

June 2, 2014 at 3:00 pm

Posted in cloud, google, support


Microsoft Office applications barely used by many employees, new study shows – Techworld.com

The Microsoft Office Core Applications (Photo credit: Wikipedia)

After stripping out unnecessary Office licenses, organisations were left with a hybrid environment, part cloud, part desktop Office.

via Microsoft Office applications barely used by many employees, new study shows – Techworld.com.

The central IT outfit I work for is dumping as much on-premise Exchange mailbox hosting as it can. However, we are sticking with Outlook 365 as provisioned by Microsoft (essentially an Outlook'd version of Hotmail). It has the calendar and global address list we have all come to rely on. But as this article details for the rest of the Office suite, people aren't creating as many documents as they once did. We're viewing them, yes, but we just aren't creating them.

I wonder how much of this is due to re-use, or to authoring duties shifting to much higher-level people. Your average admin assistant or secretary doesn't draft anything dictated to them anymore, and the top-level types would generally be embarrassed to dictate something out to anyone. Plus the culture of secrecy necessitates more one-to-one style communications. And long-form writing? Who does that anymore? No one writes letters; they write brief emails, or even briefer texts, tweets or Facebook updates. Everything is abbreviated to such a degree that you don't need the thesaurus, pagination, or any of the super-specialized doo-dads and add-ons we all begged M$ and Novell to add to their première word processors back in the day.

From an evolutionary standpoint, we could get by with the original text editors first made available on timesharing systems. I'm thinking of utilities like line editors (that's really a step backwards, so I'm being facetious here). The point I'm making is that we've gone through a very advanced stage in the evolution of our writing tool of choice, and it became a monopoly. WordPerfect lost out and fell by the wayside. Primary, secondary and middle schools across the U.S. adopted M$ Word and made it a requirement. Every college freshman has been given discounts to further loyalty to the Office suite. Now we don't write like we used to, much less read. What's the use of writing something so many pages long that no one will ever read it? We've jumped the shark of long-form writing, and so the premiere app, the killer app for the desktop computer, is slowly receding behind us as we keep speeding ahead. Eventually we'll see it on the horizon, its sails the last visible part, then the crow's nest, then poof! It will disappear below the horizon line. We'll be left with our nostalgic memories of the first time we used MS Word.


Written by Eric Likness

May 19, 2014 at 3:00 pm

Posted in cloud, computers, google, wintel


DDR4 Heir-Apparent Makes Progress | EE Times

The first DDR4 memory module was manufactured by Samsung and announced in January 2011. (Photo credit: Wikipedia)

The current paradigm has become increasingly complex, said Black, and HMC is a significant shift. It uses a vertical conduit called through-silicon via (TSV) that electrically connects a stack of individual chips to combine high-performance logic with DRAM die. Essentially, the memory modules are structured like a cube instead of being placed flat on a motherboard. This allows the technology to deliver 15 times the performance of DDR3 at only 30% of the power consumption.

via DDR4 Heir-Apparent Makes Progress | EE Times.

Even though DDR4 memory modules have only been around in quantity for a short time, people are resistant to change. And the need for speed, whether it's SSDs stymied by SATA throughput or systems married to DDR4 RAM modules, is still pretty constant. But many manufacturers and analysts wonder aloud, "isn't this speed good enough?" That is true to an extent: the current OSes and chipset/motherboard manufacturers are perfectly happy cranking out product supporting the current state of the art. But no one wants to be the first to push the ball of compute speed down the field. At least this industry group is attempting to get a plan in place for the next generation of memory modules. With any luck this spec will continue to evolve and sampled products will be sent 'round for everyone to review.

Given the changes and advances in storage and CPUs (PCIe SSDs and 15-core Xeons), eventually a wall will be hit in compute per watt or raw I/O. Desktops will eventually benefit from any speed increases, but it will take time; we won't see just 10% better with each generation of hardware. Prices will need to come down before any of the mainstream consumer goods manufacturers adopt these technologies. But as previous articles have stated, the "time to idle" measurement (which laptop and mobile devices strive to minimize) might be reason enough for tablet or laptop manufacturers to push the state of the art and adopt these technologies faster than desktops.
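Taking the quoted HMC figures at face value (15x the performance of DDR3 at 30% of the power), the compute-per-watt side of that wall works out to roughly a 50x improvement. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the quoted HMC claim:
# 15x the performance of DDR3 at 30% of the power consumption.
ddr3_perf, ddr3_power = 1.0, 1.0   # normalize DDR3 to 1.0
hmc_perf, hmc_power = 15.0, 0.30   # figures quoted in the EE Times piece

perf_per_watt_gain = (hmc_perf / hmc_power) / (ddr3_perf / ddr3_power)
print(f"perf/watt improvement: {perf_per_watt_gain:.0f}x")  # -> 50x
```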


Written by Eric Likness

March 27, 2014 at 3:00 pm

AnandTech | Testing SATA Express And Why We Need Faster SSDs

PCIe and PCI slots compared (Photo credit: Wikipedia)

Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn’t 100% efficient and based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link, but remember that SATA 6Gbps isn’t 100% either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based but we should start to see some PCIe 3.0 drives next year. We don’t have efficiency numbers for 3.0 yet but I would expect to see nearly twice the bandwidth of 2.0, making +1GB/s a norm.

via AnandTech | Testing SATA Express And Why We Need Faster SSDs.

As I've watched the SSD market slowly grow and bloom, it does seem as though the rate at which big changes occur has slowed. The SATA controllers on the drives themselves were kicked up a notch as the transition from 3Gb/s to 6Gb/s SATA gave us consistent ~500MB/sec read/write speeds. And that has stayed stable ever since due to the inherent limit of SATA 6Gb/s. I had been watching developments in PCIe-based SSDs very closely, but the prices were always artificially high because the market for these devices was the data center. Proof positive of this is that Fusion-io catered mostly to two big purchasers of its product, Facebook and Apple. Consequently its prices always sat at the enterprise level, around $15K for one PCIe slot device (at any size/density of storage).
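To put the AnandTech numbers in context, here's a rough sketch of where the ~515MB/s and ~780MB/s figures come from. The line rates and 8b/10b encoding overhead are published specs; the protocol-efficiency factors are the real-world figures AnandTech measured, not constants of the interfaces:

```python
# Rough effective-throughput estimates for SATA 6Gb/s vs. a PCIe 2.0 x2 link.
# Line rates and 8b/10b encoding are published specs; the efficiency factors
# are the real-world figures AnandTech reports, not guaranteed constants.

def effective_mb_s(line_rate_gbps, lanes, encoding_efficiency, protocol_efficiency):
    """Convert a raw line rate into an estimated usable MB/s."""
    raw_mb_s = line_rate_gbps * 1000 / 8 * lanes   # Gb/s -> MB/s across all lanes
    return raw_mb_s * encoding_efficiency * protocol_efficiency

# SATA 6Gb/s: one 6 Gb/s lane, 8b/10b (80%), ~86% protocol efficiency observed
sata = effective_mb_s(6.0, 1, 0.8, 0.86)

# PCIe 2.0 x2: 5 GT/s per lane, two lanes, 8b/10b (80%), ~78% observed efficiency
pcie2_x2 = effective_mb_s(5.0, 2, 0.8, 0.78)

print(f"SATA 6Gb/s   ~ {sata:.0f} MB/s")      # ~515 MB/s
print(f"PCIe 2.0 x2  ~ {pcie2_x2:.0f} MB/s")  # ~780 MB/s
```

The same arithmetic is why PCIe 3.0 (8 GT/s per lane with the more efficient 128b/130b encoding) roughly doubles what a 2.0 link of the same width can deliver.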

Apple has come to the rescue in every sense of the word by adopting PCIe SSDs as the base-level storage in its portable computers. Starting in the summer of 2013 Apple released MacBook Pro laptops with PCIe SSDs and eventually designed them into the MacBook Air as well. The last step was to fully adopt them in the desktop Mac Pro (which has been slow to hit the market). The performance of the PCIe SSD in the Mac Pro is the highest of any shipping consumer-level computer. As the Mac gains market share among all computers being shipped, Mac buyers are gaining more speed from their SSDs as well.

So what further plans are in the works for the rest of the industry? SATA Express seems to be the way forward for the 90% of the market still buying Windows PCs, and it's a new standard being put forth by the SATA-IO standards body. With any luck the enthusiast motherboard manufacturers will adopt it as fast as it passes through the committees, and we'll see an AnandTech or Tom's Hardware review doing a real benchmark and analysis of how well it matches up against the previous generation of hardware.


Written by Eric Likness

March 20, 2014 at 3:00 pm

SanDisk Crams 128GB on microSD Card: A World First

A 512 MB Kingston microSD card next to a Patriot SD adapter (left) and miniSD adapter (middle). (Photo credit: Wikipedia)

This week during Mobile World Congress 2014, SanDisk introduced the world’s highest capacity microSDXC memory card, weighing a hefty 128 GB. That’s a huge leap in storage compared to the 128 MB microSD card launched 10 years ago.

via SanDisk Crams 128GB on microSD Card: A World First.

It's amazing to think how small the form factor has gotten and how large the storage capacity has grown with microSD-format memory cards. I remember the introduction of SDXC and the jump from 32GB to 64GB full-size SD cards. It didn't take long after that before SDXC shrunk down to the microSD format. Given the size, and the option to expand the memory on certain devices (Apple is noticeably absent from this group), a card this big will allow a much longer timeline for storing pictures, music and video on our handheld devices. Prior to this, you would have needed a much larger M.2 or mSATA storage card to achieve this level of capacity, and a tablet or a netbook to plug those larger cards into.

Now you can have 128GB at your disposal just by dropping $200 at Amazon. Once you've installed it in your Samsung Galaxy you've got what amounts to an upgrade to a much more expensive phone (especially if that phone were an iPhone). I also think an SDXC microSD card would lend itself to moving a large amount of data in a device like one of these hollowed-out nickels: http://www.amazon.com/2gb-MicroSD-Bundle-Mint-Nickel/dp/B0036VLT28

My interest in this would be taking a cell phone overseas and going through U.S. Customs and Immigration, where it's been shown in the past that they will hold onto devices for further screening. If I knew I could keep 128GB of storage hidden in a metal coin that passed through the baggage X-ray without issue, I would feel a greater sense of security. A card this size is practically as big as the current hard drive on my home computer and work laptops. It's really a fundamental change in the portability of a large quantity of personal data outside the series of tubes called the Interwebs. Knowing that stash could be kept away from prying eyes or the casual security of hosting providers would certainly give me more peace of mind.


Written by Eric Likness

March 10, 2014 at 3:00 pm

Posted in computers, flash memory, mobile, SSD


AMD Launches First ARM-based Server CPU | EE Times

Image representing AMD (via CrunchBase)

In addition, AMD is planning to contribute to the Open Compute Project with a new micro-server design that utilizes the Opteron A-series, along with other architecture specifications for motherboards that Facebook helped develop called "Group Hug," an agnostic server board design that can support traditional x86 processors, as well as ARM chips.

via AMD Launches First ARM-based Server CPU | EE Times.

Kudos to Facebook, which continues to support the Open Compute Project it spearheaded some years back to encourage more widespread expertise and knowledge of large-scale data centers. The new charge is to allow a pick-and-choose, best-of-breed kind of design whereby a CPU is not a fixed quantity but can be chosen or changed like a hard drive or RAM module, with the motherboard firmware remaining more or less consistent regardless of the CPU chosen. This would allow mass customization based solely on the best CPU for a given job (HTTP, DNS, compute, storage, etc.). Spare capacity might be allowed to erode a little, so that a general-purpose CPU could be scheduled more aggressively while some of its former, less efficient services migrate to more specialized mobile-class CPUs on another cluster, each CPU handling the set of protocols and services it inherently does best. This flies further in the face of always choosing general-purpose CPUs and letting software do most of the heavy lifting once the programming is completed.


Written by Eric Likness

March 6, 2014 at 3:00 pm

10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times

OpenCL logo (Photo credit: Wikipedia)

OpenCL is a breakthrough precisely because it enables developers to accelerate the real-time execution of their algorithms quickly and easily — particularly those that lend themselves to the considerable parallel processing capabilities of FPGAs (which yield superior compute densities and far better performance/Watt than CPU- and GPU-based solutions)

via 10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times.

There's still a lot of untapped energy available in the OpenCL programming tools. Apple is still the single largest manufacturer to have adopted OpenCL across a large number of its products (OS and application software). And I know from reading about supercomputing on GPUs that some large-scale hybrid CPU/GPU computers have ranked worldwide (the Chinese Tianhe being the first and biggest example). This article from EE Times encourages anyone with a background in C programming to give it a shot and see what algorithms could be accelerated using the resources on the motherboard alone. But being EE Times, they are also touting the benefits of adding FPGAs into the mix.

To date the low-hanging fruit for desktop PC makers and their peripheral designers and manufacturers has been to reuse the GPU as a massively parallel co-processor where it makes sense. But as the EE Times writer emphasizes, FPGAs can be equal citizens too and might provide even more flexible acceleration. Interest in the FPGA as a co-processor, from desktops up to enterprise data center motherboards, was brought to the fore by AMD back in 2006 with the Torrenza CPU socket. The hope back then was that a socket for a secondary specialty processor (at the time an FPGA) might prove to be a market no one had addressed up to that point. So depending on your needs and what extra processors are available on your motherboard, OpenCL might be generic enough going forward to get a boost from all the available co-processors.

Whether or not we see benefits at the consumer desktop level depends heavily on OS-level support for OpenCL. To date the biggest adopter has been Apple, which needed an OS-level acceleration API for video-intensive apps like editing suites; eventually Adobe recompiled some of its Creative Suite apps to take advantage of OpenCL on Mac OS. On the PC side Microsoft has always had DirectX as its API for accelerating any number of multimedia apps (for playback and editing) and is less motivated to incorporate OpenCL at the OS level. But that's not to say a third-party developer who saw a benefit to OpenCL over DirectX couldn't create their own plumbing and libraries and ship a runtime package that used OpenCL to support their apps, or license it to anyone who wanted it as part of a larger installer (say for a game or a multimedia authoring suite).

For the data center this makes far more sense than for the desktop, as DirectX isn't seen as a scientific computing API or a means of using a GPU as a numeric accelerator for scientific calculations. In this context, OpenCL might be a nice, open, easy-to-adopt library for people working on compute farms with massive numbers of general-purpose CPUs and GPUs handing off parts of a calculation to one another over the PCI bus or across CPU sockets on a motherboard. Everyone's needs are going to vary, widely in some cases, but OpenCL might make that variation easier to address by providing a common library that can touch all the available co-processors when a computation needs to be sped up. So keep an eye on OpenCL as a competitor to any GPGPU-style API and library put out by Nvidia, AMD or Intel; it might help people bridge the differences between those manufacturers too.
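To give a concrete flavor of that "common library" idea, here's a minimal vector-add sketch using the pyopencl bindings. The same kernel source runs unchanged on whichever OpenCL device the context picks up (CPU, GPU, or an FPGA with a vendor toolchain), which is exactly the portability argument above; the array sizes and device selection here are arbitrary:

```python
# Minimal OpenCL vector add via pyopencl -- the same kernel source can target
# a CPU, GPU, or FPGA device, depending on the platforms/drivers installed.
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()          # picks an available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

kernel_src = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
"""
program = cl.Program(ctx, kernel_src).build()
program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)       # sanity check against the CPU result
```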



Written by Eric Likness

March 3, 2014 at 3:00 pm

Posted in computers, gpu, fpga


Seagate's LaCie touts a 25TB (not a typo) box o' disks for your DESK • The Register

A dismantled Seagate ST-225 hard disk: a 5¼″ MFM drive with a stepper actuator (21.4 MB capacity, 3,600 rpm, 65 ms average seek time, 4 heads). (Photo credit: Wikipedia)

Seagate subsidiary LaCie has launched a set of external storage boxes using a 5TB Seagate hard drive – even though disk maker Seagate hasn’t officially launched a 5TB part.

via Seagate's LaCie touts a 25TB (not a typo) box o' disks for your DESK • The Register.

There isn't a whole lot of activity in the way of new designs and advances in spinning magnetic hard drives these days. The capacity wars have plateaued around 4TB or so. The next big threshold to cross is either shingled recording or HAMR (which uses a laser to heat the surface just prior to a write being committed to the disk). Due to the technical advances required, and a slightly smaller field of manufacturers (there aren't as many as there were a while ago), the speed at which higher-density devices hit the market has slowed. We saw 1TB and 2TB drives show up quickly, one after the other, but the 3TB and 4TB drives followed more slowly, and usually they were priced at the premium end of the market. Now Seagate has stitched together a 5TB drive and LaCie is rushing it into a number of its own desktop and prosumer products.

The assumption for now is that Seagate has adopted shingled recording (which overlaps written tracks to increase density). We'll see how well that design decision performs over the coming months as the early adopters and fanbois who need every last terabyte of storage for their game ROMs, warez and film/music collections put it through its paces.


Written by Eric Likness

February 27, 2014 at 3:00 pm

Group Forms to Drive NVDIMM Adoption | EE Times

Flash memory (Photo credit: Wikipedia)

As NAND flash is supplemented over the next few years by new technologies with improved durability and the same performance as system memory, “we’ll be able to start thinking about building systems where memory and storage are combined into one entity,” he said. “This is the megachange to computer architecture that SNIA is looking at now and preparing the industry for when these new technologies happen.”

via Group Forms to Drive NVDIMM Adoption | EE Times.

More good news on the ULLtraDIMM, non-volatile DIMM front: a group is forming to begin setting standards for the new form factor. To date SanDisk is the only company known to have architected and manufactured a shipping non-volatile DIMM product, and then only under contract to IBM for the X6 Intel-based server line. SanDisk is not shipping this, or under contract to make it, for anyone else by all reports, but that's not keeping its competitors from getting new products into heavy sampling and QA testing. We might begin seeing a rush of different products, with varying interconnects and form factors, all claiming to plug into a typical DRAM DIMM slot on an Intel-based motherboard. But as the earlier article on the IBM ULLtraDIMM indicated, this isn't a simple 1:1 swap of DIMMs for ULLtraDIMMs. You need heavy lifting and revisions at the firmware/BIOS level to take advantage of the ULLtraDIMMs populating the DIMM slots on the motherboard. This is not easy, nor is it cheap, and as far as OS support goes, you may need to see whether your OS of choice will also speed the plow by caching, loading and storing memory differently once it becomes "aware" of the ULLtraDIMMs on the motherboard.

Without the OS and firmware support you would be wasting your valuable money and time trying to get a real boost by dropping off-the-shelf ULLtraDIMMs into randomly chosen Intel-based servers. IBM's X6 line is just hitting the market and has been sampled by some heavy-hitting real-time financial trading data centers to double-check the claims made about speed and performance. IBM has used this period to make sure the product makes a difference worth whatever premium it plans to charge for the ULLtraDIMM on customized X6 orders. But knowing a group is at least attempting to organize and set standards means this can become a competitive market for a new memory form factor, and everyone may eventually be able to buy something like an ULLtraDIMM for their data center server farm. It's too early to tell where this will lead, but re-using the JEDEC DIMM connector is a good start. If Intel wanted to help accelerate this, its onboard memory controllers could become less DRAM-specific and more generalized, handling anything plugged into the DIMM slots on the motherboard. That might prove the final step in really opening the market to a wave of NVDIMM designers and manufacturers. Keep an eye on Intel's chipset architecture, and more specifically its memory controller road maps, for future support of NVDIMM or similar technologies.
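As a sketch of what that OS "awareness" might eventually buy an application, here's roughly what byte-addressable, memory-mapped access to a non-volatile DIMM could look like. The /dev/pmem0 device path and the direct mmap model are assumptions about how such OS support might be exposed, not anything SanDisk or IBM have published:

```python
# Sketch: what "OS-aware" use of a non-volatile DIMM might look like from an
# application's point of view -- mapping a persistent, byte-addressable region
# and writing records into it directly, with no block-device I/O in between.
# The device path below is hypothetical; real platforms expose this differently.
import mmap
import os
import struct

PMEM_PATH = "/dev/pmem0"        # hypothetical persistent-memory device node
REGION_SIZE = 4096              # map one page for the sketch

fd = os.open(PMEM_PATH, os.O_RDWR)
try:
    region = mmap.mmap(fd, REGION_SIZE)

    # Store a record directly at a byte offset; on real hardware a cache flush
    # and fence would be needed to guarantee durability of the write.
    record = struct.pack("<QQ", 42, 1234567890)   # (key, timestamp)
    region[0:len(record)] = record
    region.flush()                                # msync the mapped page

    key, ts = struct.unpack_from("<QQ", region, 0)
    print(f"read back key={key} ts={ts}")
    region.close()
finally:
    os.close(fd)
```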

 


Written by Eric Likness

February 20, 2014 at 3:00 pm