Carpet Bomberz Inc.

Scouring the technology news sites every day

Archive for the ‘computers’ Category

Cavium Thunder Rattles Xeon | EE Times

Cavium Booth (Photo credit: Interop Events)

Cavium will try to drive ARM SoCs into mainstream servers, challenging Intel’s Xeon x86 with a family of 28 nm devices using up to 48 2.5 GHz custom 64-bit ARM cores

via Cavium Thunder Rattles Xeon | EE Times.

Another entry into the massively multi-core, low-power server race. Since the fading of competitors like Calxeda and SeaMicro, there haven't been many announcements or shipping products promising to be the low-power vendor of choice. Each time an inventor or entrepreneur stepped up with a lower-power or higher-core-count device, Intel would blunt the advantage by running a benchmark and claiming that shutting cores off saves more power than using an inherently low-power design. The race today, as defined by Intel, is the race to sleep, and that's the benchmark by which it measures its own progress in the low-power, massively multi-core CPU market. Now, however, Cavium is stepping up with an ARM-based CPU with 48 cores. So let's find out what we can about this new chip from this EE Times article.

It appears the manufacturing partner for this new product is Gigabyte, which is creating a 2-socket motherboard for the 48-core ARM-based CPU. The 48-core CPU is ARMv8-based and addresses 64 bits, so large amounts of RAM can be used with this architecture (a failing of past products from previous manufacturers attempting ARM-based servers). Cavium already has MIPS-based network processors in the market, and this new ARM-based architecture tries to leverage much of that network-processor expertise. Architecturally the motherboard interfaces and protocols are still in place, with the CPU swap being the most noticeable difference. Today Cavium is known primarily as a network processor manufacturer, but this move could push it into large-scale cloud data applications, with a tight binding to network operations supplied by its existing network processor products. Dates are still a little hazy, with the end of the calendar year being the most likely time a product will have been developed, tested, manufactured and shipped.
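Just to put numbers on why the 64-bit jump matters (my own back-of-the-envelope arithmetic, not figures from the article): a 32-bit address space tops out at 4 GiB, while the 48-bit virtual addressing typical of ARMv8 parts is effectively unlimited for server-class DRAM. A quick sketch:

```python
# Back-of-the-envelope look at why 64-bit addressing matters for servers.
# A 32-bit address space tops out at 4 GiB, which is part of why earlier
# 32-bit ARM server attempts struggled; ARMv8 parts typically expose
# 48-bit virtual addressing, which is ample for today's DRAM densities.

GIB = 2**30

addr_32bit = 2**32   # bytes addressable with 32-bit addresses
addr_48bit = 2**48   # typical ARMv8 virtual address width

print(f"32-bit limit: {addr_32bit / GIB:.0f} GiB")         # 4 GiB
print(f"48-bit limit: {addr_48bit / GIB / 1024:.0f} TiB")  # 256 TiB
```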

I'm so happy to see the pressure being kept up in this one niche of computing. I still think ARM-based CPUs with massive numbers of cores are a new growth area. Similarly, the move to 64 bits takes away one of the last impediments buyers pointed out when companies like Calxeda tried to market their wares into the data center. Bit by bit, each attempt by each startup and design outfit gets a little closer to a competitive product that might yet go up against the mighty Intel Xeon multi-core CPU.

Written by Eric Likness

June 16, 2014 at 3:00 pm

Posted in cloud, computers

Testing, Testing: How Google And Amazon Can Help Make Websites Rock Solid – ReadWrite

Diagram showing an overview of cloud computing

It’s not unprecedented: Google already offers a testing suite for Android apps, though that’s focused on making sure they run well on smartphones and tablets, not testing the cloud-based services they connect to. If Google added testing services for the websites and services those apps connect to, it would have an end-to-end lock on developing for both the Web and mobile.

via Testing, Testing: How Google And Amazon Can Help Make Websites Rock Solid – ReadWrite.

Load testing websites and web apps is a market whose time has come. Where I work, our project group has a guy who manages an installation of Silk as a load-testing tool. Behind that is a little farm of old Latitude E6400s that he manages from the Silk console and points at whichever app is in development/QA/testing before it goes into production. Knowing there's potential for a cloud-based tool for this makes me very, very interested.

As outsourcing goes, the Software as a Service (SaaS), Platform as a Service (PaaS) and even Infrastructure as a Service (IaaS) categories are great as raw materials. But if there were just an app I could log in to, spin up some VMs, install my load-test tool of choice and then manage them from my desktop, I would feel like I had accomplished something. Failing that, even a toolkit for load testing with whatever tool du jour is already available (nothing is perfect that way) would be cool too. Better yet, if I could do that with an updated tool whenever I needed to conduct a round of testing, the tool would account for things like the Heartbleed bug in a timely fashion. That's the kind of benefit a cloud-based, centrally managed, centrally updated load-test service could provide.
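For the sake of illustration, here's a minimal sketch of the kind of load test you could run from a few spun-up VMs using nothing but the Python standard library. The target URL, worker count and request count are placeholder assumptions, not any real tool's configuration:

```python
# Minimal HTTP load-test sketch using only the Python standard library.
# TARGET_URL, WORKERS and REQUESTS are placeholders -- point it at a
# test/QA host you own, never a production site you don't control.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://qa.example.com/healthcheck"  # hypothetical endpoint
WORKERS = 20
REQUESTS = 200

def hit(url: str) -> float:
    """Issue one GET request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def main() -> None:
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = list(pool.map(hit, [TARGET_URL] * REQUESTS))
    latencies.sort()
    print(f"requests: {len(latencies)}")
    print(f"median latency: {latencies[len(latencies) // 2] * 1000:.1f} ms")
    print(f"p95 latency:    {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")

if __name__ == "__main__":
    main()
```

A cloud-hosted version of the same idea would just run this from many VMs at once and roll the latency numbers up centrally, which is exactly the gap a managed load-test service would fill.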

And now that Microsoft has announced a partnership with Salesforce on its Azure cloud platform, things get even more interesting. Not only could you develop using an existing toolkit like Salesforce.com, but you could host it on more than one cloud platform (AWS or Azure) as your needs change. And I would hope this would include unit testing, load testing and the whole suite of security auditing one would expect for a web app (thereby helping prevent vulnerabilities like the Heartbleed OpenSSL bug).

Written by Eric Likness

June 2, 2014 at 3:00 pm

Posted in cloud, google, support

Microsoft Office applications barely used by many employees, new study shows – Techworld.com

The Microsoft Office Core Applications (Photo credit: Wikipedia)

After stripping out unnecessary Office licenses, organisations were left with a hybrid environment, part cloud, part desktop Office.

via Microsoft Office applications barely used by many employees, new study shows – Techworld.com.

The Center IT outfit I work for is dumping as much on-premises Exchange mailbox hosting as it can. However, we are sticking with Outlook 365 as provisioned by Microsoft (essentially an Outlook'd version of Hotmail). It has the calendar and global address list we have all come to rely on. But as this article details for the rest of the Office suite, people aren't creating as many documents as they once did. We're viewing them, yes, but we just aren't creating them.

I wonder how much of this is due in part to re-use, or to authoring duties shifting up to higher-level people. Your average admin assistant or secretary doesn't draft anything dictated to them anymore, and the top-level types would now generally be embarrassed to dictate anything to anyone. Plus, the culture of secrecy necessitates more one-to-one communication. And long-form writing? Who does that anymore? No one writes letters; they write brief emails, or even briefer texts, Tweets or Facebook updates. Everything is abbreviated to such a degree that you don't need a thesaurus, pagination, or any of the super-specialized doo-dads and add-ons we all begged M$ and Novell to add to their premier word processors back in the day.

From an evolutionary standpoint, we could get by with the original text editors first made available on timesharing systems; I'm thinking of utilities like line editors (that's really a step backwards, so I'm being facetious here). The point is that we've gone through a very advanced stage in the evolution of our writing tool of choice, and it became a monopoly. WordPerfect lost out and fell by the wayside. Primary, secondary and middle schools across the U.S. adopted M$ Word and made it a requirement. Every college freshman has been given discounts to further loyalty to the Office suite. Now we don't write like we used to, much less read. What's the use of writing something so many pages long that no one will ever read it? We've jumped the shark of long-form writing, and so the premier app, the killer app for the desktop computer, is slowly receding behind us as we keep speeding ahead. Eventually we'll see it on the horizon, its sails the last visible part, then the crow's nest, then poof! It will disappear below the horizon line. We'll be left with our nostalgic memories of the first time we used MS Word.

Written by Eric Likness

May 19, 2014 at 3:00 pm

Posted in cloud, computers, google, wintel

DDR4 Heir-Apparent Makes Progress | EE Times

The first DDR4 memory module was manufactured by Samsung and announced in January 2011. (Photo credit: Wikipedia)

The current paradigm has become increasingly complex, said Black, and HMC is a significant shift. It uses a vertical conduit called through-silicon via (TSV) that electrically connects a stack of individual chips to combine high-performance logic with DRAM die. Essentially, the memory modules are structured like a cube instead of being placed flat on a motherboard. This allows the technology to deliver 15 times the performance of DDR3 at only 30% of the power consumption.

via DDR4 Heir-Apparent Makes Progress | EE Times.

Even though DDR4 memory modules are only just arriving in quantity, people are resistant to change. And the need for speed, whether it's SSDs stymied by SATA data throughput or systems married to DDR4 RAM modules, is still pretty constant. But many manufacturers and analysts wonder aloud, "Isn't this speed good enough?" That is true to an extent: the current OSes and chipset/motherboard manufacturers are perfectly happy cranking out product supporting the current state of the art. But no one wants to be the first to keep pushing the ball of compute speed down the field. At least this industry group is attempting to get a plan in place for the next generation of DRAM. With any luck this spec will continue to evolve and sample products will be sent 'round for everyone to review.

Given the changes and advances in storage and CPUs (PCIe SSDs and 15-core Xeons), eventually a wall will be hit in compute per watt or raw I/O. Desktops will eventually benefit from any speed increases, but it will take time; we won't see a 10% improvement with every generation of hardware. Prices will need to come down before any of the mainstream consumer goods manufacturers adopt these technologies. But as previous articles have stated, the "time to idle" measurement (which laptops and mobile devices strive to improve) might be reason enough for tablet or laptop manufacturers to push the state of the art and adopt these technologies faster than desktops.
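The EE Times quote above gives enough to work out the claimed performance-per-watt gain: 15x DDR3 performance at 30% of the power comes out to roughly 50x the performance per watt. A quick sketch of that arithmetic (mine, not a figure from the article):

```python
# Rough performance-per-watt arithmetic from the figures quoted above:
# HMC is claimed to deliver 15x DDR3 performance at 30% of the power.
ddr3_perf, ddr3_power = 1.0, 1.0    # DDR3 baseline, normalized
hmc_perf, hmc_power = 15.0, 0.30    # claimed HMC figures

perf_per_watt_gain = (hmc_perf / hmc_power) / (ddr3_perf / ddr3_power)
print(f"Performance per watt vs. DDR3: {perf_per_watt_gain:.0f}x")  # ~50x
```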

Written by Eric Likness

March 27, 2014 at 3:00 pm

AnandTech | Testing SATA Express And Why We Need Faster SSDs

PCIe and PCI slots compared (Photo credit: Wikipedia)

Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn’t 100% efficient and based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link, but remember that SATA 6Gbps isn’t 100% either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based but we should start to see some PCIe 3.0 drives next year. We don’t have efficiency numbers for 3.0 yet but I would expect to see nearly twice the bandwidth of 2.0, making +1GB/s a norm.

via AnandTech | Testing SATA Express And Why We Need Faster SSDs.

As I've watched the SSD market slowly grow and bloom, it does seem as though the rate at which big changes occur has slowed. The SATA controllers on the drives themselves were kicked up a notch as the transition to SATA 6Gbps gave us consistent ~500MB/sec read/write speeds, and things have stayed there ever since due to the inherent limit of that interface. I had been watching developments in PCIe-based SSDs very closely, but prices were always artificially high because the market for those devices was the data center. Proof positive of this is that Fusion-io catered mostly to two big purchasers of its product, Facebook and Apple; consequently its prices always sat at the enterprise level, around $15K for one PCIe slot device (at any size/density of storage).
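The AnandTech numbers quoted above are easy to reproduce: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, or 500 MB/s raw per lane, so an x2 link carries 1000 MB/s raw; at the article's measured ~78% efficiency that lands at roughly 780 MB/s, against the ~515 MB/s practical ceiling it cites for SATA 6Gbps. A quick sketch of that arithmetic:

```python
# Reproducing the throughput comparison from the quoted AnandTech figures.
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> 500 MB/s raw per lane.
pcie2_raw_per_lane = 5e9 * (8 / 10) / 8 / 1e6   # MB/s per lane = 500
lanes = 2
pcie_efficiency = 0.78                          # AnandTech's measured figure

sata3_raw = 6e9 * (8 / 10) / 8 / 1e6            # SATA 6Gbps also uses 8b/10b -> 600 MB/s
sata3_practical = 515                           # typical real-world ceiling per the article

pcie_practical = pcie2_raw_per_lane * lanes * pcie_efficiency
print(f"PCIe 2.0 x2: raw {pcie2_raw_per_lane * lanes:.0f} MB/s, practical ~{pcie_practical:.0f} MB/s")
print(f"SATA 6Gbps:  raw {sata3_raw:.0f} MB/s, practical ~{sata3_practical} MB/s")
print(f"Practical ratio: ~{pcie_practical / sata3_practical:.1f}x")  # ~1.5x by this arithmetic
```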

Apple has come to the rescue, in every sense of the word, by adopting PCIe SSDs as the base-level SSD for its portable computers. In the summer of 2013 Apple started releasing MacBook Pro laptops with PCIe SSDs, then eventually designed them into the MacBook Air as well. The last step was to fully adopt them in the desktop Mac Pro (which has been slow to hit the market). The performance of the PCIe SSD in the Mac Pro is the highest of any shipping consumer-level computer. As the Mac gains market share among all computers being shipped, Mac buyers are gaining more speed from their SSDs as well.

So what further plans are in the works for the rest of the industry? SATA Express seems to be the way forward for the 90% of the market still buying Windows PCs, and it's a new standard being put forth by the SATA-IO standards body. With any luck the enthusiast motherboard manufacturers will adopt it as fast as it passes through the committees, and we'll see an AnandTech or Tom's Hardware review doing a real benchmark and analysis of how well it matches up against the previous generation of hardware.

Written by Eric Likness

March 20, 2014 at 3:00 pm

SanDisk Crams 128GB on microSD Card: A World First

A 512 MB Kingston microSD card next to a Patriot SD adapter (left) and miniSD adapter (middle). (Photo credit: Wikipedia)

This week during Mobile World Congress 2014, SanDisk introduced the world’s highest capacity microSDXC memory card, weighing a hefty 128 GB. That’s a huge leap in storage compared to the 128 MB microSD card launched 10 years ago.

via SanDisk Crams 128GB on microSD Card: A World First.

It's amazing to think how small the form factor and how large the storage capacity of microSD memory cards have become. I remember the introduction of SDXC and the jump from 32GB to 64GB full-size SD cards; it didn't take long after that before the SDXC format shrunk down to microSD size. Given the card's size and the options to expand memory on certain devices (notably, Apple is absent from this group), this capacity allows a much longer timeline for storing pictures, music and video on our handheld devices. Prior to this, you would have needed a much larger M.2 or mSATA storage card to achieve this level of capacity, and a tablet or a netbook to plug those larger cards into.
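Putting the quoted figures side by side, going from 128 MB ten years ago to 128 GB today is roughly a 1000x increase, which works out to capacity doubling about once a year. A quick sketch of that arithmetic (my own, based only on the two figures in the quote):

```python
# Capacity growth implied by the quoted figures: 128 MB (2004) -> 128 GB (2014).
import math

start_gb = 0.128   # 128 MB expressed in GB
end_gb = 128.0     # the new SanDisk card
years = 10

growth = end_gb / start_gb                        # ~1000x
doublings_per_year = math.log2(growth) / years    # ~10 doublings over 10 years

print(f"Total growth: {growth:.0f}x")                   # 1000x
print(f"Doublings per year: {doublings_per_year:.2f}")  # ~1.0, i.e. capacity doubled yearly
```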

Now you can have 128GB at your disposal just by dropping $200 at Amazon. Once you've installed it in your Samsung Galaxy, you've got what amounts to a complete upgrade to a much more expensive phone (especially if the comparison is an iPhone). I also think an SDXC microSD card would lend itself to moving a large amount of data in a device like one of these hollowed-out nickels: http://www.amazon.com/2gb-MicroSD-Bundle-Mint-Nickel/dp/B0036VLT28

My interest in this would be taking a cell phone overseas and going through U.S. Customs and Immigration, where it's been shown in the past that they will hold onto devices for further screening. If I knew I could keep 128GB of storage hidden in a metal coin that passed through the baggage X-ray without issue, I would feel a greater sense of security. A card this size is practically as big as the hard drives in my home computer and work laptops. It's really a fundamental change in the portability of a large quantity of personal data outside the series of tubes called the Interwebs. Knowing that stash could be kept away from prying eyes or the casual security of hosting providers would certainly give me more peace of mind.

Written by Eric Likness

March 10, 2014 at 3:00 pm

Posted in computers, flash memory, mobile, SSD

AMD Launches First ARM-based Server CPU | EE Times

Image representing AMD as depicted in CrunchBase (Image via CrunchBase)

In addition, AMD is planning to contribute to the Open Compute Project with a new micro-server design that utilizes the Opteron A-series, along with other architecture specifications for motherboards that Facebook helped develop called "Group Hug," an agnostic server board design that can support traditional x86 processors, as well as ARM chips.

via AMD Launches First ARM-based Server CPU | EE Times.

Kudos to Facebook, which continues to support the Open Compute Project it spearheaded some years back to encourage more widespread expertise and knowledge of large-scale data centers. The new goal is a pick-and-choose, best-of-breed kind of design in which the CPU is not a fixed quantity but can be chosen or changed like a hard drive or RAM module, with the motherboard firmware remaining more or less consistent regardless of the CPU chosen. This would allow mass customization based solely on the best CPU for a given job (HTTP, DNS, compute, storage, etc.). Spare capacity might be allowed to erode a little, so that a general-purpose CPU could be scheduled somewhat more aggressively while some of its former, less efficient services are migrated to more specialized mobile-derived CPUs on another cluster, each CPU handling the set of protocols and services it inherently does best. This flies further in the face of always choosing general-purpose CPUs and letting software do most of the heavy lifting once the programming is completed.
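To make that idea concrete, here's a toy sketch of the placement logic: route each service class to whichever processor family is assumed to handle it best on a CPU-agnostic board. The workload categories and CPU labels below are purely illustrative assumptions on my part, not anything from AMD or the Open Compute Project:

```python
# Toy illustration of the "best CPU for a given job" idea: route each service
# class to the processor family assumed to handle it most efficiently.
# The mappings below are illustrative assumptions, not anything from the article.
WORKLOAD_TO_CPU = {
    "http":    "arm-many-core",    # lots of light, parallel connections
    "dns":     "arm-many-core",
    "compute": "x86-xeon",         # heavy single-thread / floating-point work
    "storage": "arm-soc-with-io",  # integrated NICs and storage controllers
}

def place_service(service: str) -> str:
    """Return the CPU class a service should land on, defaulting to general x86."""
    return WORKLOAD_TO_CPU.get(service, "x86-xeon")

for svc in ("http", "dns", "compute", "storage", "cache"):
    print(f"{svc:8s} -> {place_service(svc)}")
```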

Written by Eric Likness

March 6, 2014 at 3:00 pm