Categories
fpga technology

Xilinx Introduces SDNet & ‘Softly’ Defined Networks | EE Times

It’s not often that you see something that makes you think “this is a game changer.” The introduction of logic synthesis circa 1990 was one such event; today’s introduction of SDNet from Xilinx may well be another.

via Xilinx Introduces SDNet & ‘Softly’ Defined Networks | EE Times.

Cisco has used different RISC chips over the years as its network processors, both in its network-closet switches and in its core router chassis. The first generation was based on the venerable MIPS processor; subsequently they migrated to PowerPC, partly for lower-power processing and partly for network-optimized CPUs. Cisco’s engineers would accommodate changes in function by releasing new versions of IOS, or by releasing new line cards for the big multi-slot router chassis. Between software and hardware releases they covered the whole spectrum of wired, wireless, and optical networking. It was a rich mix of what could be done.

Enter now the possibility of not just Software Defined Networking (a bit like using virtual machines instead of physical switches), but software-defined firmware and hardware. FPGAs (field programmable gate arrays) are the computing world’s reconfigurable processors. So instead of provisioning a fixed network processor and virtualizing on top of it to get a software defined network, what if you could work the problem from both ends and reconfigure the software AND the network processor? That’s what Xilinx is proposing with this announcement of SDNet. The prime example given in the announcement is the line card that slots into a large router chassis (some Cisco gear comes with 13 slots). If the card were just a bunch of ports, say RJ-45 facing outward, what could then happen on the inside via software/hardware reconfigurability would astound you. You want Fibre Channel over Ethernet? You want 10Gbit? You want SIP traffic only? You don’t buy a line card per application whose function is set in stone. You tell the SDNet compiler these are the inputs, these are the outputs, please optimize the functions and reconfigure the firmware as needed.

Once programmed, that line card does what you tell it to do. It can inspect packets, act as a firewall, prioritize traffic, shape bandwidth, or simply route things as fast as it can possibly go. It doesn’t matter what signals are running over which pins; as long as it knows they’re RJ-45 connectors, it will do the rest. Amazing when you think about it that way.
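
Purely as a software analogy (this is not SDNet syntax; the packet fields and traffic classes below are invented for illustration), here is a minimal sketch of the kind of classify-and-prioritize behavior you’d be describing to the compiler and pushing down into the line card:

```python
# Conceptual sketch only: a toy "classify and prioritize" rule set of the kind
# an SDNet-style description would turn into line-card firmware.
# The packet fields and priority classes here are hypothetical.
from dataclasses import dataclass

@dataclass
class Packet:
    protocol: str   # e.g. "SIP", "FCoE", "TCP"
    dst_port: int
    length: int

def classify(pkt: Packet) -> str:
    """Assign a traffic class; in hardware this would be a parallel match stage."""
    if pkt.protocol == "SIP":
        return "voice"        # lowest-latency queue
    if pkt.protocol == "FCoE":
        return "storage"      # lossless queue
    return "best_effort"

def egress_queue(pkt: Packet) -> int:
    # Map the traffic class onto an egress queue number.
    return {"voice": 0, "storage": 1, "best_effort": 2}[classify(pkt)]

if __name__ == "__main__":
    print(egress_queue(Packet("SIP", 5060, 200)))   # -> 0
    print(egress_queue(Packet("TCP", 443, 1500)))   # -> 2
```

The point isn’t the code itself; it’s that the behavior is a description you hand to a toolchain rather than a function baked permanently into the card.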

Categories
technology

Maxeler FPGA Project

Great posting by Lucas Szyrmer @ softwaretrading.co.uk; it’s a nice summary of the story from last month about JP Morgan Chase’s use of FPGAs to speed up some of their risk analysis. And it goes into greater detail concerning the mechanics of translating what one has to do in software across the divide into something that can be turned into VHDL/Verilog and written into the FPGA itself. It is, in a word, a ‘non-trivial’ task, and can take quite a long time to get working.

Software Trading


Lately, I’ve been exploring a little-known corner of high performance computing (HPC) known as FPGAs. Turns out, it’s time to get electrical on yowass (Pulp Fiction reference intentional). You can program these chips in the field, thus speeding up processing dramatically relative to generic CPUs. It’s possible to customize functionality to very specific needs.

Why this works

The main benefit of FPGAs comes from reorganizing calculations. FPGAs work on a massively parallel basis, so you get rid of the bottlenecks in typical CPU design. While that design is good for general purpose applications, like watching Pulp Fiction, it significantly slows down the number of calculations you can do per second. In addition to being massively parallel, FPGAs are also faster, according to FPGAdeveloper, because:

  • you aren’t competing with your operating system or applications like anti-virus for CPU cycle time
  • you run at a lower level than the OS, so you don’t have…

View original post 427 more words
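
To make the parallelism argument above concrete, here’s a rough software analogy (plain Python, nothing FPGA-specific; the “pricing” function is a made-up stand-in for one independent calculation): the same batch of work done one item at a time on a single core, then fanned out across workers. An FPGA takes the idea much further by turning each calculation into dedicated circuitry, so all of them advance on every clock tick.

```python
# Rough analogy only: sequential vs. fanned-out computation.
# The stand-in workload is hypothetical; an FPGA would go further and wire
# each stage into dedicated hardware, but the "many independent calculations
# at once" idea is the same.
from concurrent.futures import ProcessPoolExecutor

def price_scenario(seed: int) -> float:
    # Stand-in for one independent risk/pricing calculation.
    x = float(seed)
    for _ in range(100_000):
        x = (x * 1.000001) % 1_000_003
    return x

def sequential(n: int):
    return [price_scenario(i) for i in range(n)]

def parallel(n: int):
    with ProcessPoolExecutor() as pool:
        return list(pool.map(price_scenario, range(n)))

if __name__ == "__main__":
    assert sequential(8) == parallel(8)   # same answers, very different wall-clock
```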

Categories
computers data center fpga

Maxeler Makes Waves With Dataflow Design – Digits – WSJ

In the dataflow approach, the chip or computer is essentially tailored for a particular program, and works a bit like a factory floor.

via Maxeler Makes Waves With Dataflow Design – Digits – WSJ.

Altera Stratix IV EP4SGX230 FPGA on a PCB (image via Wikipedia)

My supercomputer can beat your supercomputer, and money is no object. FPGAs (Field Programmable Gate Arrays) are used most often in prototyping new computer processors. You can design a chip, then ‘program’ the FPGA to match the circuit design so that it can be verified. Verification is the process by which you do exhaustive tests on the logic and circuits to see if you’ve left anything out or didn’t get the timing right for the circuits that may run at different speeds within the chip itself. They are expensive niche products that chip design outfits and occasionally product manufacturers use to solve problems. Less often they might be used in data network gear to help classify and reroute packets in a data center and optimize performance over time.
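
As a toy software analogy of what that exhaustive checking looks like (not a real hardware verification flow, which would use HDL testbenches, simulators, and timing analysis; the 4-bit adder below is just an invented example), you compare the design under test against a known-good reference over every possible input:

```python
# Toy "verification" analogy: exhaustively compare an implementation
# against a reference (golden) model across all inputs.
def golden_adder(a: int, b: int) -> int:
    return (a + b) & 0xF              # reference 4-bit adder

def ripple_carry_adder(a: int, b: int) -> int:
    # Bit-by-bit implementation under test.
    result, carry = 0, 0
    for i in range(4):
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        result |= (abit ^ bbit ^ carry) << i
        carry = (abit & bbit) | (carry & (abit ^ bbit))
    return result

if __name__ == "__main__":
    for a in range(16):
        for b in range(16):
            assert ripple_carry_adder(a, b) == golden_adder(a, b)
    print("all 256 cases match")
```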

This by itself would be a pretty good roster of applications, but something near and dear to my heart is the use of FPGAs as a kind of reconfigurable processor. I am certain one day we will see FPGAs applied in desktop computers. But until then, we’ll have to settle for using FPGAs as special purpose application accelerators in high volume trading and Wall Street type data centers. This article in the WSJ is going to change a few opinions about the application of FPGAs for real computing tasks. The speedups quoted for the analyses and reports derived from the transactions span multiple orders of magnitude; in extreme examples, a fully optimized FPGA ran 1,000 times faster than a general purpose CPU.
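
The “factory floor” comparison from the article maps nicely onto a software pipeline. Here’s a minimal sketch (plain Python generators, nothing Maxeler-specific, and the stage names are made up): each stage does one small job and hands its result straight to the next. On a dataflow engine every stage is its own piece of hardware and they all run at once; the generators here only illustrate the structure.

```python
# Minimal dataflow-pipeline analogy using Python generators.
# On a real dataflow engine each stage would be dedicated hardware running
# concurrently; this only shows how data streams from stage to stage.
def parse(ticks):
    for raw in ticks:
        yield float(raw)

def scale(prices, factor=1.05):
    for p in prices:
        yield p * factor

def accumulate(prices):
    total = 0.0
    for p in prices:
        total += p
        yield total

if __name__ == "__main__":
    raw_feed = ["100.0", "101.5", "99.75"]
    pipeline = accumulate(scale(parse(raw_feed)))
    print(list(pipeline))   # running totals of the scaled prices
```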

When someone can tout 1,000X speedups everyone is going to take notice. And hopefully it won’t be simply a bunch of copycats trying to speed up their reports and management dashboards. There’s a renaissance out there waiting to happen with FPGAs and I still have hope I’ll see it in my lifetime.

Categories
cloud computers data center flash memory SSD technology

EMC’s all-flash benediction: Turbulence ahead • The Register

A flash array controller needs: “An architecture built from the ground up around SSD technology that sizes cache, bandwidth, and processing power to match the IOPS that SSDs provide while extending their endurance. It requires an architecture designed to take advantage of SSDs unique properties in a way that makes a scalable all-SSD storage solution cost-effective today.”

via EMC’s all-flash benediction: Turbulence ahead • The Register.
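
To put rough numbers on that quote (all figures below are invented for illustration, not taken from the article): the controller has to be sized for the IOPS the SSDs can actually deliver, which is orders of magnitude beyond what the same shelf of spinning disks would produce.

```python
# Back-of-the-envelope sizing with invented numbers.
ssd_count     = 24          # hypothetical drives in one shelf
iops_per_ssd  = 50_000      # hypothetical random-read IOPS per SSD
io_size_bytes = 4 * 1024    # 4 KiB I/Os

total_iops     = ssd_count * iops_per_ssd
bandwidth_gb_s = total_iops * io_size_bytes / 1e9

print(f"aggregate IOPS:       {total_iops:,}")              # 1,200,000
print(f"controller bandwidth: {bandwidth_gb_s:.1f} GB/s")   # ~4.9 GB/s
# For comparison, 24 fast spinning disks at roughly 200 IOPS each is ~4,800 IOPS,
# which is the regime legacy array controllers were built around.
```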

I think storage controllers are the point of differentiation now for the SSDs coming on the market today. Similarly, the device that ties those SSDs into the computer and its OS is equally, nay more, important. I’m thinking specifically about a product like the SandForce 2000 series SSD controllers. They more or less provide a SATA or SAS interface into a small array of flash memory chips that are made to look and act like a spinning hard drive. However, the time is coming soon when all those transitional conventions can just go away and a clean slate design can go forward. That’s why I’m such a big fan of the PCIe based flash storage products. I would love to see SandForce create a disk controller with one interface that speaks PCIe 2.0/3.0 and another that is open to whatever technology flash memory manufacturers are using today. Ideally the host bus would always be a high speed PCI Express interface which could be licensed or designed from the ground up to speed I/O in and out of the flash memory array. On the memory facing side it could be almost like an FPGA, made to order according to the features and idiosyncrasies of whatever flash memory architecture is shipping at the time of manufacture. The same would apply to any type of error correction and over-provisioning for failed memory cells as the SSD ages through multiple read/write cycles.
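
As a rough illustration of that last point about over-provisioning (a toy model, not how any real SandForce controller works; the block counts and policy are invented), the controller holds spare capacity in reserve and quietly remaps worn-out blocks onto it as the drive ages:

```python
# Toy model of over-provisioning and bad-block remapping in an SSD controller.
# Real controllers also do wear leveling, ECC, and garbage collection;
# this only shows the remapping bookkeeping.
class ToyFlashController:
    def __init__(self, visible_blocks: int, spare_blocks: int):
        # The host only ever sees `visible_blocks`; spares are held in reserve.
        self.remap = {}          # logical block -> spare block
        self.free_spares = list(range(visible_blocks,
                                      visible_blocks + spare_blocks))

    def physical_block(self, logical: int) -> int:
        return self.remap.get(logical, logical)

    def mark_failed(self, logical: int) -> int:
        """Retire a worn-out block by mapping it onto a spare."""
        if not self.free_spares:
            raise RuntimeError("out of spare blocks: drive is end-of-life")
        self.remap[logical] = self.free_spares.pop(0)
        return self.remap[logical]

if __name__ == "__main__":
    ctrl = ToyFlashController(visible_blocks=1000, spare_blocks=70)  # ~7% spare
    ctrl.mark_failed(42)
    print(ctrl.physical_block(42))   # -> 1000 (remapped onto the first spare)
    print(ctrl.physical_block(43))   # -> 43 (untouched)
```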

In the article I quoted at the top from The Register, the big storage array vendors are attempting to market new products by adding flash memory, either to one component of the whole array product or, in the case of EMC, throughout a product that uses flash memory based SSDs exclusively. That more aggressive approach has seemed overly cost prohibitive given the manufacturing cost of large capacity commodity hard drives. But the problem is, in the market where these vendors compete, everyone pays an enormous price premium for the hard drives, storage controllers, cabling and software that makes it all work. Though the hard drive might be cheaper to manufacture, the storage array is not, and that margin is what makes storage vendors a very profitable business to be in. As stated last week in the benchmark comparisons of high throughput storage arrays, flash based arrays are ‘faster’ per dollar than a well designed, engineered, top-of-the-line hard drive based storage array from IBM. So for the segment of the industry that needs throughput more than total space, EMC will likely win out. But Texas Memory Systems (TMS) is out there too, trying to sign up OEM contracts with companies selling into the storage array market. The Register does a very good job surveying the current field of vendors and manufacturers, looking at which companies might buy a smaller player like TMS. The more important trend spotted throughout the survey, though, is the decidedly strong move toward native flash memory in the storage arrays being sold into the enterprise market. EMC has a lead that most will be following real soon now.

Categories
computers science & technology surveillance technology

Intel lets outside chip maker into its fabs • The Register

 

Intel and Achronix: two great tastes that taste great together

According to Greg Martin, a spokesman for the FPGA maker, Achronix can compete with Xilinx and Altera because it has, at 1.5GHz in its current Speedster1 line, the fastest such chips on the market. And by moving to Intel’s 22nm technology, the company could have ramped up the clock speed to 3GHz.

via Intel lets outside chip maker into its fabs • The Register.

That kind of says it all in one sentence, or two sentences in this case. The fastest FPGA on the market is quite an accomplishment unto itself. Putting that FPGA on the world’s most advanced production line and silicon wafer technology is what Andy Grove would call the 10X Effect. FPGAs are reconfigurable processors that can have their circuits re-routed and optimized for different tasks over and over again. This is really beneficial for very small batches of processors where you need a custom design. Some of the things they can speed up are heavy math and lookups across a very large database. In the past I was always curious whether they could be used as a general purpose computer that could switch gears and optimize itself for different tasks. I didn’t know whether or not it would work or be worthwhile, but it really seemed like there was a vast untapped reservoir of power in the FPGA.

Some supercomputer manufacturers have started using FPGAs as special purpose co-processors and have found immense speed-ups as a result. Oil prospecting companies have also used them to speed up analysis of seismic data and place good bets on dropping a well bore in the right spot. But price has always been a big barrier to entry, as quoted in this article: $1,000 per chip. That limits the appeal to buyers for whom price is no object but speed and time are paramount. The two big competitors in the field of FPGA manufacturing are Altera and Xilinx, both of which design the chips but have them manufactured in other countries. This has left FPGAs as second class citizens, built with older generation chip technologies on old manufacturing lines. They always had to deal with what they could get, and performance in terms of clock speed was always less too.

It was not unusual during the Megahertz and Gigahertz wars to see chip speeds increasing every month. FPGAs sped up too, but not nearly as fast. I remember seeing 200MHz and 400MHz touted as Xilinx and Altera top of the line products. With Achronix running at 1.5GHz, things have changed quite a bit. That’s general purpose CPU speed in a completely customizable FPGA, which makes the FPGA even more useful. However, instead of going faster, this article points out that people would rather buy the same speed but use less electricity and generate less heat. There’s no better way to do this than to shrink the size of the circuits on the FPGA, and that is the core philosophy of Intel. They have just teamed up to put the Achronix FPGA on the smallest feature size production line, run by the most optimized, cost conscious manufacturer of silicon chips bar none.

Another point made in the article is that the market for FPGAs at this level of performance tends to be more defense contract oriented. As a result, to maintain the level of security necessary to sell chips into this industry, the chips need to be made in the good ol’ USA, and Intel doesn’t outsource anything when it comes to its top of the line production facilities. Everything is in Oregon, Arizona or Washington State and is guaranteed not to have any secret backdoors built in to funnel data to foreign governments.

I would love to see some university research projects start looking at FPGAs again to see whether, as speeds go up and power goes down, there’s a happy medium or mix of general purpose CPUs and FPGAs that might help the average joe working on his desktop, laptop or iPad. All I know is that Intel entering a market makes it more competitive, and hopefully lowers the barrier to entry for anyone who would really like to get their hands on a useful processor they can customize to their needs.

Categories
computers technology wintel

Intel Debuts New Atom System-on-Chip Processor

An Altera Flex FPGA with 20,000 cells (image via Wikipedia)

At an IDF keynote, Intel launched “Tunnel Creek,” a new Atom E600 SoC processor. One particular processor detailed is codenamed “Stellarton,” which consists of the Atom E600 processor paired with an Altera FPGA on a multi-chip package that provides additional flexibility for customers who want to incorporate proprietary I/O or acceleration.

via Intel Debuts New Atom System-on-Chip Processor.

Intel has announced a future product that pairs an Intel Atom processor with an Altera FPGA. Now this is interesting: I just mentioned FPGA (field programmable gate array) chips, and out of the blue Intel has summoned the same kind of chip and married it to a little Atom core processor. They say it could be used as an accelerator of some sort. I’m wondering what specifically they had in mind (something very esoteric and niche like a TCP/IP offload processor?). I would like to see some touting of its possible uses rather than just, “We want to see what happens.” Unfortunately, the way competition works in consumer electronics, you never tell people what’s inside. You let folks like iFixit do a teardown and put pictures up. You let industry websites research all the chips and what they cost, estimate the ones that are custom integrated circuits, and report the cost to manufacture the device. That’s what they do with every Apple iPhone these days.

It would be cool if Intel could also sell this as a development kit for Stellarton’s users. Keep the price high enough to prevent people from releasing products based just on the kit’s CPU, but low enough to get people to try out some interesting projects. I’m guessing it would be a great tool for video transcoding, mux/demuxing video streams, etc. If anyone does release a shipping product, though, it would be cool if they put a “Stellarton Inside” logo on it, so we know that FPGAs are doing the heavy lifting. The other possibility Intel mentions is using the FPGA for proprietary I/O, so possibly something like an InfiniBand network interface? I still have hopes it gets used in the consumer electronics world.