It’s not often that you see something that makes you think “this is a game changer.” The introduction of logic synthesis circa 1990 was one such event; today’s introduction of SDNet from Xilinx may well be another.
Cisco has used different RISC chips over the years as its network processors, both in its network-closet switches and its core router chassis. The first generation was based on the venerable MIPS processor; Cisco subsequently migrated to PowerPC, both for reduced power consumption and for network-optimized CPUs. Cisco’s engineers would accommodate changes in function by releasing new versions of IOS, or by releasing new line cards for the big multi-slot router chassis. Between software and hardware releases they covered the whole spectrum of wired, wireless, and optical networking. It was a rich mix of what could be done.
Enter now the possibility of not just Software Defined Networking (kind of like using virtual machines instead of physical switches), but software-defined firmware and hardware. FPGAs (field-programmable gate arrays) are the computing world’s reconfigurable processors. So instead of provisioning a fixed network processor and virtualizing on top of it to gain the software-defined network, what if you could work the problem from both ends and reconfigure the software AND the network processor? That’s what Xilinx is proposing with this announcement of SDNet. The prime example given in the announcement is the line card that slots into a large router chassis (some Cisco gear comes with 13 slots). If you had just a bunch of ports, say RJ-45 facing outward, what happens on the inside via that software/hardware reconfigurability would astound you. You want Fibre Channel over Ethernet? You want 10Gbit? You want SIP traffic only? You don’t buy a line card per application whose function is set in stone. You tell the SDNet compiler these are the inputs, these are the outputs, please optimize the functions and reconfigure the firmware as needed.
Once programmed, that line card does what you tell it to do. It can inspect packets, act as a firewall, prioritize traffic, shape bandwidth, or simply route things as fast as it possibly can. It doesn’t matter what signals are running over what pins; as long as the card knows they’re RJ-45 connectors, it will do the rest. Amazing when you think about it that way.
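To make the parse-and-classify idea concrete, here is a toy software sketch of the kind of function a reconfigurable line card performs in hardware at line rate. This is not SDNet’s actual specification language, just an illustration in Python of classifying an Ethernet frame by its EtherType field (the EtherType values themselves are the real IEEE assignments):

```python
import struct

# Real IEEE EtherType assignments
ETH_IPV4 = 0x0800
ETH_VLAN = 0x8100  # 802.1Q tag
ETH_FCOE = 0x8906  # Fibre Channel over Ethernet

def classify_frame(frame: bytes) -> str:
    """Classify an Ethernet frame by the EtherType field.

    A line card's parse/classify stage does this same field
    extraction, only in reconfigurable logic at wire speed.
    """
    if len(frame) < 14:
        return "runt"  # shorter than a minimal Ethernet header
    # EtherType sits at bytes 12-13, big-endian, after the two MACs.
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype == ETH_VLAN and len(frame) >= 18:
        # Skip the 4-byte 802.1Q tag and read the inner EtherType.
        (ethertype,) = struct.unpack_from("!H", frame, 16)
    return {ETH_IPV4: "ipv4", ETH_FCOE: "fcoe"}.get(ethertype, "other")

# Minimal frame: dst MAC + src MAC + EtherType 0x8906 (FCoE)
frame = b"\x00" * 12 + struct.pack("!H", ETH_FCOE)
print(classify_frame(frame))  # prints "fcoe"
```

The point of SDNet is that a description at roughly this level of abstraction, inputs, outputs, and the classification rules, gets compiled down to FPGA firmware instead of running as software on a fixed CPU.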
Thali sounds like an amazing non-cloud-centric enabling technology, and it would be well worth the price of admission to use it. I cannot tell you how many times I remind friends, when they complain about Facebook, that they are the product, not the customer. Thali makes each person their own data-center cloud, with full rights to grant access to whichever fragment/shard of that existing “mesh” they wish, to any other individual. Big Brother will only get to watch the data go through the series of tubes; there will be no National Security Letters from the FBI demanding all the data in your account, because all the data stays where it originated: on YOUR devices, in YOUR possession. I think I’m getting the hang of this now, and I find it very appealing. Can’t wait to learn more about Thali.
Originally posted on Jon Udell:
When Groove launched somebody asked me to explain why it was an important example of peer-to-peer technology. I said that was the wrong question. What mattered was that Groove empowered people to communicate directly and securely, form ad-hoc networks with trusted family, friends, and associates, and exchange data freely within those networks. P2P, although then much in vogue — there were P2P books, P2P conferences — wasn’t Groove’s calling card, it was a means to an end.
The same holds true for Thali. Yes it’s a P2P system. But no that isn’t the point. Thali puts you in control of communication that happens within networks of trust. That’s what matters. Peer networking is just one of several enablers.
Imagine a different kind of Facebook, one where you are a customer rather than a product. You buy social networking applications, they’re not free. But when you use those apps…
The president of VMware said after seeing it (and not knowing what he was seeing), “Wow, what movie is that?” And that’s what it’s all about: suspension of disbelief. You’ve heard me talk about this before, and we’re almost there. I famously predicted at a prestigious event three years ago that by 2015 there would be no more human actors, it would be all CG. Well I may end up being 52% or better right (phew). - Jon Peddie
via Nvidia Pulls off ‘Industrial Light and Magic’-Like Tools | EE Times. Jon Peddie has covered the 3D animation, modeling, and simulation market for YEARS, and when you can get a rise out of him like the quote above from EE Times, you have accomplished something. Between Nvidia’s hardware and now its GameWorks suite of software modeling tools, you have, in a word, created Digital Cinema. Jon goes on to talk about how the digital simulation demo convinced a VMware exec it was real live actors on a set. That’s how good things are getting.
And the comparison of Nvidia’s off-the-shelf toolkits to ILM is also telling. No longer does one need computer scientists, physicists, and mathematicians on staff to help model and simulate things like particle systems and hair. It’s all there in the toolkit, along with ocean waves and smoke, ready to use. Putting these tools into the hands of users will herald a new era of less esoteric, less high-end, less exclusive access to the best algorithms and tools.
Nvidia GameWorks by itself will be useful to some people, but re-packaging it in a way that embeds it in an existing workflow will widen adoption, whether that’s for a casual user or a student in a 3D modeling and animation course at a university. The follow-on to this is getting the APIs published so that current off-the-shelf tools like AutoCAD, 3D Studio Max, Blender, Maya, etc. can tap into it. Once the favorite tools can bring up a dialog box and start adding a particle system or full ray tracing to a scene at this level of quality, things will really start to take off. The other possibility is to flesh out GameWorks into a standalone, easily adopted, brand-new package that creatives could pick up and migrate to over time. That would be another path to using GameWorks as an end-to-end digital cinema creation package.
Where you’re going to see the biggest benefits of DDR4 is in the mobile/portable device category. On the desktop there might be a slight increase in speed, but nothing like the big bumps of previous generations of DDR architectural moves. So figure on maybe 10%, depending on the chipset, the CPU, and the third-level cache on the CPU die. That combination is more likely to affect your overall system speed than just moving to DDR4 DIMMs on the motherboard.
Originally posted on Tech News for Geeks:
SK Hynix’s high-capacity DDR4 modules are based on Through Silicon Via (TSV) technology.
DDR4 capable motherboards and CPUs aren’t yet on the market, but that hasn’t stopped SK Hynix from putting some serious work into making high capacity DDR4 modules. On Tuesday the South Korean sem…
64 bits now from Qualcomm, using the ARM-based architecture. The game is afoot: Apple’s 64-bit A7 CPU is now going to compete with another 64-bit CPU.
Originally posted on Tech News for Geeks:
Qualcomm finally unveils its high-end 64-bit enabled SoCs, but they won’t be available until the first half of 2015.
Qualcomm launched its first batch of ARM v8-based SoCs last month, but this time around it is detailing its plans for the high-end segment. The Snapdragon 810 and …
Originally posted on Storage Swiss - Storage Switzerland:
It’s an accepted fact that a VDI environment can create some challenges for the IT infrastructure. Mashing hundreds of desktop workloads onto a disk array that was designed for more general-purpose applications can lead to poor or inconsistent performance. This creates another challenge: meeting user expectations.
To attend the webinar, please click below:
Performance and Expectations
When a VDI project is undertaken, it’s assumed that the user “experience”, meaning how responsive their desktop applications are, will be the same or better than with the legacy infrastructure. Unfortunately, this isn’t always the case, owing to the storage challenges that often accompany a VDI project. Making things worse, many users are getting spoiled by the performance of flash in tablets and flash boot drives in laptops.
This combination of demanding storage requirements and heightened user expectations has driven many companies to conclude that their VDI project must be supported by a…