We met with Mariko’s college friend to attend an exhibit of old art. It’s the oldest known example of fanciful artwork depicting anthropomorphic animals. What’s really odd is that it was in the collection of an old Buddhist temple in Kyoto, Japan.
It’s notable both for art historians and for the pop-culture types who follow manga (printed comics) and anime (animated cartoons). Some speculate that these artworks are the historical precedent for Japan’s present-day manga. I don’t know if that’s true, but let me tell you, this exhibit was very crowded, as it closes tomorrow (Sunday, June 7th). I’ve stood in long lines while in Japan; both Tokyo DisneySea and the Skytree had long lines. But we spent the better part of a day waiting to see the vital first volume of the Choju-Giga, and it was worth it. It was very well drawn and funny: lots of frogs and rabbits wrestling or presenting gifts to an emperor figure drawn as a monkey. So here now is the visual component of today’s outing, since what most people want anyway is to see where you’ve been rather than READ about where you’ve been.
I’m traveling in Japan and wanted to take advantage of my WordPress site to document my travels. Here now is an essay on how I love NHK morning shows. They do more international stories than their US counterparts. There was a short segment on India (I think, as I’m not Japanese-proficient), but I feel I can sometimes figure out the gist of these stories. When I don’t know the story exactly, I fill in the details in my head. (Screenshot below; note we were up early today.)
Netlist and the Present
Netlist owns the patents on key parts of Sandisk’s UltraDIMM technology (licensed originally from Diablo Technologies, I believe). While Netlist has lawsuits going back and forth regarding its intellectual property, it has continued to develop products. Here now is the EXPRESSvault™ EV3 announcement. It’s a PCI RAM disk of sorts that backs up the RAM with an ultracapacitor/battery combo. If power is lost, an automated process copies the RAM to onboard flash memory for safekeeping until power is restored. This design is intended to get around the disadvantages of using flash memory as a disk, namely the wear and tear that occurs when flash is written to frequently. Less expensive flash memory degrades more the more you write to it; eventually memory cells fail altogether. By using the backing flash memory as a failsafe, you write to that flash only in the event of an emergency, thereby keeping it out of the grindstone of high levels of I/O. Note this is a very specific niche application of the technology, but it is very much the market for which Netlist has produced products in the past. This is their target market.
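The failsafe design described above can be sketched in a few lines. This is a toy model of the general capacitor-backed-RAM-disk idea, with hypothetical names, not Netlist’s actual firmware:

```python
class NVRAMCard:
    """Toy model of a capacitor-backed RAM disk: DRAM serves all normal
    I/O; flash is touched only when power is lost (hypothetical sketch,
    not Netlist's actual design)."""

    def __init__(self, size):
        self.dram = bytearray(size)   # fast working store; serves all reads/writes
        self.flash = bytearray(size)  # failsafe copy; written only on power loss
        self.flash_writes = 0         # wear counter: stays low by design

    def write(self, offset, data):
        # Normal operation touches DRAM only, preserving flash endurance.
        self.dram[offset:offset + len(data)] = data

    def read(self, offset, length):
        return bytes(self.dram[offset:offset + length])

    def on_power_loss(self):
        # The ultracapacitor/battery keeps the card alive just long enough
        # to dump DRAM to flash -- the only time flash is ever written.
        self.flash[:] = self.dram
        self.flash_writes += 1

    def on_power_restore(self):
        # Recover the saved image back into DRAM and resume normal service.
        self.dram[:] = self.flash
```

The point of the design shows up in the counter: no matter how write-heavy the workload, `flash_writes` only increments on an actual power event.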
The Future Is Lower Latencies: Enter UltraDIMM
Turn now to a recent announcement by Lenovo and its X6 server line, announcing further adoption of the UltraDIMM technology. Lenovo, at least, is carrying on trying to sell this technology of flash-based memory interspersed with DRAMs. The idea of having “tiers” of storage, with SSDs, UltraDIMMs, and DRAM all acting in concert, is the high-speed future for the data center architect. Luckily for the people purchasing these things, Netlist and Diablo’s legal wrangling began to sort itself out in spring 2015: http://www.storagereview.com/diablo_technologies_gains_ground_against_netlist_in_ulltradimm_lawsuit
With a final decision being made fairly recently: http://www.diablo-technologies.com/federal-court-completely-dissolves-injunction/
Now Diablo, Sandisk, and the UltraDIMM can compete in the marketplace once more, and provide a competitive advantage to the people willing to spend the money on the UltraDIMM product. By itself UltraDIMM makes for some very interesting future uses. More broadly, the adoption of an UltraDIMM-like technology in laptops, desktops, and tablets could bring speed improvements across the board. Whether that happens or not depends more on the economics of BIOS and motherboard manufacturers than on the merits of the UltraDIMM’s design engineering. More specifically, Lenovo, and IBM before it, had to do a lot of work on the X6 servers to support the new memory technology.

That work points to another article from the person I trust to collect all the news and information on storage worldwide, The Register’s Chris Mellor. I’ve followed his writing since about 2005 and really enjoyed his take on the burgeoning SSD market as new products were announced with faster I/O every month in the heady days of 2007 and beyond. Things have slowed down a bit now, and PCIe SSDs are still the reference standard by which I/O benchmarks are measured. Fusion-io is now owned by Sandisk, and everyone’s still enjoying the speed increases they get when buying these high-end PCIe products. But it’s important to note that for further increases to occur, just as with Sandisk’s use of UltraDIMM, you have to keep pushing the boundaries. And that’s where Chris’s most recent article comes in.
Memory Meshes, Present and Future
Chris discusses how the Non-Volatile Memory Host Controller Interface (NVMHCI) came about as a result of legacy carry-over from spinning hard drives in the AHCI (Advanced Host Controller Interface) standard developed by Intel. AHCI and SATA (Serial ATA, the follow-on to ATA) both assumed that spinning magnetic hard drives (and the speeds at which they push I/O) would be the technology a CPU used to interact with its largest data store, the hard drive. Once that data store became flash memory, a new standard had to be invented to drive faster I/O and lower latencies. Enter the NVMe (Non-Volatile Memory Express) interface, now being marketed and sold by some manufacturers. A native data channel from the PCIe bus to your SSD, however it may be designed, is the next big thing in hardware for SSDs. With the promise of better speeds, it is worth migrating once the manufacturers get on board.

But Chris’s article goes further, looking beyond the immediate concerns of migrating from SATA to NVMe: even flash memory may eventually be usurped by a different, as-yet-unheard-of technology. Given that’s the case, NVMe abstracts enough of the “media” of the non-volatile memory that it should allow future adoption of any number of technologies that could usurp the crown of NAND memory chips. And that is potentially a greater benefit than just squeezing out a few more megabytes per second of read and write speed. Even more tantalizing in Chris’s view is the mixing of DRAM and flash memories in a “mesh,” let’s say, of higher- and lower-speed memories, as Fusion-io’s software does, making the sharp distinction between DRAM and flash less visible. In a sense, the speed would just come with the purchase of the technology; how it actually works would be the proverbial magic to the sysadmins and residents of Userland.
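The “mesh” idea can be sketched as a toy tiering layer: callers see one flat read/write interface, a small DRAM tier fronts a large flash tier, and cold pages get demoted invisibly. All names here are hypothetical; this is a sketch of the general concept, not Fusion-io’s actual software:

```python
from collections import OrderedDict

class MemoryMesh:
    """Toy DRAM+flash 'mesh': one flat interface, with the hot/cold
    tiering hidden from the caller (hypothetical sketch of the idea)."""

    def __init__(self, dram_pages):
        self.dram_pages = dram_pages
        self.dram = OrderedDict()  # page -> data, in LRU order (hot tier)
        self.flash = {}            # page -> data (larger, slower tier)

    def write(self, page, data):
        self._promote(page, data)

    def read(self, page):
        if page in self.dram:          # DRAM hit: the fast path
            self.dram.move_to_end(page)
            return self.dram[page]
        data = self.flash[page]        # flash hit: slower, so promote it
        self._promote(page, data)
        return data

    def _promote(self, page, data):
        # Newly touched pages live in DRAM; the coldest page spills to flash.
        self.dram[page] = data
        self.dram.move_to_end(page)
        if len(self.dram) > self.dram_pages:
            cold_page, cold_data = self.dram.popitem(last=False)
            self.flash[cold_page] = cold_data
```

The caller never chooses a tier; hot data simply ends up fast, which is exactly the “speed comes with the purchase” effect described above.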
Originally posted on StorageSwiss.com - The Home of Storage Switzerland:
The ever-increasing density of virtual infrastructures, and the need to scale databases larger than ever, is creating an ongoing need for faster storage. And while flash has become the “go to” performance option, there are environments that still need more. Nonvolatile DRAM is the heir apparent, but it often requires customized motherboards to implement, for which widespread availability could be years away. Netlist, pioneer of NVRAM, has introduced a product that is viable for most data centers right now: the EXPRESSvault™ EV3.
The Flash Problem
While flash has solved many performance problems, it also creates a few. First there is a legitimate concern over flash wear, especially if the environment is write-heavy. There is also a concern about performance. While flash is fast compared to hard disk drives, it’s slow when compared to RAM, especially, again, on writes.
But flash does have two compelling advantages over DRAM. First it is…
View original 394 more words
Hardee’s/Carl’s Jr. Slaps A Hot Dog & Potato Chips On A Cheeseburger, Calls It “Most American Thickburger”
In Rochester NY, they would serve it open faced and call it a “Garbage Plate”.
Originally posted on Consumerist:
When it comes to stacking meat-upon-meat, pretty much nothing surprises us these days. So a hot dog on a hamburger? Pretty much inevitable (see: bacon on hamburgers). Adding potato chips? Sure, why not get it all done with at once. That’s the lineup for the Carl’s Jr./Hardee’s upcoming Most American Thickburger.
Along with the meat and Lay’s chips will be ketchup, mustard, tomato, red onion, pickles and American cheese, reports the Associated Press, with the whole thing weighing in at 1,030 calories and 64 grams of fat. It goes on sale for $5.79 alone or $8.29 for a combo at both restaurants starting May 20.
“The hot dog is like a smoked meat product, so it’s not unlike bacon,” Brad Haley, chief marketing officer of CKE Restaurants, the owner of Carl’s Jr. and Hardee’s told the Associated Press. “We’ve had this idea, believe it or not, for a long time,”…
View original 58 more words
Full credit goes to Mark Guzdial and his blog: Computing Education
An interesting article by Amy Bruckman about being a good software customer (knowing how software is developed and maintained). The reverse side of this is teaching professional ethics to the developers, web designers, and programmers selling their services to people. It seems there’s still very much a Wild West, frontier-days attitude, similar to the year-2000, Internet-Bubble era. Once both sides of the transaction are fully educated, much better outcomes will occur, I believe.
Originally posted on Computing Education Blog:
My colleague, Amy Bruckman, wrote a blog post about the challenges that nonprofits face when trying to develop and maintain software. She concludes with an interesting argument for computing education that has nothing to do with learning programming that everyone needs. I think it relates to my question: What is the productivity cost of not understanding computing? (See post here.)
This is not a new phenomenon. Cliff Lampe found the same thing in a study of three nonprofits. At the root of the problem are two shortcomings in education. So that more small businesses and nonprofits don’t keep making this mistake, we need education about the software development process as part of the standard high-school curriculum. There is no part of the working world that is not touched by software, and people need to know how it is created and maintained. Even if they have no intention of becoming…
View original 108 more words
Re-use: the connotation springs eternal in many facets of our daily and professional lives. Reduce, re-use, recycle, until it comes to a “learning object”. Then it is, as Mike points out, a difficult, fragile row to hoe. It’s easier to just start over from scratch than to build off, or stand on the shoulders of, the “other person” who created the learning object. Instead of re-use, maybe what we should attempt to do, or rather NOT do, is reinvent. You may not be able to re-use, but if you choose not to re-use, at the very least don’t reinvent. That may be the best use of a learning object, and I think it’s a better use of people’s most valuable resources (1. time, 2. attention). So hear, hear to Mike Caulfield; it’s absolutely true what he’s saying about the promise vs. the reality of re-use for PowerPoint and a lot of other “publishing” or “document-oriented” tools.
Originally posted on Hapgood:
I’m just back from some time off, and I’m feeling too lazy to finish reading the McGraw-Hill/Microsoft Open Learning announcement. Maybe someone could read it for me?
I can tell you where I stopped reading though. It was where I saw that the software was implemented as a “PowerPoint Plugin”.
Now, I think that the Office Mix Project is a step in the right direction in a lot of ways. It engages with people as creators. It creates a largely symmetric reading/authoring environment. It corrects the harmful trend of shipping “open” materials without a rich, fork-friendly environment to edit them in. (Here’s how you spot the person who has learned nothing in the past ten years about OER: they are shipping materials in PDF because it’s an “open” format).
The PowerPoint problem is that everything in that environment encourages you to create something impossible to reuse. Telling people to…
View original 426 more words
If Adobe can do something like this and keep all the files in situ on a server hard drive “somewhere” on the Internet, there’s no telling what’s possible. I waste too many of my professional hours copying stuff from place to place over network connections. Keeping everything in one container, and being able to edit and view from that same container, would be incredible. That would be like giving me back 20 hours of my work week.
Originally posted on TechCrunch:
Aframe, the London-headquartered startup that is taking on industry giants like Avid with its cloud-based video production platform, is one step closer to founder David Peto’s vision to put professional-grade video editing in the cloud.
The company, which is backed by the likes of Octopus Investments, Eden Ventures, and Northstar Ventures, is teaming up with Adobe via its ‘Adobe Anywhere’ platform to enable broadcasters and other content producers to edit large-scale video projects “remotely and securely” via the cloud. The pay-off being that, as the cloud has done for other industries, the need to pay out for costly infrastructure and related equipment is greatly eliminated.
“We’ve been growing Aframe rapidly – think of us now as the operating system for video in the cloud – we give anyone (broadcasters, corporations etc.) one central place where they can do everything they need to do with video, no matter what stage…
View original 378 more words