Nice writeup from Anandtech regarding the press release from LSI about its new 3rd generation flash memory controllers. The 3000 series takes over from the 2200 and 1200 series that preceded it back when the era of SSDs was just beginning to dawn (remember those heady days of 32GB SSD drives?). Like the frontier days of old, things are starting to consolidate and find an equilibrium of price vs. performance. Commodity pricing rules the day, but SSDs, much less PCIe flash interfaces, are just creeping into the high end of the market in Apple laptops and soon Apple desktops (apologies to the iMac, which has already adopted the PCIe interface for its flash drives, but the Mac Pro is still waiting in the wings).
Things continue to improve in terms of future-proofing the interfaces. From SATA to PCIe, little was done to force a migration to one interface or the other, as each market had its own peculiarities. SATA SSDs were for the price-conscious consumer market, and PCIe was pretty much only for the enterprise. You had to pick and choose your controller very wisely in order to maximize the return on a new device design. LSI did some heavy lifting, according to Anandtech, by refactoring and redesigning the whole controller, thus allowing a manufacturer to buy one controller and use it either way: as a SATA SSD controller or as a PCIe flash memory controller. The speeds of each interface indicate this is true at the theoretical-throughput end of the scale. LSI reports the PCIe throughput is not too far off the theoretical max (in the ~1.45GB/sec range). Not bad for a chip that can also be used as a SATA SSD controller at 500MB/sec throughput. This is going to make designers, and hopefully consumers, happy as well.
On a more technical note, as written about in earlier articles mentioning the great Peak Flash memory density/price limit, LSI is fully aware of the memory architectures and the failure and error rates they accumulate over time.
Screencap from the documentary Doctor Who: Origins, used to illustrate the Wikipedia article on Delia Derbyshire (Photo credit: Wikipedia)
It’s 50 years since the first episode of Dr. Who aired on the BBC. How will you celebrate? I’m reading this wonderful tract from The Register (the British tech website) and marveling both at the detail and expertise that went into the original recordings made by Derbyshire and at the subsequent writing that tells the history of it. Amazing story-telling and amazing work all in one.
In the pre-computer, pre-synthesizer, pre-sampler era everything had to be done using razor blades and 1/4″ to 1/2″ audio tape. There were no MIDI timing signals or timecode; there were only the splices and the china marker on the back side to tell where things went. And on top of that there was the composition and creation of the sounds that first needed to be captured to tape. Whether it was test tone generators or found sound, it all was fodder for the final mix. And since none of these sources was accurate in pitch, they needed to be further processed into something like notes on a scale. This was the alchemy and magic that went into the recording of the original Dr. Who theme.
This article is rather long, but totally worth it, as it goes into the greatest detail to date of how the BBC Radiophonic Workshop and Delia Derbyshire got the Dr. Who theme recorded and on air back in Nov. 1963.
I note from iFixit.com teardowns in the recent past that an Apple iPhone can sometimes require lots of heating and softening of adhesive. The iPhone 5s, while it requires a tool, at least shows some sign of requiring less heating/melting of adhesive. The tool I’ve linked to is a special tool to pull the front and back halves of the iPhone 5s apart.
It’s called the iSclack: 2 suction cups mounted onto a pair of special clamp-like pliers. Once the cups are attached, you just apply pressure and try to pop open the case. Note, you’ll also need to invest in the specialty penta-lobe screwdriver that Apple chooses to use on its iDevices to prevent casual opening of the cases by anyone other than a certified Apple maintenance person. Aside from the use of penta-lobe screws, I think the iPhone 5s is probably not a bad phone to get, and more so because it lacks the excess adhesive that makes the whole thing more watertight but harder to repair.
The use of adhesive forced iFixit to come up with a special heating device that would lie on the perimeter of the iPhone after being heated up in a microwave. That one is called the iOpener. So things are improving, as the iOpener is not absolutely necessary for the iPhone 5s. Let’s hope it continues to improve that way.
M in blue square (Photo credit: Wikipedia)
Desktop Support in the raw (boring!)
One of the many things I set about doing at my old job was getting things updated to install new Windows/Office disk images on rebuilt or newly built desktops and laptops. One of the first tasks was to build a WIM from scratch from the base install media (the Win7 install.wim and the Office 2010 Pro .iso disk images). Now I want to customize the OCT (Office Customization Tool) settings and get the Office 2010 install just right for a first install on a newly rebuilt system. I’ve played with the OCT in the past, but there’s also an XML answer file (Office’s Config.xml) one could use instead. I might go that route now that I’ve got the Win7 setup running under autounattend.xml (no more OCT).
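For the curious, here’s a minimal sketch of what a silent-install Config.xml for Office 2010 might look like (assuming the Pro Plus volume product); the name, company and settings below are placeholders, not what I actually shipped:

    <Configuration Product="ProPlus">
      <!-- Run the installer silently with no prompts -->
      <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
      <USERNAME Value="Desktop User" />
      <COMPANYNAME Value="My Old Office" />
      <!-- Don't let Office setup trigger a reboot mid-image -->
      <Setting Id="SETUP_REBOOT" Value="Never" />
    </Configuration>

You hand it to the installer with setup.exe /config <path to Config.xml>, which makes it easy to script into the image build.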
One thing that is making this XML learning process easier is the edit-the-XML, run-a-test-install, observe-the-failures, re-edit round trip inside an Oracle VirtualBox VM I’ve got installed. I’m using the Win7 Setup .iso as a mount point within VirtualBox; it is the first CD drive. Then I have the autounattend.xml sitting in another .iso, which I mount as the secondary CD drive. That combo forces the Win7 setup to ‘see’ the autounattend.xml file and start customizing the install as it goes along. One of the cool utilities included with the Windows Automated Installation Kit (WAIK 3.1) is a command line program called oscdimg; it will create a .iso out of any folder you choose. That’s what I do to create that secondary CD mount point in VirtualBox. And I never once have to change the VM configuration; all I have to do is create a new .iso every time I edit the autounattend.xml file (even if the edit is just adding or deleting a comma, I can rebuild the .iso and start all over again without reconfiguring the VM!). This has saved me countless hours of attempting to do this on real hardware (which is absolutely unnecessary in this case) until I can get it just right.
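The round trip boils down to two commands, something like the following, where the folder and VM names are placeholders for whatever you’ve set up:

    rem Wrap the folder holding autounattend.xml into a plain data .iso
    rem (-n allows long file names, -m ignores the size limit check)
    oscdimg -n -m C:\build\answerfiles C:\build\unattend.iso

    rem Attach that .iso as the secondary CD drive of the test VM
    VBoxManage storageattach "Win7Test" --storagectl "IDE Controller" ^
      --port 1 --device 0 --type dvddrive --medium C:\build\unattend.iso

Once it’s attached the first time, VirtualBox keeps pointing at the same .iso path, so regenerating the file is all it takes before the next test run.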
And let me say there’s a lot of ground to cover and a lot of barriers to entry before you can get it ‘just right’. Some of the features provided in Autounattend.xml don’t work. Case in point: attempting to add a trusted zone in IE during the Win7 Setup process doesn’t work. And better yet, there are discussion board entries that CONFIRM it doesn’t work. I’m so glad people participate in these company-sponsored fora for the whole world to see. I’m so glad I didn’t beat my head and heart out trying to get this one ‘feature’, nay ‘bug’, to work properly. There are a multitude of other ways to achieve the same goal, so I’m pursuing the CopyProfile = true route: add the Trusted Zone URLs to my admin profile and let that become the default profile on the machine. Then capture the whole kit and kaboodle using Sysprep/GImageX on that idealized Dell Optiplex 960 with all drivers persisted. That’s going to be my universal WIM to start out with. We’ll see how close I can hit that mark.
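For reference, CopyProfile is a one-liner in the unattend.xml, living in the specialize pass under the Microsoft-Windows-Shell-Setup component; a minimal fragment (assuming an x64 image) looks something like this:

    <!-- specialize pass: stamp the built-up admin profile over the default profile -->
    <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <CopyProfile>true</CopyProfile>
    </component>

Setup reads that during the specialize pass and copies the profile you customized over the default user profile, so every new account inherits the Trusted Zone URLs and the rest.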
As it turns out I did go through a number of revisions of this disk image until I perfected it by Feb. 2013. Then I updated the drivers and patches and so forth in June to come out with a grand final WIM for doing all the desktops and laptops at my old office. Since then I’ve had to turn this work over to a contractor who just got hired full-time. He’s now got an updated WIM file, using VirtualBox as a kind of Virtual Build Lab for updating, creating and applying the WIM images. That work then allows us to put it onto a WinPE flash drive and apply it as needed for a full-touch manual image of a computer. I’m still holding out hope that this can be improved and be less manual, less high-touch than in the past. One further refinement along those lines was adding a “drivers path” to the Windows unattend.xml file. That allowed us to robocopy the drivers for a particular machine into a known folder path on the newly imaged machine (no matter which one it was) and it would just suck up all the drivers during the OOBE steps on the first/second reboots after the machine was imaged for the first time. Heady stuff, and I have to say once you start tweaking, it speeds stuff up a lot and it just works! It never breaks or wrecks the process. So it’s very reliable to make those single changes that take a step out of the (re)imaging process.
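The “drivers path” is the Microsoft-Windows-PnpCustomizationsNonWinPE component pointed at that known folder; a sketch of the fragment, with C:\Drivers standing in for whatever folder you robocopy into:

    <!-- fragment; the wcm namespace is declared on the root <unattend> element -->
    <component name="Microsoft-Windows-PnpCustomizationsNonWinPE" processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <DriverPaths>
        <PathAndCredentials wcm:action="add" wcm:keyValue="1">
          <Path>C:\Drivers</Path>
        </PathAndCredentials>
      </DriverPaths>
    </component>

Setup walks that path for .inf files as it processes the unattend, so the same WIM can service different hardware just by copying a different driver set into the folder first.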
I’m still playing around with these ideas and have trained my replacement on how to use them. Next steps are Windows ADK 8.1, WinPE5 builds and coming up with an image with Office 2013 and Win8.1. I think that will become our next reference standard before long.
I’m going to look into this further. I have a home renovation project page, but it’s not well integrated into the rest of the Blog. Might have to restructure/re-factor the blog a bit.
Interesting to know all of what goes into Amazon Web Services. They are the 800 lb. gorilla of cloud computing and they continue to cut prices every day. Amazing.
If there’s anyone still left wondering how it is that large cloud providers can keep on rolling out new features and lowering their prices even when no one is complaining about them, Amazon Web Services Vice President and Distinguished Engineer James Hamilton spelled out the answer in one word during a presentation Thursday at the company’s re:Invent conference: Scale.
Scale is the enabler of everything at AWS. To express the type of scale he’s talking about, Hamilton noted an oft-cited statistic — that AWS adds enough capacity every day to power the entirety of Amazon.com when it was a $7 billion business. “In fact, it’s way bigger than that,” he added. “It’s way bigger than that every day.”
Seven days a week, the global cycle of building, testing, shipping, racking and deploying AWS’s computing gear “just keeps cranking,” Hamilton said. AWS now has servers deployed in nine regions across the world…
What would happen if we replaced those 16 disk-based V7000s with all-flash V7000s? Each of the disk-based ones delivered 32,502.7 IOPS. Let’s substitute them with 16 all-flash V7000s, like the one above, and, extrapolating linearly, we would get 1,927,877.4 SPC-1 IOPS – nearly 2 million IOPS. Come on IBM: go for it.
That’s right, IBM understands the flash-based SSD SAN market and is building some benchmark systems to help market its disk arrays. Finally we’re seeing some best-case scenarios for these high-end throughput monsters. (The extrapolation above works out to roughly 120,500 SPC-1 IOPS per all-flash V7000, nearly four times the disk-based figure.) It’s entirely possible to create a 2 million IOPS storage SAN. You just have to assemble the correct components and optimize your storage controllers. What was once a theoretical maximum throughput (1M IOPS) is now achievable with nothing more than a purchase order and an account representative from IBM Global Services. It’s not cheap, not by a longshot, but your Big Data project or OLAP dashboard may just see orders-of-magnitude increases in speed. It’s all just a matter of money. And probably some tweaking via an IBM consultant as well (touché).
Granted, IBM doesn’t have this as a shipping product, but that isn’t really the point. The point is that, on paper, what can be achieved by mixing and matching enterprise storage appliances, disk arrays and software controllers is beyond what any other company is selling. There’s a goldmine to be had if anyone outside of a high-frequency trading skunkworks just shares a little bit of in-house knowledge and product familiarity. No doubt it’s not just the network connections that make things faster; it is the IOPS that will out, no matter what. Write vs. read performance and latency will always trump the fastest access to an updated price, in my book. But then I don’t work for a high-frequency trading skunkworks either, so I’m not privy to the demands made upon those engineers and consultants. Still, we are now in the best, boldest time yet of nearly too much speed on the storage front. The only thing holding us back is network access times.
Bill Atkinson—creator of MacPaint—painted in MacPaint (Photo credit: ✖ Daniel Rehn)
“I missed the mark with HyperCard,” Atkinson lamented. “I grew up in a box-centric culture at Apple. If I’d grown up in a network-centric culture, like Sun, HyperCard might have been the first Web browser.”
Bill Atkinson’s words on HyperCard and what could have been are kind of sad in a way. But Bill is a genius by any measure of computer science and programming ability. Without QuickDraw, the Mac would not have been much of a graphical experience for those attempting to write software for it. Bill’s drawing routines took advantage of all the assembly language tricks available on the old Motorola 68000 chip and eked out every last bit of performance to make the Mac what it was in the end: Insanely Great.
I write this in reference also to my experience of learning and working with HyperCard. It acts as the opening parenthesis to my last 16 years working for my current employer. Educational Technology has existed in various forms going all the way back to 1987 when Steve Jobs was attempting to get Universities to buy Macs and create great software to run on those same computers. There was an untapped well of creativity and energy that Higher Education represented and Jobs tried to get the Macintosh computer in any school that would listen.
That period is long since gone. The idea of educational software, interactive hypermedia, CD-ROMs: all gone the way of the web and mobile devices. It’s a whole new world now, and the computer of choice is the mobile phone you pick up on a 2-year contract with some telecom carrier. That’s the reality. So now designers and technologists are having to change to a “mobile first” philosophy and let all other platforms and form factors follow that design philosophy. And it makes sense, as desktop computer sales still erode a few percentage points each year. It’s just a matter of time before we reach peak desktop. It’s likely already happened; we just haven’t accepted it as gospel.
Every technology is a stepping stone, or a shoulder to stand on, leading to the next stepping stone. Evolutionary steps are the rule of the day. Revolution has passed us by. We’re in for the long slog, putting things into production and making them do useful work. Who has time to play and discover when everyone has a pre-conceived notion of the brand of device and the use it will serve? I want X to do Y; no time to advise or consult, to fit and match things based on their essential quality, the essence of what they are good at accomplishing. This is the brand and this is how I’m going to use it. That’s what Educational Technology has become these days.
There is no end to the amount of stuff I get asked to do. I like the technical aspects and not so much the other bits. There is a lot of communication and expectation-setting. And therein lies the rub.