The $35 Raspberry Pi “Model B” is the board of choice to ship out to consumers first. It contains two USB ports, 256 MB of RAM, an Ethernet port and a 700 MHz Broadcom BCM2835 SoC. The VideoCore 4 GPU within the SoC is roughly equivalent to the original Xbox's level of performance, providing OpenGL ES 2.0, hardware-accelerated OpenVG, and 1080p30 H.264 high-profile decode.
Raspberry Pi boards are on the way, and the components list is still pretty impressive for $35 USD. Not bad, given they had a manufacturing delay. The re-worked boards should ship out as a second batch once they have been fully tested. It also appears all the other necessary infrastructure is slowly falling into place to help create a rich environment for curious and casually interested purchasers of the Raspberry Pi. For instance, let's look at the Fedora remixes for the Raspberry Pi.
A remix, in the open source software community, refers to a distribution of an OS that can run without compiling on a particular chip architecture, whether that be the Raspberry Pi's Broadcom chip or an Intel x86 variety. In addition to the OS, a number of pre-configured applications are included so that you can start using the computer right away instead of having to download lots of apps. The best part of this is not only the time savings but the lowered threshold for less technical users. Also of note are the particular desktop environments chosen for the remixes, LXDE and Xfce, both noted for being less resource-intensive and smaller in size. The documentation on the Fedora website indicates these two are geared for older, less powerful computers that you would still like to use around the house. And for a Raspberry Pi user, getting a tuned OS specifically compiled for your CPU and ready to go is a big boon.
What's even more encouraging is the potential for a Raspberry Pi community to begin optimizing and developing a new range of apps specifically geared towards this new computer architecture. The Fedora Yum project is a great software package manager, using the RPM format for adding and removing software components as things change. And having a Yum app geared specifically for Raspberry Pi users might give a more app-store-like experience for the more casual users interested in dabbling. Right now there's a group at Seneca College in Toronto doing work on an app store-like application that would facilitate the process of discovering, downloading and trying out different software pre-compiled for the Raspberry Pi computer.
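The core bookkeeping a package manager like Yum performs can be sketched in a few lines. Here is a toy Python resolver; the package names and the dependency graph are invented purely for illustration (they are not real Fedora or Raspberry Pi packages), and it only shows the ordering step, not downloading or verification:

```python
# Toy sketch of dependency resolution, the heart of what a package
# manager like yum does before installing anything. All package names
# below are hypothetical.

def install_order(package, deps, resolved=None, seen=None):
    """Return packages in the order they must be installed."""
    if resolved is None:
        resolved, seen = [], set()
    if package in resolved:
        return resolved          # already scheduled, nothing to do
    if package in seen:
        raise ValueError("circular dependency at " + package)
    seen.add(package)
    for dep in deps.get(package, []):
        install_order(dep, deps, resolved, seen)
    resolved.append(package)     # dependencies land first, then us
    return resolved

# Hypothetical repository metadata: app -> what it needs first.
deps = {
    "pi-media-player": ["libgles", "libh264"],
    "libgles": ["libc"],
    "libh264": ["libc"],
    "libc": [],
}

order = install_order("pi-media-player", deps)
print(order)  # dependencies come before the app itself
```

An app-store front end for the Pi would layer discovery and one-click installs on top of exactly this kind of ordering.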
Presented for your approval, dear reader: these two seemingly benign mobile phone pics were taken at a nameless, faceless supermarket chain (I will withhold their name for now).
I have shopped at this store regularly since around 2005, and since at least 2006 I have shopped there once every two weeks, spending anywhere from $50-$100 (not much compared to big families, I'm sure). But what ticks me off is that no matter how refined the inventory and tracking systems are for any regional grocery store chain, and no matter how much I use my 'loyalty' surveillance card for that particular regional grocery store, they still seem to be stuck in the 20th century. I say that as I have observed the following bad habits time after time, and not just with one regional grocery store but with the top two store chains where I live. Neither one, no matter how much data they collect, can seem to keep carrying some items I purchase once every two weeks, or maybe less frequently than that depending on the product. For instance, look at the two photos presented at the top of the webpage. On the left you see what I would describe conservatively as the 'oddball' or 'old-fashioned' specialty laundry supply shelf. In this regional grocery store, that's on the very top shelf, where it's less convenient to reach up and haul down a 5-pound box of some kind of laundry product. All the 'average' mass market stuff is at waist level or on the bottom, where it can easily be slid onto the shopping cart's bottom cargo shelf (so it's already a struggle to get to this stuff). The product I like to purchase is Calgon water softener. Why, you ask? Well, let me first turn back the clock to the 1970s and this old TV commercial:
As a kid I didn't do laundry. My Mom would do all the washing and folding up until I was in my later teens. That's when I had a few experiences washing and folding myself (and occasionally fixing the washing machine too!). I never once in that time really thought about fabric softener or any of those additives marketed to the heads of households. Whether it was for fragrance or softness or any of those other qualities, I didn't really care once I left home. I just wanted the soap to dissolve completely, do its job and rinse completely out of the clothes. Over time the soap/detergent issue came up again and again, where a load of laundry would be fouled with undissolved detergent granules that just wouldn't rinse clean. Which is EXACTLY the opposite of what a clothes washer is supposed to do. At the very least, a clothes washer should do no harm, and not make your clothes dirtier by leaving this sugar-like residue clinging tenaciously to your jeans and shirts. But I digress; what I've always wanted was a measure of insurance that I wouldn't suffer from undissolved detergent. The surest way towards that is using really hot water at the initial stage, or using a chemical like Calgon to help all the detergent mix into the water. And that's the refinement I eventually developed all on my own, living by myself, doing my own laundry. It took years to get to this point.
So the day that I eventually caved in to buying Calgon (I don't remember exactly when it was, but I did it some years before I got married), I stuck with it. My grocery store had no problem keeping that product in stock, along with such other oddities as 20 Mule Team Borax and Color Safe Clorox bleach (in the blue box). You can see both of them in the pictures above. The other store I visit also keeps a handy supply of Fels-Naptha and Downy Flakes for the people who crave the old-fashioned products that aren't designed to 'do it all'. In fact I think I've even seen little blue bottles of 'bluing agent' to get white dress shirts extra white too. I fully understand the connection, nay, the emotional tie some seniors and very valuable store customers might have to their favorite brand name cleaner. I too count myself among their ranks.
However, you can imagine my surprise when I discovered, for the first time in 9 years or more, that my grocery store had suddenly run out of Calgon. Worse yet, as the 'Before' picture shows, it's GONE. No shelf label, no space set aside. It would have been located roughly in that gap between the two Oxy-Clean bottles near the middle (one with a green cap, one with a yellow). That's where the Calgon had been sitting for literally 9 years at that store. But I paused, and I thought I might become an old man and complain bitterly that 'they keep moving things in this store, I can't find anything'. In fact I did a hardcore search up and down, and on successive visits never once saw the Calgon return. So I gave up. I stopped using it because I couldn't find it anywhere else. Months passed, almost 5 months in fact. Out of the blue I decided once more to look and see if they ever got any more Calgon boxes. I had not looked in that long because it showed no sign of ever returning. I had even looked at buying it by the case online through Amazon (minimum 10 boxes per case at roughly $5.20 per box = $52.00, plus shipping). When I looked this time, however, I found it!
Calgon had magically re-appeared, not in the same spot, but at least near its friend 20 Mule Team Borax on the top shelf as always. There it was, and not just one box. I counted at least 7 boxes in total, so someone must have purchased at least 3 boxes out of the case they put on the shelf. Whew! I thought, how lucky am I that whatever oversight, misstep or mistake was made has now been rectified. But it wasn't enough for me to just be happy and let this go. I have had more than one of these episodes occur at both of the grocery stores I visit. Let me tell you another story about a loyal shopper in search of a brand name product that suddenly vanishes altogether.
My favorite gum, Trident Xtra Care (in any flavor whatsoever, I’m not picky)
Trident Xtra Care gum, I've seen it come and go. And now I can't find it anywhere, even after a small glimmer of hope at a national drug store chain. I had been buying it every week from two different supermarkets, and yet, no love in return. I had hope when one of the supermarkets started carrying it again after dropping it for a while. Now even the drugstore where I had found a stash of the gum has dropped it too.
Why can’t stores in my area keep this in stock?
Hershey’s Extra Dark is not the same as Hershey’s Special Dark. They are in different leagues, worlds apart from one another.
Not as good: Hershey's Special Dark. Good: Hershey's Extra Dark.
Special Dark, as you recall from your trick-or-treating days, is the Bit-O-Honey of the Hershey's Miniatures assortment bag. It was like black licorice, blech! It wasn't all that special, but more bitter than anything else. I despise Hershey's Special Dark. However, its cousin Hershey's Extra Dark is different. It's a 60% cocoa dream and smoother than any Cadbury, Ghirardelli or Scharffen Berger. It is the most inexpensive choice save for Cadbury, but Cadbury Dark is a dead ringer for Hershey's Special Dark and just as objectionable from a taste standpoint. However, as I have been pointing out, my favorite product apparently is too difficult for the local grocery stores to keep in stock. I have to go for weeks without a decent chocolate bar, usually ending in me buying a Cadbury Dark which, as I have said, is no different than Hershey's Special Dark. The best way for me to describe it is like eating Nestle bittersweet chocolate morsels (somewhat bitter but WAY too much sugar and 0% cocoa butter).
I guess I should be thankful I make enough money to buy these items regularly. I am lucky to have the ability to earn money and have spare time to write about these minor annoyances. It's true. But at the same time I am achingly curious about the decisions that drive what stores choose to stock and what they let lapse through a fiscal quarter and fiscal year. Is it all a big mistake, or is it absolutely necessary to meet quarterly sales targets? So one customer (namely ME) is inconvenienced and is unlikely to say or do anything about their favorite product going missing without explanation. But this is where I'm drawing the line and asking why, especially given the technology underlying the whole product mix and stocking practices at any retailer. Those guys know what they are doing, and I'm an unhappy customer. I am writing this as a way of identifying the damage in the network, and like the Internet, I will have to begin routing around it. Goodbye supermarket brick and mortar store, hello Amazon.com.
Chip designer and chief Intel rival AMD has signed an agreement to acquire SeaMicro, a Silicon Valley startup that seeks to save power and space by building servers from hundreds of low-power processors.
It was bound to happen eventually, I guess. SeaMicro has been acquired by AMD. We'll see what happens as a result, since SeaMicro is a customer of Intel's Atom chips and, most recently, Xeon server chips as well. I have no idea where this is going or what AMD intends to do, but hopefully this won't scare off any current or near-future customers.
SeaMicro's competitive advantage has been, and will continue to be, the development work they performed on the custom ASIC chip they use in all their systems. That bit of intellectual property was in essence the reason AMD decided to acquire SeaMicro, and it will hopefully give AMD an engineering advantage for systems it might put on the market in the future for large-scale data centers.
While this is all pretty cool technology, I think that SeaMicro’s best move was to design its ASIC so that it could take virtually any common CPU. In fact, SeaMicro’s last big announcement introduced its SM10000-EX option, which uses low-power, quad-core Xeon processors to more than double compute performance while still keeping the high density, low-power characteristics of its siblings.
So there you have it: Wired and The Register are reporting the whole transaction pretty positively. It looks on the surface to be a win for AMD, as it can design new server products and get them to market quickly using the SeaMicro ASIC as a key ingredient. SeaMicro can still service its current customers and eventually allow AMD to upsell or upgrade as needed to keep the ball rolling. And with AMD's Fusion architecture marrying GPUs with CPU cores, who knows what cool new servers might be possible? But as usual the naysayers, the spreaders of Fear, Uncertainty and Doubt, have questioned the value of SeaMicro and their original product, the SM-10000.
Diane Bryant, the general manager of Intel's data center and connected systems group, had this to say at a press conference for the launch of new Xeon processors, when asked about SeaMicro's attempt to interest Intel in buying the company: “We looked at the fabric and we told them thereafter that we weren't even interested in the fabric.” To Intel there's nothing special enough in SeaMicro to warrant buying the company. Furthermore, Bryant told Wired.com:
“…Intel has its own fabric plans. It just isn’t ready to talk about them yet. “We believe we have a compelling solution; we believe we have a great road map,” she said. “We just didn’t feel that the solution that SeaMicro was offering was superior.”
This is a move straight out of Microsoft's marketing department circa 1992, when they would pre-announce a product that never shipped and was barely developed beyond the prototype stage. If Intel were really working on this as a new product offering, you would have seen an announcement by now, rather than a vague, tangential reference that reads more like a parting shot than a strategic direction. So I will be watching intently in the coming months and years to see what, if any, Intel 'fabric technology' makes its way from the research lab to the development lab and into a final shipping product. Don't be surprised if this is Intel attempting to undermine AMD's choice to purchase SeaMicro. Likewise, Forbes.com later reported, via a representative from SeaMicro, that the company had not tried to encourage Intel to acquire it. It is anyone's guess who is really correct and being 100% honest in their recollections. However, I am still betting on SeaMicro's long-term strategy of pursuing low-power, ultra-dense, massively parallel servers. It is an idea whose time has come.
I’ve seen the future, and not only does it work, it works without tools. It’s moddable, repairable, and upgradeable. Its pieces slide in and out of place with hand force. Its lid lifts open and eases shut. It’s as sleek as an Apple product, without buried components or proprietary screws.
Oh how I wish this were true today for Apple. I say this as a recent purchaser of an Apple refurbished 27″ iMac. My logic and reasoning for going with refurbished over new was based on a few bits of knowledge gained reading Macintosh weblogs. The rumors I read included the idea that Apple repaired items are strenuously tested before being re-sold. In some cases returned items are not even broken; they are returns based on buyer's remorse or cosmetic problems. So there's a good chance the logic board and LCD have no problems. Reading back this past summer, just after the launch of Mac OS X 10.7 (Lion), I read about lots of problems with crashes on 27″ iMacs. So I figured a safer bet would be to get a 21″ iMac. But then I started thinking about Flash-based Solid State Disks. And looking at the prohibitively high prices Apple charges for their installed SSDs, I decided I needed something that I could upgrade myself.
But as you may know, iMacs have never been user-upgradable. That's not to say people haven't tried, and succeeded, in upgrading their own iMacs over the years. Enter the aftermarket for SSD upgrades. Apple has attempted to zig and zag as the hobbyists swap in newer components like larger hard drives and SSDs. Witness the Apple temperature sensor on the boot drive in the 27″ iMac, where they have added a sensor wire to measure the internal heat of the hard drive. As the Mac monitors this signal it will rev up the internal fans. Any iMac hobbyist attempting to swap in a 3 TB or 4 TB drive for the stock Apple 2 TB drive will suffer the inevitable panic mode of the iMac, as it cannot see its temperature sensor (these replacement drives don't have the sensor built in) and assumes the worst. They say the noise is deafening when those fans speed up, and they never, EVER slow down. This is Apple's attempt to ensure sanctity through obscurity: no one is allowed to mod or repair, and that includes anyone foolish enough to attempt to swap the internal hard drive on their iMac.
But thank goodness there's a workaround, and that is the 27″ iMac, whose internal case is just large enough to install a secondary hard drive. You can slip a 2.5″ SSD into that chassis. You just gotta know how to open it up. And therein lies the theme of this essay: the user-upgradable, user-friendly computer case design. The antithesis of this idea IS the 27″ iMac, if you read the steps from iFixit and the photographer Brian Tobey. Both of these websites make clear the excruciating minutiae of finding and disconnecting the myriad miniature cables that connect the logic board to the computer. Without going through those steps one cannot gain access to the spare SATA connectors facing towards the back of the iMac case. I decided to go through these steps to add an SSD to my iMac right after it was purchased. I thought Brian Tobey's directions were slightly better and had more visuals pertinent to the way I was working on the iMac as I opened up the case.
It is, in a word, a non-trivial task. You need the right tools, the right screwdrivers. In fact you even need suction cups! (thank you, Apple). However there is another way, even for so-called All-in-One style computer designs like the iMac. It's a new product from Hewlett-Packard targeted at the desktop engineering and design crowd: an All-in-One workstation that is user-upgradable, and it's all done without any tools at all. Let me repeat that last bit again: it is a 'tool-less' design. What, you may ask, is a tool-less design? I hadn't heard of it either until I read this article on iFixit. And after having followed the links to the NewEgg.com website to see what other items were tagged as 'tool-less', I began to remember some hints and stabs at this I had seen in some Dell Optiplex desktops some years back. The 'carrier' brackets for the CD/DVD and HDD drive bays were green plastic rails that simply 'pushed' into the sides of the drive (no screws necessary).
And when I consider that my experience working with the 27″ iMac actually went pretty well (it booted up the first time, no problems) after all I had done to it, I consider myself very lucky. But it could have been better. And there's no reason it cannot be better for EVERYONE. It also made me think of the XO laptop (One Laptop Per Child project), and I wondered how tool-less that laptop might be. How accessible are any of these designs? And it also made me recall the Facebook story I recently commented on, about how Facebook is designing its own hard drive storage units to make them easier to maintain (no little screws to get lost and dropped onto a fully powered motherboard, shorting things out). So I have much more hope than when I first embarked on the do-it-yourself journey of upgrading my iMac. Tool-less design today, tool-less design tomorrow and tool-less design forever.
Microsoft and University of California San Diego researchers have said flash has a bleak future because smaller and more densely packed circuits on the chips' silicon will make it too slow and unreliable. Enterprise flash cost/bit will stagnate and the cutting edge that is flash will become a blunted blade.
More information regarding semiconductor manufacturers' rumors and speculation of a wall being hit in the shrinking down of Flash memory chips (see this link to the previous Carpetbomber article from Dec. 15). This report has a more definitive ring to it, as actual data has been collected and projections made based on models of that data. The trend according to these researchers is lower performance due to increasingly bad error rates and signaling on the chip itself. Higher-density chips = lower performance per memory cell.
To hedge against this dark future for NAND flash memory, companies are attempting to develop novel and in some cases exotic technologies. IBM has “racetrack memory”, Hewlett-Packard and Hynix have the memristor, and the list goes on. Nobody in the industry has any idea what comes next, so bets are being placed all over the map. My advice to anyone reading this article is: do not choose a winner until it has won. I say this as someone who has watched a number of technologies fight for supremacy in the market: Sony Betamax versus JVC VHS, HD-DVD versus Blu-ray, LCD versus Plasma Display Panel, etc. I will admit these battles can be waged over long periods, making it harder to tell who has won, though the time spans seem to be getting shorter in more recent battles. And who is to say Blu-ray has really been adopted widely enough to be the be-all and end-all, as DVD and CD discs are both still widely used as recordable media. Just know that to go any further in improving the cost vs. performance ratio, NAND will need to be forsaken to get to the next technological benchmark in high-speed, random-access, long-term, durable storage media.
Things to look out for as the NAND bandwagon slows down are triple-level memory cells, or worse yet quadruple-level cells. These are not going to be the big saviors the average consumer hopes they will be. Flash memory that packs more bits into each memory cell has higher error rates at the beginning and even higher ones over time. The number of cells set aside as 'over-provisioned' spares will be so high as to negate the cost benefit of choosing the higher-density memory cells. Also being touted as a way to stave off the end of the road are error-correcting circuits and digital signal processors onboard the chips and controllers. As the age of the chip begins to affect its reliability, more statistical quality-control techniques are applied to offset the losses of signal quality in the chip. This is a technique used today by at least one manufacturer (Intel), but how widely and how successfully it can be adopted is another question altogether. It would seem each memory manufacturer has its own culture and, as a result, its own technique for fixing the problem. Whoever has the best marketing and sales campaigns will, as history has shown, be the winner.
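The basic arithmetic behind why triple- and quadruple-level cells get less reliable is simple enough to show. This is a back-of-the-envelope Python sketch; the 2.0 V window is an assumed round number for illustration, not a datasheet value:

```python
# Why more bits per cell hurts reliability: each extra bit doubles the
# number of voltage states a cell must distinguish, shrinking the
# margin between adjacent states and making errors more likely.

VOLTAGE_WINDOW = 2.0  # assumed usable voltage window, illustrative only

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    states = 2 ** bits                       # distinct charge levels needed
    margin = VOLTAGE_WINDOW / (states - 1)   # spacing between levels
    print(f"{name}: {states} states, ~{margin * 1000:.0f} mV between levels")
```

Going from SLC to QLC, the spacing between levels shrinks by roughly a factor of fifteen, which is why the error-correction machinery has to work so much harder.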
I don’t know how accurate or specific this criticism of Apple’s Press conference from Wednesday is, but many people are commenting on it. I contributed a comment as well.
Now, Facebook has provided a new option for these big name Wall Street outfits. But Krey also says that even among traditional companies who can probably benefit from this new breed of hardware, the project isn’t always met with open arms. “These guys have done things the same way for a long time,” he tells Wired.
An interesting article further telling the story of Facebook's Open Compute project. This part of the story concentrates on the mass storage needs of the social media company. It turns out Wall Street data center designer/builders aren't as enthusiastic about Open Compute as one might think. The old school Wall Streeters have been doing things the same way, as Peter Krey says, for a very long time. But that gets to the heart of the issue: what the members of the Open Compute project hope to accomplish. Rackspace AND Goldman Sachs are members, both contributing and getting pointers from one another. Rackspace is even beginning to virtualize equipment down to the functional level, replacing motherboards with a virtual I/O service. That would allow components to be ganged together based on the frequency of their replacement and maintenance. According to the article, CPUs could be in one rack cabinet, DRAM in another, disks in yet another (which is already the case now with storage area networks).
The newest item to come into the Open Compute circus tent is storage. Up until now that's been left to Value Added Resellers (VARs) to provide, so different brand loyalties and technologies still hold sway for many data center shops, including Open Compute members. Now Facebook is redesigning the disk storage rack to create a totally tool-less design. No screws, no drive carriers, just a drive and a latch and that is it. I looked further into this tool-less phenomenon and found an interesting video at HP.
Having recently purchased a similarly sized 27″ iMac and upgraded it by adding a single SSD drive into the case, I can tell you this HP Z1 demonstrates in every way possible the miracle of tool-less design. I was bowled over, and thought back to different Dell tower designs over the years (some with more tool-less awareness than others). If a tool-less future is inevitable, I say bring it on. And if Facebook ushers in the era of tool-less storage racks as a central design tenet of Open Compute, so much the better.
As reported by Andrew Cunningham for Anandtech: We've known that Microsoft has been planning an ARM-compatible version of Windows since well before we knew anything else about Windows 8, but the particulars have often been obscured both by unclear signals from Microsoft itself and subsequent coverage of those unclear signals by journalists. Steven Sinofsky has taken to the Building Windows blog today to clear up some of this ambiguity, and in doing so has drawn a clearer line between the version of Windows that will run on ARM, and the version of Windows that will run on x86 processors.
That's right, ARM CPUs are in the news again, this time with info on the planned version of Windows 8 for the mobile CPU. And it is a separate version of the Windows OS, not unlike Windows CE, Windows Mobile or Windows Embedded. They are all called Windows, but are very different operating systems. The product will be called Windows on ARM (WOA) and is only just now being tested internally at Microsoft, with a substantial development period and a release to developers still to be announced.
One upshot of this briefing from Sinofsky was that the mobile-centric Metro interface will not be the only desktop available on WOA devices. You will also be able to use the traditional-looking Windows desktop without incurring a big battery power hit. That no doubt makes it a little more palatable to the wider range of users who might consider buying a phone, tablet or Ultrabook running an ARM CPU and the new Windows 8 OS. Along the same lines, there will be a version of the Office apps that will also run on WOA devices, including the big three: Word, Excel and PowerPoint. These versions will be optimized for mobile devices with touch interfaces, which means you should buy the right version of Office for your device (if it doesn't come pre-installed).
Lastly, the optimization for and linking to specially built Windows on ARM devices means you won't be able to install the OS on just 'any' hardware you like. Similar to Windows Mobile, you will need to purchase a device designed for the OS, most likely with a version pre-installed from the factory. This isn't like a desktop OS built to run on many combos of hardware with random devices installed; it's going to be much more specific and refined than that. Microsoft wants to constrain and coordinate the look and feel of the OS on mobile devices so that an average person can expect it to work and look similar no matter who manufactures the device. One engineering choice that is going to assist with this goal is an attempt to address the variations in devices by using so-called 'class drivers' to support the chipsets and interfaces in a WOA device. This is a less device-specific way of supporting, say, a display panel or keyboard without having to know every detail. A WOA device will have to be designed and built to a spec provided by Microsoft, which will then provide a generic 'class driver' for that keyboard, display panel, USB 3.0 port, etc. So unlike Apple it won't necessarily be a limited set of hardware components, but they will have to meet the specs to be supported by the Windows on ARM OS. This no doubt will make it much easier for Microsoft to keep its OS up to date, as compared to, say, the Google Android universe, where the device manufacturers have to provide the OS updates (which in fact they do rarely, as they prefer people to upgrade their device to get the new OS releases).
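The class-driver idea can be sketched as a tiny dispatch table. This is only a toy Python illustration of the concept, not Microsoft's driver model; the class names and device models are made up:

```python
# Toy illustration of "class drivers": instead of one driver per
# specific device, the OS ships one generic driver per device class,
# and any device built to the class spec gets handled by it.

class_drivers = {
    "hid-keyboard": lambda dev: f"generic keyboard driver bound to {dev}",
    "display-panel": lambda dev: f"generic panel driver bound to {dev}",
    "usb-storage": lambda dev: f"generic storage driver bound to {dev}",
}

def bind_driver(device_name, device_class):
    """Bind a device to the generic driver for its class, if any."""
    driver = class_drivers.get(device_class)
    if driver is None:
        return f"{device_name}: unsupported class, device ignored"
    return driver(device_name)

# Two different vendors' keyboards, one shared driver.
print(bind_driver("VendorA KB-100", "hid-keyboard"))
print(bind_driver("VendorB TypePro", "hid-keyboard"))
```

The payoff is exactly what the briefing describes: Microsoft maintains one driver per class, and hardware makers build to the class spec instead of shipping their own driver stack.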
And then the reveal: Mac OS X — sorry, OS X — is going on an iOS-esque one-major-update-per-year development schedule. This year’s update is scheduled for release in the summer, and is ready now for a developer preview release. Its name is Mountain Lion.
Mountain Lion is the next iteration of Mac OS X. And while there are some changes since the original Lion was released just this past summer, they are more like further improvements than real changes. I say this in part due to the concentration on aligning the OS X apps with their iOS counterparts, down to small things like using the same names:
iCal versus Calendar
iChat versus Messages
Address book versus Contacts
Reminders versus Notes
etc.
Beneath that superficial level, more of the Carbon-based libraries and apps are being factored out and given full Cocoa library and app equivalents where possible. But one of the bigger changes, one that's been slipping since the release of Mac OS X 10.7, is the use of 'sandboxing' as a security measure for apps. The sandbox would be implemented by developers to adhere to strict rules set forth by Apple. Apps wouldn't be allowed to do certain things anymore, like writing to an external filesystem (meaning saving or writing out to a USB drive) without special privileges being asked for. Seems trivial at first, but on the level of the day-to-day user of a given app it might break things altogether. I'm thinking of iMovie as an example, where you can specify that you want new video clips saved into an Event folder kept on an external hard drive. Will iMovie need to be re-written in order to work on Mountain Lion? Will sandboxing hurt other Apple iApps as well?
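The sandbox rule in question boils down to a path check plus a grant. Here is a toy Python model of that policy; the container path and the "external-volumes" entitlement name are invented for illustration and are not Apple's actual entitlement keys:

```python
# Toy model of a sandbox write policy: an app may only write inside
# its own container unless it has been granted an entitlement for a
# wider location. Paths and entitlement names are hypothetical.

from pathlib import PurePosixPath

def may_write(path, container, entitlements):
    """Return True if the sandboxed app may write to this path."""
    p = str(PurePosixPath(path))
    if p.startswith(str(PurePosixPath(container))):
        return True  # writes inside the app's own container are fine
    if "external-volumes" in entitlements and p.startswith("/Volumes/"):
        return True  # explicitly granted access to external drives
    return False

container = "/Users/me/Library/Containers/com.example.imovie"

print(may_write(container + "/Data/movie.mov", container, set()))
print(may_write("/Volumes/ExtDrive/Events/clip.mov", container, set()))
print(may_write("/Volumes/ExtDrive/Events/clip.mov", container,
                {"external-volumes"}))
```

This is why an app like iMovie saving to an external Event folder either has to ask for the broader privilege or break: without the grant, the middle case above is simply denied.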
Then there is the matter of 'Gatekeeper', which is another OS mechanism to limit trust based on who the developer is. Apple will issue security certificates to registered developers who post their software through the App Store, but independents who sell direct can also register for these certs, thus establishing a chain of trust from the developer to Apple to the OS X user. From that point you can choose to trust just App Store certified apps, App Store apps plus Apple-certified independent developers, or any apps including unknown, uncertified ones. Depending on your needs, the security level can be chosen according to which type of software you use. Some people are big on free software, which is the least likely to have a certification but may still be more trustworthy than even the most 'certified' of App Store software (I'm thinking of Emacs as an example). So sandboxes and gatekeepers all conspire to funnel developers through Apple's chain of trust, and thus make it much harder for developers of malware to infect Apple OS X computers.
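Those three trust settings form a simple decision rule, which this toy Python model spells out. The app records and policy names are invented for illustration, not Apple's actual API:

```python
# Toy model of Gatekeeper's three trust settings: App Store only,
# App Store plus identified (certified) developers, or anywhere.

def gatekeeper_allows(app, policy):
    """Decide whether an app launches under a given trust policy."""
    if policy == "app-store":
        return app["source"] == "app-store"
    if policy == "identified-developers":
        return app["source"] == "app-store" or app["certified"]
    return True  # policy == "anywhere": everything runs

apps = [
    {"name": "StoreApp", "source": "app-store", "certified": True},
    {"name": "IndieApp", "source": "direct", "certified": True},
    {"name": "EmacsBuild", "source": "direct", "certified": False},
]

for app in apps:
    allowed = [p for p in ("app-store", "identified-developers", "anywhere")
               if gatekeeper_allows(app, p)]
    print(app["name"], "allowed under:", allowed)
```

Note how the uncertified free-software build only runs under the loosest setting, which is exactly the trade-off described above.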
These changes should be fully ready for consumption upon release of the OS in July. But as I mentioned, sandboxing has been rolled back no less than two times so far. The first roll-back occurred in November; the most recent was here in February. The next target date for sandboxing is in June, which should get all the Apple developers on board prior to the release of Mountain Lion the following month, in July. This reminds me a bit of the flexibility Apple had to show in the face of widespread criticism and active resistance to the Final Cut Pro X release last June. Apple had to scramble for a time to address concerns about bugs and stability under Mac OS X 10.7 (the previous release, Snow Leopard, seemed to work better for some who wrote on Apple support discussion forums). Apple quickly came up with an alternate route for dissatisfied customers who demanded satisfaction, giving copies of Final Cut Pro Studio 7 (with just the Final Cut Pro app included) to people who called up their support lines asking to substitute the older version of the software for a recent purchase of FCP X. Flexibility like this seems to be more frequent going forward, and it is great to see Apple's willingness to adapt to an adverse situation of their own creation. We'll see how this migration goes come July.
A great posting by Lucas Szyrmer @ softwaretrading.co.uk; it's a nice summary of the story from last month about JP Morgan Chase's use of FPGAs to speed up some of their risk analysis. And it goes into greater detail concerning the mechanics of translating what one does in software across the divide into something that can be turned into VHDL/Verilog and written onto the FPGA itself. It is, in a word, a 'non-trivial' task, and can take quite a long time to get working.
Lately, I’ve been exploring a little known corner of high performance computing (HPC) known as FPGAs. Turns out, it’s time to get electrical on yowass (Pulp Fiction reference intentional). You can program these chips in the field, thus speeding up processing speeds dramatically, relative to generic CPUs. It’s possible to customize functionality to very specific needs.
Why this works
The main benefit of FPGAs comes from reorganizing calculations. FPGAs work on a massively parallel basis. You get rid of bottlenecks in typical CPU design. While these bottlenecks are fine for general purpose applications, like watching Pulp Fiction, they significantly slow down the number of calculations that you can do per second. In addition to being massively parallel, FPGAs also are faster, according to FPGAdeveloper, because:
you aren’t competing with your operating system or applications like anti-virus for CPU cycle time
you run at a lower level than the OS, so you don't have…
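The throughput argument behind all of this can be put into rough numbers. Here is a toy Python model of why a full pipeline retires one result per clock; the stage count and item count are arbitrary illustrative values, not measurements of any real FPGA design:

```python
# Toy model of why an FPGA-style pipeline beats a sequential loop on
# throughput: once the pipeline is full, every "clock cycle" retires
# one result, no matter how many stages each result passes through.

STAGES = 4      # depth of the hypothetical pipeline
N_ITEMS = 100   # results to produce

sequential_cycles = STAGES * N_ITEMS        # one item at a time, start to finish
pipelined_cycles = STAGES + (N_ITEMS - 1)   # fill once, then one result/cycle

print("sequential:", sequential_cycles, "cycles")
print("pipelined: ", pipelined_cycles, "cycles")
print("speedup:   ", round(sequential_cycles / pipelined_cycles, 1))
```

As the item count grows, the fill cost is amortized away and the speedup approaches the pipeline depth, which is the basic reason custom hardware pays off for repetitive calculations like risk analysis.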