Tag: postaweek2011

  • Viking Modular plugs flash chips into memory sockets • The Register

    What a brilliant idea: put flash chips into memory sockets. That’s what Viking Modular is doing with its SATADIMM product.

    via Viking Modular plugs flash chips into memory sockets • The Register.

    This sounds like an interesting evolution of SSD-style storage. But I don’t know that there is a big advantage in forcing a RAM memory controller to be the bridge to a flash memory controller. In terms of bandwidth, the speed seems comparable to a 4x PCIe interface. I’m thinking now of how it might compare to the PCIe-based SSDs from OCZ or Fusion-io. It seems like the advantage is still held by PCIe in terms of total bandwidth and capacity (above 500 MB/sec and up to 2 terabytes of total storage). The SATADIMM may come in at a slightly lower cost, but its use of Single Level Cell (SLC) flash memory chips raises the cost considerably for any given amount of storage. I think if this product ships, it will not compete very well against consumer-level SSDs, PCIe SSDs and the like. However, if Viking continues to develop and evolve the product, there might be a niche where it can be performance or price competitive.
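
    For a rough sense of the bandwidth figures being compared here, some back-of-the-envelope numbers (assuming PCIe 2.0 signaling with 8b/10b encoding and nominal SATA link rates; these are interface ceilings, not measured throughput):

        \begin{align*}
        \text{PCIe 2.0, one lane} &\approx 5\,\mathrm{GT/s} \times \tfrac{8}{10} = 4\,\mathrm{Gb/s} \approx 500\,\mathrm{MB/s} \\
        \text{PCIe 2.0 x4} &\approx 4 \times 500\,\mathrm{MB/s} = 2\,\mathrm{GB/s} \\
        \text{SATA 3\,Gb/s link} &\approx 300\,\mathrm{MB/s}, \qquad \text{SATA 6\,Gb/s link} \approx 600\,\mathrm{MB/s}
        \end{align*}

    Real drives land well below those ceilings, but the gap between the buses is what matters for the comparison.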

  • Facebook: No ‘definite plans’ to ARM data centers • The Register

    Clearly, ARM and Tilera are a potential threat to Intel’s server business. But it should be noted that even Google has called for caution when it comes to massively multicore systems. In a paper published in IEEE Micro last year, Google senior vice president of operations Urs Hölzle said that chips that spread workloads across more energy-efficient but slower cores may not be preferable to processors with faster but power-hungry cores.

    “So why doesn’t everyone want wimpy-core systems?” Hölzle writes. “Because in many corners of the real world, they’re prohibited by law – Amdahl’s law.”

    via Facebook: No ‘definite plans’ to ARM data centers • The Register.

    The explanation given here by Google’s top systems person comes down to latency versus the overhead of parallelism. If a job has to do all its steps in order, running it with a very low level of parallelism on fast cores gives much better response times, and response time is the measure that all the users of your service will judge you by. Making things massively parallel might provide the same level of response at a lower energy cost. However, the communication and processing overhead of assembling all the data and sending it over the wire offsets any advantage in power efficiency. In other words, everything takes longer, latency increases, and the users will deem your service slow and unresponsive. That’s the dilemma of Amdahl’s Law: the point of diminishing returns when adopting parallel computer architectures.
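
    For reference, Amdahl’s Law puts a hard ceiling on that trade-off. If a fraction p of the work can be spread across n cores and the remaining 1 - p must run serially, the best possible speedup is:

        \[ S(n) \;=\; \frac{1}{(1 - p) + \dfrac{p}{n}} \]

    Even with p = 0.95 the speedup can never exceed 20 no matter how many cores you add, and if each “wimpy” core is, say, half the speed of a “brawny” one, the serial portion itself takes twice as long, which is exactly the latency hit Hölzle is warning about.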

    Now compare this to something we know much more concretely, like the airline industry. As the cost of tickets came down, the pressure to cut costs went up. Schedules for landings and gate assignments got more complicated and service levels have suffered terribly. No one is really all that happy about the service they get, even from the best airline currently operating. So maybe Amdahl’s Law doesn’t bite as hard when there’s a false ceiling placed on what is acceptable in terms of the latency of a ‘system’. If airlines are not on time, but you still make your connection 99% of the time, who will complain? By way of comparison, there may be a middle ground where parallelizing more of the compute tasks lowers the energy required by a data center at the price of greater latency and a worse experience for the users. But if everyone suffers equally, and the service is not great but adequate, then the company will be able to cut costs by putting more parallel processors in its data centers.

    I think Tilera potentially holds a special attraction for Facebook, especially since Quanta, its hardware assembler of choice, is already putting together computers with the Tilera chip for customers now. It seems like this chain of associations might prove a way for Facebook to test the waters on a scale large enough to figure out the cost/benefit of massively parallel CPUs in the data center. Maybe it will take the build-out of another new data center to get there, but no doubt it will happen eventually.

  • Data hand tools – O’Reilly Radar

    Whenever you need to work with data, don’t overlook the Unix “hand tools.” Sure, everything I’ve done here could be done with Excel or some other fancy tool like R or Mathematica. Those tools are all great, but if your data is living in the cloud, using these tools is possible, but painful. Yes, we have remote desktops, but remote desktops across the Internet, even with modern high-speed networking, are far from comfortable. Your problem may be too large to use the hand tools for final analysis, but they’re great for initial explorations. Once you get used to working on the Unix command line, you’ll find that it’s often faster than the alternatives. And the more you use these tools, the more fluent you’ll become.

    via Data hand tools – O’Reilly Radar.

    This is a great remedial refresher on the Unix command line, and for me it reinforces an idea I’ve had that when it comes to computing We Live Like Kings. What? How is that possible? Well, thinking about what you are trying to accomplish and finding the least complicated, quickest way to that point is a dying art. More often one is forced, or at least highly encouraged, to set out on a journey with very well defined protocols and rituals included. You must use the APIs, the tools, the methods as specified by your group. Things falling outside that orthodoxy are frowned upon no matter the speed and accuracy of the result. So doing it quick and dirty with some shell scripting and utilities is going to be embarrassing for those unfamiliar with those same tools.

    My experience doing this involved a very low-end attempt to split web access logs into nice neat bits that began and ended on certain dates. I used grep, split, and a bunch of binaries I borrowed for doing log analysis and formatting the output into a web report. Overall it didn’t take much time, and required very little downloading, uploading, uncompressing, etc. It was all command-line based, with all the output dumped to a directory on the same machine. I probably spent 20 minutes every Sunday running these by hand (as I’m not a cron job master, much less an at job master). And none of the work I did was mission critical, other than being a barometer of how much use the websites were getting from the users. I realize now I could have had the whole works automated, with variables set up in the shell script to accommodate running on different days of the week, time changes, etc. But editing the scripts by hand in the vi editor only made me quicker and more proficient in vi (which I still gravitate towards using even now).
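
    Purely as an illustration of that kind of quick-and-dirty workflow (the paths, dates and file names below are made up for the example, not what I actually ran), the weekly chore boils down to something like this:

        #!/bin/sh
        # Hypothetical sketch: pull one week of entries out of a combined access log
        # and stash them in a per-week directory for later report formatting.
        # Assumes Apache-style timestamps like [10/Apr/2011:...]; adjust to taste.

        LOG=/var/log/httpd/access_log        # source log (assumed path)
        OUTDIR=$HOME/weekly-reports          # where the slices land
        WEEK=$(date +%Y-%V)                  # year-week stamp, e.g. 2011-15

        mkdir -p "$OUTDIR/$WEEK"

        # grep out the seven days of interest; these could just as easily be
        # computed into variables instead of typed by hand every Sunday
        for DAY in 10 11 12 13 14 15 16; do
            grep "\[$DAY/Apr/2011" "$LOG"
        done > "$OUTDIR/$WEEK/access_log.$WEEK"

        # break the week's slice into manageable chunks for the report tools
        split -l 50000 "$OUTDIR/$WEEK/access_log.$WEEK" "$OUTDIR/$WEEK/chunk."

    Crude, but it runs anywhere there’s a shell, and a single crontab entry would have spared me those Sunday sessions.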

    And as low-end as my needs were, and as little experience as I had initially using these tools, I am grateful for the time I spent doing it. I feel so much more comfortable knowing I can figure out how to do these tasks on my own, pipe outputs into inputs for other utilities and get useful results. I think I understand it, though I’m not a programmer and couldn’t really leverage higher-level things like data structures to get work done. I’m a brute force kind of guy, and given how fast CPUs are running, a few ugly, inefficient passes over the data aren’t going to kill me or my reputation. So here’s to Mike Loukides’ article and how much it reminds me of what I like about Unix.

  • Toshiba unwraps 24nm flash memory in possible iPhone 5 clue | Electronista

    The schedules may help back mounting beliefs that the iPhone 5 will get a 64GB option; a 64GB iPhone 4 prototype appeared last month that hinted Apple was exploring the idea as early as last year. Just on Tuesday, a possible if disputed iPod touch with 128GB of storage also appeared and hinted at an upgrade for the MP3 player as well. Both the iPhone and the iPod have been stuck at 32GB and 64GB of storage respectively since 2009 and are increasingly overdue for additional space.

    via Toshiba unwraps 24nm flash memory in possible iPhone 5 clue | Electronista.

    Toshiba has revised its flash memory production lines again to keep pace with the likes of Intel, Micron and Samsung. Higher densities and smaller form factors seem to indicate they are gearing up for a big production run of the highest-capacity memory modules they can make. It’s looking like a new iPhone might be the candidate to receive the newer multi-die, single-package 64GB flash memory modules this year.

    A note of caution in this arms race of ever smaller feature sizes on flash memory modules: the smaller you go, the fewer read/write cycles you get. I’m becoming aware that each new generation of flash memory production has lost some amount of robustness. This problem has been camouflaged, maybe even handled outright, by the increase in over-provisioning of chips on a given size of Solid State Disk (sometimes as little as 17% more raw flash than the capacity the drive exposes when full). Through careful statistical modeling and use of algorithms, an ideal shuffling of the deck of available flash memory chips allows the load to be spread out. No single chip fails, as its workload is shifted continuously to ensure it doesn’t receive anywhere near the maximum number of reliable read/write cycles. Similarly, attempts to ‘recover’ data from failing memory cells within a chip module are also making up for these problems. Last but not least, outright error-correcting hardware has been implemented on chip to ensure everything just works, from the beginning of the life of the Solid State Disk (SSD) to the final days of its useful life.
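
    To put rough, purely illustrative numbers on why the wear leveling and over-provisioning matter: the total data a drive can absorb before its cells wear out scales with capacity and rated program/erase (P/E) cycles, divided by write amplification. For a hypothetical 256 GB drive built from 5,000-cycle flash with a write amplification factor of 1.5:

        \[ \text{total writes} \;\approx\; \frac{\text{capacity} \times \text{P/E cycles}}{\text{write amplification}} \;=\; \frac{256\,\mathrm{GB} \times 5{,}000}{1.5} \;\approx\; 850\,\mathrm{TB} \]

    Halve the rated P/E cycles at the next process node and that budget halves with it. Wear leveling and over-provisioning only spread the budget evenly across the chips; they don’t enlarge it.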

    We may not see the SSD eclipse the venerable kind of high-density storage, the Hard Disk Drive (HDD). Given the point of diminishing returns on Moore’s Law (scaling down increases density, increases speed and lowers cost), flash may never reach the density we enjoy in a typical consumer-brand HDD (2 TBytes). We may have to settle for other schemes that get us to that target through other means. Which brings me to my favorite product of the moment, the PCIe-based SSD: essentially a big circuit board with a bunch of SSDs tied together in a disk array, with a big fat memory controller/error-correction controller sitting on it. In terms of speed over the PCI Express bus, there are current products that beat single SATA 6 Gb/s SSDs by a factor of two. And given the room a PCIe card allows, any given module could be several times bigger, and built from flash two generations older, and still reach the desired 2-terabyte capacity of a typical SATA hard drive of today. Which to me sounds like a great deal if we could also see drops in price and increases in reliability by using older, previous-generation products and technology.
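
    The arithmetic behind that factor-of-two claim is simple enough, allowing some hypothetical round numbers: a single SATA 6 Gb/s drive tops out around 550 to 600 MB/s, while a PCIe card that stripes several ordinary SATA-class controllers behind a RAID chip gets to add their bandwidth together:

        \[ 2 \times 550\,\mathrm{MB/s} \approx 1.1\,\mathrm{GB/s}, \qquad 4 \times 500\,\mathrm{MB/s} \approx 2\,\mathrm{GB/s} \]

    Either figure comfortably clears a single SATA drive without asking anything exotic of the individual flash controllers.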

    But the mobile market is hard to please, and it is driving most decisions when it comes to what kinds of flash memory modules get ordered en masse. No doubt Apple, Samsung and everyone else in consumer electronics will push manufacturers to keep shrinking their chips to increase density and keep prices up on the final shipping product. I don’t know how efficiently an iPhone or iPad uses the available memory on, say, a 64GByte iPod touch. Most of it goes into storing the music, TV shows and apps people want to have readily available while passing time. The beauty of that design is it rewards consumption by providing more capacity and raising marginal profit at the same time. This engine of consumer electronics design doesn’t look likely to stall in spite of the physical limits on shrinking flash memory chips. But there will be a day of reckoning soon, not unlike when Intel hit the wall at 4 GHz serial processors and had to go multi-core to keep its marginal revenue flowing. It has been very lateral progress in terms of processor performance since then. It is more than likely flash memory chips cannot get much smaller without becoming really unreliable and defective, thereby sliding into the same lateral incrementalism Intel has adopted. Get ready for the plateau.

  • Bye, Flip. We’ll Miss You | Epicenter | Wired.com

    Cisco killed off the much-beloved Flip video camera Tuesday. It was an unglamorous end for a cool device that just a few years earlier shocked us all by coming to dominate the video-camera market, utterly routing established players like Sony and Canon.

    via Bye, Flip. We’ll Miss You | Epicenter | Wired.com.

    I don’t usually write about consumer electronics per se. This particular product category got my attention due to its long gestation and overwhelming domination of a category in the market that didn’t exist until it was created: the pocket video camera with a built-in flip-out USB connector. Like a USB flash drive with an LCD screen, a lens and one big red button, the Flip pared everything down to the absolute essentials, including the absolute immediacy of online video sharing via YouTube and Facebook. Now the revolution has ended, devices have converged, and many are telling the story of why this happened. Wired.com’s Robert Capps claims Flip lost its way under Cisco, during the Flip 2 revision, trying to get a WiFi-connected camera out there for people to record their ‘lifestream’.

    Prior to Robert Capps, different writers for different pubs all spouted the conclusion of Cisco’s own media relations folks: Cisco’s Flip camera was the victim of inevitable convergence, pure and simple. Smartphones, in particular Apple’s iPhone, kept adding features once available only on the Flip. Easy recording, easy sharing, higher resolution, a bigger LCD screen, and it could play Angry Birds too! I don’t cotton to that conclusion as fed to us by Cisco. It’s too convenient, and the convergence myth does not account for the one thing the Flip has that the iPhone doesn’t have, has never had, and WILL never have: a simple, industry-standard connector. Yes folks, convergence is not simply displacing cherry-picked features from one device and incorporating them into yours. True convergence is picking up all that is BEST about one device and incorporating it, so that fewer and fewer compromises must be made. Which brings me to the issue of the Apple multi-pin dock connector that has been with us since the iPod’s early days.

    See, the Flip didn’t have a proprietary connector, it just had a big old ugly USB connector. Just as big and ugly as the one your mouse and keyboard use to connect to your desktop computer. The beauty of that choice was the Flip could connect to just about any computer manufactured after 1998 (when USB was first hitting the market). The second thing was that all the apps for playing back the videos you shot, or cutting them down and editing them, were sitting on the Flip, just like on a hard drive, waiting for you to install them on whichever random computer you wanted to use. It didn’t matter whether or not the computer had the software installed; it COULD be installed directly from the Flip itself. Isn’t that slick?! You didn’t have to first search for the software online, download it and install it. It was right there: just double-click and go.

    Compare this to the Apple iOS cul-de-sac we all know as iTunes. Your iPhone, iPod touch, iPad or iPod doesn’t get to know your computer simply by communicating through its USB connector. You must first have iTunes installed AND have your proprietary Apple-to-USB cable to link up. Then and only then can your device ‘see’ your computer and the Internet. This gated community provided through iTunes allows Apple to see what you are doing, market directly to you and watch as you connect to YouTube to upload your video. All with the intention of one day acting on that information, maintaining full control at each step along the pathway from shooting to sharing your video. If this is convergence, I’ll keep my old Flip Mino (non-HD), thank you very much. Freedom (as in choice) is a wonderful thing, and compromising that in the name of convergence (mis-recognized as convenience) is no compromise. It is a racket, and everyone wants to sell you on the ‘good’ points of the racket. I am not buying it.

  • Quanta crams 512 cores into pizza box server • The Register

    Two of these boards are placed side-by-side in the chassis and stacked two high, for a total of eight server nodes. Eight nodes at 64 cores each gives you 512 total cores in a 2U chassis. The server boards slide out on individual trays and share two 1,100 watt power supplies that are stacked on top of each other and that are put in the center of the chassis. Each node has three SATA II ports and can have three 2.5-inch drives allocated to it; the chassis holds two dozen drives, mounted in the front and hot pluggable.

    via Quanta crams 512 cores into pizza box server • The Register.

    Amazing how power efficient Tilera has made its shipping products, now that Quanta has jammed 512 cores into a 2-rack-unit-high box. Roughly, this is 20% the size of the SeaMicro SM-10000 based on Intel Atom CPUs. Now that there’s a shipping product, I would like to see benchmarks or comparisons made on similar workloads using both sets of hardware. Numerically speaking it would be an apples-to-apples comparison, but each of these products is unique, and they are going to be difficult to judge in the coming year.

    First off, the Intel Atom is an x86-compatible low-power chip that helped launch the Asus/Acer netbook revolution (which, until the iPad killed it, was a big deal). However, to get higher density in its hardware, Quanta has chosen a different CPU than the Intel Atom (as used by SeaMicro). Instead, Quanta is the primary customer of an innovative new chip company we have covered on carpetbomberz.com previously: Tilera. For those who have not been following the company’s press releases, Tilera is a spin-off of an MIT research project in chip-scale networking. The idea was to create very simplified systems-on-a-chip (whole computers scaled down to a single chip) and then network them together on the same slice of silicon die. The speeds would be faster due to most of the physical interfaces and buses being contained directly in the chip’s circuits instead of externally on the computer’s motherboard. The promise of the Tilera chip is that you can scale up on the silicon die as opposed to the racks and racks of equipment within the data center. Performance of the Tilera chip has been somewhat of a secret; no benchmarks or real comparisons to commercially shipping CPUs have been published. But the general feeling is that any single core within a Tilera chip should be about as capable as the processor in your smartphone, and every bit as power efficient. Tilera has been planning to scale up to 100 CPUs within a single processor die eventually, and appears to have reached 64 cores on the chips shipping now, with the 100-core part still far from commercial production.

    I suspect both SeaMicro and Quanta will have their own custom OSes which run as a central supervisor, allowing administrators to install and set up instances of their favorite workhorse OSes. Each OS instance will be doled out to an available CPU core and then be linked up to a virtual network and virtual storage interface. Boom! You’ve got a web server, file server, rendering station, streaming server, whatever you need, in one fell swoop. And it is all bound together with two 1,100-watt power supplies in each 2-rack-unit-sized box. I don’t know how that compares to the SeaMicro power supply, but I imagine it is likely smaller per core than the SM-10000. Which can only mean that in the war for data center power efficiency, Quanta might deliver to market a huge shot across the bow of SeaMicro. All I can say is let the games begin, and let the market determine the winner.

  • Microsoft Research Watch: AI, NoSQL and Microsoft’s Big Data Future

    Probase is a Microsoft Research project described as an “ongoing project that focuses on knowledge acquisition and knowledge serving.” Its primary goal is to “enable machines to understand human behavior and human communication.” It can be compared to Cyc, DBpedia or Freebase in that it is attempting to compile a massive collection of structured data that can be used to power artificial intelligence applications.

    via Microsoft Research Watch: AI, NoSQL and Microsoft’s Big Data Future – ReadWriteCloud.

    Who knew Microsoft was so interested in things only IBM Research’s Watson could demonstrate? The AI (artificial intelligence) effort seems to be targeted at Bing search results. And to back it all up, they have to ditch their huge commitment to Microsoft SQL Server and go to a NoSQL database in order to hold all the unstructured data. This seems like a huge shift away from desktop and data center applications toward something much more oriented to cloud computing, where collected data is money in the bank. This is best expressed in the story’s Google vs. Facebook example: Google may collect data, but it is really delivering ads to eyeballs, whereas Facebook is just collecting the data and sharing it with the highest bidder. It seems like Microsoft is going the Facebook route of wanting to collect and own the data rather than merely hosting other people’s data (like Google and Yahoo).

  • OCZ Acquires Indilinx SSD Controller Maker

    Prior to SandForce’s arrival, Indilinx was regarded as the leading maker of controllers for solid-state drives. The company gained both consumer and media favoritism when it demonstrated that drives based on its own controllers were competitive with leading drives made by Intel. Indilinx’s controllers allowed many SSD manufacturers to bring SSD prices down to a level where a large number of mainstream consumers started to take notice.

    via OCZ Acquires Indilinx SSD Controller Maker.

    This is surprising news, especially following the announcement and benchmark testing of OCZ’s most recent SSDs. They are the highest-performing SATA-based SSDs on the market, and the boost in speed comes primarily from their drive controller chip, supplied by SandForce, not Indilinx. Buying a competing controller maker is no doubt going to disappoint their suppliers at SandForce. And I worry a bit that SandForce’s technical lead is something even a good competitor like Indilinx won’t be able to overcome. I’m sticking with any drive that has a SandForce controller inside, given their track record of increasing performance and reliability with each new generation of product.

    So I am of two minds. I guess it’s cool that OCZ has enough power and money to provide its own drive controllers for its SSDs. But at the same time, the second-place drive controller is a much slower, lower-performance part than the top competitor’s. In the future I hope OCZ introduces some price variation by offering both SandForce- and Indilinx-based SSDs and charging less for the Indilinx models. If not, I don’t know how they will achieve technological superiority now that SandForce has such a lead.

  • Calxeda boasts of 5 watt ARM server node • The Register

    Calxeda is not going to make and sell servers, but rather make chips and reference machines that it hopes other server makers will pick up and sell in their product lines. The company hopes to start sampling its first ARM chips and reference servers later this year. The first reference machine has 120 server nodes in a 2U rack-mounted format, and the fabric linking the nodes together internally can be extended to interconnect multiple enclosures together.

    via Calxeda boasts of 5 watt ARM server node • The Register.

    SeaMicro and now Calxeda are going gangbusters for the ultra-dense, low-power server market. Unlike SeaMicro, Calxeda wants to create reference designs it licenses to manufacturers, who will build machines with 120 server nodes in a 2U enclosure. SeaMicro’s record right now is 512 cores per 10U enclosure, or roughly 102+ cores per 2U. The difference is that the SeaMicro product uses a low-power Intel Atom CPU, whereas Calxeda is using a processor found more often in smartphones and tablet computers. SeaMicro has hinted they are not wedded to the Intel architecture, but they are more interested in shipping real live product than in coming up with generic designs others can license. In the long run it’s entirely possible SeaMicro may switch to a different CPU; they have indicated previously that they designed their servers with enough flexibility to swap out the processor for any other CPU if necessary. It would be really cool to see an apples-to-apples comparison of a SeaMicro server using first Intel CPUs and then ARM-based CPUs.

  • AppleInsider | Insider Mac OS X 10.7 Lion: Auto Save, File Versions and Time Machine

    However, Windows’ Shadow Copy is really intended for creating a snapshot of an entire volume for backup purposes; users can’t trigger the creation of a new version of an individual file in Windows. This makes Lion’s Versions a very different beast: it’s more akin to a versioning file system that works like Time Machine, but local to the user’s own disk.

    via AppleInsider | Insider Mac OS X 10.7 Lion: Auto Save, File Versions and Time Machine [Page 2].

    Reading this article from AppleInsider’s series of previews of Mac OS X 10.7 has been an education in both the iOS-based universe and the good ol’ desktop universe I already know and love. At first I was apprehensive about the desktop OS taking such a back seat to the mobile devices Apple has been introducing at an increasingly fast pace. From iPods to iPhones to the iPod Touch and now the iPad, there’s no end to the permutations iOS-based devices can take. Prior to the iPhone and iPod Touch releases, Apple was using an embedded OS with none of the sophistication and capability of a real desktop operating system. This was both a frugal and conservative approach, as media players, while having real CPUs inside, were never intended to have network stacks, garbage collection on UI servers, and the like. There was always enough there to present a user interface of some sort, with access to a local file system and the ability to sync files between a host-based iTunes client and the device (whichever generation of iPod it might be). Along with that, each generation of hardware most likely varied by degrees, as video playback became a touted feature in newer iPods with bigger internal hard drives (the so-called video iPods). I can imagine that got complicated quickly, as CPUs, video chips and media playback capabilities ranged widely up and down the product line. As each device required its own tweaks to the embedded OS, and iTunes was tweaked to accommodate these local variations, I’m sure the all-seeing eye of Steve Jobs began to wince at the increasing complexity of the iPod product line. Enter iOS: a smaller, cleaner, fully optimized OS for low-power mobile devices. It’s got everything a desktop OS has without any of the legacy device concerns (backward compatibility) of a typical desktop OS. This allowed for creating ‘just enough’ capability in the networking stack, the UI server and the local storage. Apps written for iOS were unique to that environment, though they might have started out as Mac OS X apps. By taking the original code base, refactoring it and doing complete low-level rewrites from top to bottom, you got a version of the Safari web browser on a mobile device. It could display ANY webpage and do some display optimization of the page on the fly. And there were a number of developers rushing to get apps running on the new devices. So whither the Apple Mac OS X?

    Well, in the rush of creating an iOS app universe, the iOS development team added many features along the way. One great gap was the missing cut & paste capability long enjoyed on desktop OSes. Eventually this feature made it in, and others like it slowly got integrated. Apple’s custom A4 chip, using an ARM Cortex-A8 CPU, was tearing up the charts, out-competing every other mobile phone on the market. Similarly, the iPad took that same approach of getting out there with new features and becoming a more desktop-like mobile device. A year has passed since the original iPad hit the market, the Mac OS is due for a change, and the big question is: what does Steve Jobs think? There were hints and rumors he wanted everyone to enjoy the clean-room design of iOS and dump the legacy messiness of old Mac OS X. Dan Lyons gave voice to these concerns quite clearly in his June 8 article in Newsweek, and Steve Jobs would eventually reply directly to him, stating emphatically that he was wrong. Actions speak louder than words, though: Apple’s Worldwide Developers Conference in 2010 seemed to really hard-sell the advantages of developing for the new iOS. Conversely, Microsoft has proven over and over again that legacy support in an OS is a wonderful source of income, once you have established your monopoly. However, Apple has navigated the legacy hardware seas before, with its first big migration from Motorola 68000 processors to the PowerPC chip, then subsequently the migration from PowerPC to Intel chips. From a software standpoint, attrition occurs as people dump their legacy hardware anyway (it is not uncommon for Apple users to eventually get rid of their older hardware). So, to help deliver the benefits of newer software, requirements are now in place such that certain first-generation Intel-based Macs won’t be able to run the newest Mac OS X (that’s the word now). Similarly, legacy support for PowerPC-native apps running on Intel in emulation (using the Rosetta software) will also go away. Which then brings us to the point of this whole blog posting: where’s the beef?

    The beef dear reader is not in the computers but in ourselves. As Macintosh OSes evolve so do the workflow and the new paradigm being foisted upon us through the use of mobile devices is the lack of need to go to the File Menu -> Choose Save or Save As… That’s what the new iOS design portends in the future. Same goes for open documents in process, everything is done for you at long last. The computer does what finally you thought it did all the time and what Microsoft eventually built into Word (not the OS itself), Autosave. Newly developed versions of TextEdit made by Apple to run under OS X 10.7 were tested and tried out to see how they work under the new Auto Save and Versions architecture. Now, you just make a new document and the computer (safely) assumes you will most likely want to save the document as you are working on it, and you may want to go back and undo some changes you made. After all these years of using desktop computers, this is now built right in at long last. So from the commandline to the GUI and now to the Mobile OS, computer architects and UI engineers have a good idea of what you might want to do before you choose to do it, and it’s built in at the lowest level of the OS finally! And all of these are going to be in the next version of Mac OS X, due for release this July, 2011. After reading these articles from AppleInsider looking at the screenshots, I’m way more enthused and willing to change and adapt the way I work to the new regime of hybrid iOS and MacOS X going forward.