Category: technology

General technology, not anything in particular

  • AppleInsider | Insider Mac OS X 10.7 Lion: Auto Save, File Versions and Time Machine

    Original 1984 Macintosh desktop
    Image via Wikipedia

    However, Windows’ Shadow Copy is really intended for creating a snapshot of an entire volume for backup purposes; users can’t trigger the creation of a new version of an individual file in Windows. This makes Lion’s Versions a very different beast: it’s more akin to a versioning file system that works like Time Machine, but local to the user’s own disk.

    via AppleInsider | Insider Mac OS X 10.7 Lion: Auto Save, File Versions and Time Machine [Page 2].

    Reading this article from AppleInsider’s series of previews of Mac OS X 10.7 has been an education in both the iOS-based universe and the good ol’ desktop universe I already know and love. At first I was apprehensive about the desktop OS taking such a back seat to the mobile devices Apple has been introducing at an increasingly fast pace. From iPods to iPhones to the iPod Touch and now the iPad, there’s no end to the permutations iOS-based devices can take. Prior to the iPhone and iPod Touch releases, Apple was using an embedded OS with none of the sophistication and capability of a real desktop operating system. This was both a frugal and conservative approach, as media players, while having real CPUs inside, were never intended to have network stacks, garbage collection on UI servers, and so on. There was always enough there to present a user interface of some sort, with access to a local file system and the ability to sync files between a host-based iTunes client and the device (whichever generation of iPod it might be). Along with that, each generation of hardware most likely varied by degrees, as video playback became a touted feature in newer iPods with bigger internal hard drives (the so-called video iPods). I can imagine that got complicated quickly as CPUs, video chips, and media playback capabilities ranged widely up and down the product line. As each device required its own tweaks to the embedded OS, and iTunes was tweaked to accommodate these local variations, I’m sure the all-seeing eye of Steve Jobs began to wince at the increasing complexity of the iPod product line.

    Enter iOS, a smaller, cleaner, fully optimized OS for low-power mobile devices. It has everything a desktop OS has without any of the legacy device concerns (backward compatibility) of a typical desktop OS. This allowed for creating ‘just enough’ capability in the networking stack, the UI server, and local storage. Apps written for iOS were unique to that environment, though they might have started out as Mac OS X apps. By taking the original code base, refactoring it, and doing complete low-level rewrites from top to bottom, you got a version of the Safari web browser on a mobile device. It could display ANY web page and even do some display optimizations of the page on the fly. And there were plenty of developers rushing to get an app to run on the new devices. So whither Apple’s Mac OS X?

    Well, in the rush of creating an iOS app universe, the iOS development team added many features along the way. One great gap was the missing cut & paste capability long enjoyed on desktop OSes. Eventually this feature made it in, and others like it slowly got integrated. Apple’s custom A4 chip, built around an ARM Cortex-A8 CPU, was tearing up the charts as iOS outcompeted every other mobile phone OS on the market. Similarly, the iPad took that same approach of getting out there with new features and becoming a more desktop-like mobile device. A year has passed since the original iPad hit the market, the Mac OS is due for a change, and the big question is: what does Steve Jobs think? There were hints and rumors that he wanted everyone to enjoy the clean-room design of iOS and dump the legacy messiness of old Mac OS X. Dan Lyons gave voice to these concerns quite clearly in his June 8 article in Newsweek, and Steve Jobs would eventually reply directly to him and state emphatically that he was wrong. But actions speak louder than words: Apple’s Worldwide Developers Conference in 2010 seemed to really hard-sell the advantages of developing for the new iOS. Conversely, Microsoft has proven over and over again that legacy support in an OS is a wonderful source of income, once you have established your monopoly. However, Apple has navigated the legacy hardware seas before, first with the big migration from Motorola 68000 processors to the PowerPC chip, then with the migration from PowerPC to Intel chips. From a software standpoint, attrition occurs as people dump their legacy hardware anyway (it’s not uncommon for Apple users to eventually get rid of their older hardware). So, to help deliver the benefits of newer software, requirements are now in place such that even certain first-generation Intel-based Macs won’t be able to run the newest Mac OS X (that’s the word now). Similarly, legacy support for PowerPC-native apps running in emulation on Intel (using the Rosetta software) will also go away. Which then brings us to the point of this whole blog post: where’s the beef?

    The beef, dear reader, is not in the computers but in ourselves. As the Macintosh OS evolves, so do our workflows, and the new paradigm being foisted upon us through mobile devices is the lack of any need to go to the File menu and choose Save or Save As… That’s what the new iOS design portends for the future. The same goes for open documents in progress; everything is done for you at long last. The computer finally does what you always thought it did, and what Microsoft eventually built into Word (not into the OS itself): autosave. Newly developed versions of TextEdit, made by Apple to run under OS X 10.7, were tested and tried out to see how they work under the new Auto Save and Versions architecture. Now you just make a new document, and the computer (safely) assumes you will most likely want to save it as you are working on it, and that you may want to go back and undo some changes you made. After all these years of using desktop computers, this is finally built right in. So from the command line to the GUI and now to the mobile OS, computer architects and UI engineers have a good idea of what you might want to do before you choose to do it, and it’s built in at the lowest level of the OS at last! And all of this is going to be in the next version of Mac OS X, due for release in July 2011. After reading these articles from AppleInsider and looking at the screenshots, I’m far more enthused and willing to change and adapt the way I work to the new regime of hybrid iOS and Mac OS X going forward.
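
    To make the Auto Save and Versions idea concrete, here is a minimal sketch of my own (in Python, and emphatically not Apple’s implementation): every edit is persisted without an explicit Save, and earlier states are retained so you can step back through them, the way Versions promises to.

        import time

        class AutoSavingDocument:
            """Toy illustration of Auto Save plus Versions: edits are saved
            automatically and prior states are retained for later revert."""

            def __init__(self, text=""):
                self.text = text
                self.versions = []          # snapshots of earlier states

            def edit(self, new_text):
                # Snapshot the current state, then apply the change;
                # the user never chooses Save or Save As...
                self.versions.append((time.time(), self.text))
                self.text = new_text

            def revert(self, steps_back=1):
                # Walk back through retained versions, like a local,
                # per-document Time Machine.
                _, self.text = self.versions[-steps_back]

        doc = AutoSavingDocument()
        doc.edit("first draft")
        doc.edit("second draft")
        doc.revert()                        # back to "first draft"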

  • OCZ Vertex 3 Preview – AnandTech

    UEFI Logo
    Image via Wikipedia

    The main categories here are SF-2100, SF-2200, SF-2500 and SF-2600. The 2500/2600 parts are focused on the enterprise. They’re put through more aggressive testing, their firmware supports enterprise specific features and they support the use of a supercap to minimize dataloss in the event of a power failure. The difference between the SF-2582 and the SF-2682 boils down to one feature: support for non-512B sectors. Whether or not you need support for this really depends on the type of system it’s going into. Some SANs demand non-512B sectors in which case the SF-2682 is the right choice.

    via OCZ Vertex 3 Preview: Faster and Cheaper than the Vertex 3 Pro – AnandTech :: Your Source for Hardware Analysis and News.

    The cat is out of the bag: OCZ has not one but two SandForce SF-2000 series based SSDs on the market now. And the consumer-level product even performs slightly better than the enterprise-level product, at lower cost. These are interesting times indeed. The speeds are so fast with the newer SandForce drive controllers that, over a SATA 6Gb/s interface, you get speeds close to what could previously only be purchased as a PCIe-based SSD array for $1,200 or so. The economics of this are getting topsy-turvy, with new generations of single drives outdistancing previous top-end products (I’m talking about you, Fusion-io, and you, Violin Memory). SandForce has become the drive controller for the rest of us, and with speeds like 500MB/sec read and write, what more could you possibly ask for? I would say the final bottleneck on the desktop/laptop computer is quickly vanishing, and we’ll have to wait and see just how much faster SSDs become. My suspicion is that a computer motherboard’s BIOS will slowly creep up to be the last link in the chain of noticeable computer speed. Once we get a full range of UEFI motherboards and fully optimized embedded software to configure them, we will theoretically have the fastest personal computers one could possibly design.
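
    A little back-of-the-envelope math (my own illustration, rough numbers) shows why the jump to SATA 6Gb/s matters for a 500MB/sec drive: after the interface’s 8b/10b encoding overhead, SATA 3Gb/s tops out around 300MB/sec, which would throttle these new controllers.

        # Rough, illustrative numbers only: SATA uses 8b/10b encoding,
        # so only 8 of every 10 bits on the wire carry user data.
        def sata_usable_mb_per_sec(line_rate_gbps):
            data_bits_per_sec = line_rate_gbps * 1e9 * 8 / 10
            return data_bits_per_sec / 8 / 1e6   # bits -> bytes -> MB

        print(sata_usable_mb_per_sec(3))   # ~300 MB/s: a ceiling below these new drives
        print(sata_usable_mb_per_sec(6))   # ~600 MB/s: headroom for 500 MB/s reads and writes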

  • TidBITS Macs & Mac OS X: Apple Reveals More about Mac OS X Lion

    Image representing Apple as depicted in CrunchBase
    Image via CrunchBase

    Finally, despite Apple’s dropping of the Xserve line (see “A Eulogy for the Xserve: May It Rack in Peace,” 8 November 2010), Mac OS X Server will make the transition to Lion, with Apple promising that the new version will make setting up a server easier than ever. That’s in part because Lion Server will be built directly into Lion, with software that guides you through configuring the Mac as a server. Also, a new Profile Manager will add support for setting up and managing Mac OS X Lion, iPhone, iPad, and iPod touch devices. Wiki Server 3 will offer improved navigation and a new Page Editor. And Lion Server’s WebDAV support will provide iPad users the ability to access, copy, and share server-based documents.

    via TidBITS Macs & Mac OS X: Apple Reveals More about Mac OS X Lion.

    Here’s to seeing a great democratization of OS X Server once and for all. While Apple did deserve to make some extra cash on a server version of the OS, I’m sure it had very little impact on their sales overall (positive or negative). However, including it with the base-level OS and letting it be unlocked (for money or for free) can only be a good thing. Where I work I already run a single-CPU, four-core Intel Xserve. I think I should buy some cheap RAM, max out the memory, and upgrade this summer to OS X Lion Server.

  • SeaMicro drops 64-bit Atom bomb server • The Register

    Image representing SeaMicro as depicted in CrunchBase
    Image via CrunchBase

    The base configuration of the original SM10000 came with 512 cores, 1 TB of memory, and a few disks; it was available at the end of July last year and cost $139,000. The new SM10000-64 uses the N570 processors, for a total of 256 chips but 512 cores, the same 1 TB of memory, eight 500 GB disks, and eight Gigabit Ethernet uplinks, for $148,000. Because there are half as many chipsets on the new box compared to the old one, it burns about 18 percent less power, too, when configured and doing real work.

    via SeaMicro drops 64-bit Atom bomb server • The Register.

    I don’t want to claim that SeaMicro is taking a page out of the Apple playbook, but keeping your name in the technology news press is always a good thing. I have to say it is a blistering turnaround time to release a second system board for the SM10000 server so quickly. And knowing they have some sales to back up the need for further development makes me think this company really could make a go of it. 512 CPU cores in a 10U chassis is still a record of some sort, and I hope one day to see SeaMicro publish some white papers and testimonials from current customers, so we can see what killer application this machine has found in the data center.

  • Showcase Your Skills & Analyze Which Skills Are Trending With LinkedIn’s New Tool

    Image representing LinkedIn as depicted in CrunchBase
    Image via CrunchBase

    Professional network LinkedIn has just introduced the beta launch of a new feature LinkedIn Skills, a way for you to search for particular skills and expertise, and of course, showcase your own and in LinkedIn’s words, “a whole new way to understand the landscape of skills & expertise, who has them, and how it’s changing over time.”

    via Showcase Your Skills & Analyze Which Skills Are Trending With LinkedIn’s New Tool.

    It may not seem that important at first, especially if people don’t keep their LinkedIn profiles up to date. However, for the large number of ‘new’ users actively seeking positions in the job market, I’m hoping those data will prove more useful. They might be worth following over time to see what demand there is for those skills in the marketplace. That is the promise, at least. My concern, though, is that just as grades have inflated over time at most U.S. universities, skills too will be overstated, lied about, and ultimately untrustworthy as people try to compete with one another on LinkedIn.

  • OpenID: The Web’s Most Successful Failure | Wired.com

    First 37Signals announced it would drop support for OpenID. Then Microsoft’s Dare Obasanjo called OpenID a failure (along with XML and AtomPub). Former Facebooker Yishan Wong’s scathing (and sometimes wrong) rant calling OpenID a failure is one of the more popular answers on Quora.

    But if OpenID is a failure, it’s one of the web’s most successful failures.

    via OpenID: The Web’s Most Successful Failure | Webmonkey | Wired.com.

    I was always of the mind that single sign-on is a good thing, not a bad one. Any service, whether for work or outside of it, that can re-use an identifier and authentication should make things easier to manage and possibly more secure in the long run. There are proponents for and against anything that looks or acts like single sign-on. Detractors always argue that if one of the services gets hacked, attackers can somehow gain access to your password and identity and hack into your accounts on all the other systems out there. In reality, with a typical single sign-on service you don’t ever send a password to the place you’re logging into (unless it’s the source of record, like the website that hosts your OpenID). Instead you send something more like a scrambled message that only you could have originated, and which the website you’re logging into will be able to verify. That message is vouched for by your OpenID provider, the source of record for your identity online. So nobody else is storing your password, and nobody is able to hack into all your other accounts when they hijack your favorite web service.
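
    To make that concrete, here is a much-simplified sketch (in Python, using an HMAC over a shared secret; this illustrates the idea, not the actual OpenID wire protocol): the identity provider signs an assertion about who you are, and the site you’re visiting checks the signature without ever seeing a password.

        import hmac, hashlib

        # Assumption for illustration: the provider and the relying party have
        # already agreed on a shared secret (OpenID calls this an association);
        # how that negotiation happens is omitted here.
        SHARED_SECRET = b"negotiated-association-secret"

        def provider_sign(claimed_identity):
            """The OpenID provider asserts 'this user is claimed_identity'."""
            assertion = claimed_identity.encode()
            signature = hmac.new(SHARED_SECRET, assertion, hashlib.sha256).hexdigest()
            return assertion, signature

        def relying_party_verify(assertion, signature):
            """The site being logged into checks the signature; no password changes hands."""
            expected = hmac.new(SHARED_SECRET, assertion, hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, signature)

        assertion, sig = provider_sign("https://example.com/users/alice")
        print(relying_party_verify(assertion, sig))   # True: identity accepted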

    Where I work, I was a strong advocate for centralized identity management like OpenID. Some people thought its only use was as a single sign-on service. But real centralized identity management also encompasses the authorizations you have once you have declared and authenticated your identity. And it’s authorization that is the key to what makes a single sign-on service really useful.

    I may be given a ‘role’ within someone’s website or page on a social networking site that either adds or takes away levels of privacy with respect to the person who has declared me a ‘friend’. And if they want to redefine my level of privilege, all they have to do is change the privileges for that role, not for me personally, and all my levels of access change accordingly. Why? Because a role is kind of like a rank or a group membership. Just as everyone in the army who is an officer can enjoy benefits like attending the officers’ club because they hold the role of officer, I can see more of a person’s profile or personal details because I have been declared a friend. Nowhere in this is it necessary to define specific restrictions or levels of privilege for me individually! It’s all based on my membership in a group. And if someone wants to eliminate that group or change the permissions for all of its members, they do it once, and only once, to the definition of that role, and it cascades out to all the members from that point on. So OpenID can be authentication (which is where most people stop) and it can additionally be authorization (what am I allowed and not allowed to do once I prove who I am). It’s a very powerful and poorly understood capability.
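
    Here is a tiny sketch of that role idea (my own illustration, in Python): permissions hang off the role, not the person, so changing the role’s definition once cascades to every member.

        # Permissions attach to roles; people attach only to roles.
        role_permissions = {
            "friend":  {"view_profile", "view_photos"},
            "officer": {"enter_officers_club"},
        }

        user_roles = {
            "alice": {"friend"},
            "bob":   {"friend", "officer"},
        }

        def allowed(user, permission):
            """A user may do something only if one of their roles grants it."""
            return any(permission in role_permissions[r] for r in user_roles[user])

        print(allowed("alice", "view_photos"))          # True
        # Redefine the role once...
        role_permissions["friend"].discard("view_photos")
        print(allowed("alice", "view_photos"))          # ...and it cascades: now False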

    The widest application I’ve seen so far of something like OpenID is the Facebook ‘sign-on’ service that allows you to make comments on articles on news websites and weblogs. Disqus is a third-party provider that acts as a hub for anyone who wants to re-use someone’s Facebook or OpenID credentials to prove that they are real and not a rogue spambot. That chain of identity is maintained by Disqus providing the plumbing back to whichever of the many services someone might be subscribed to or participate in. I already have an OpenID, but I also have a Facebook account, and Disqus will allow me to use either one. Given how much information might be passed along by Facebook to a third party (something they are notorious for allowing applications to do), I chose to use my OpenID, which more or less says I am a particular user at a particular website and I am the owner of that website as well. A chain of authentication just good enough to allow me to make comments on an article is what OpenID provides. Not too much information, just enough information, travels back and forth. And because of this precision, abolishing all the unneeded private detail and the need to create an account on the website hosting the article, I can freely come and go as I please.

    That is the lightweight joy of OpenID.

  • Dave Winer’s EC2 for poets | Wired.com

    Dave Winer
    Image via Wikipedia

    Winer wants to demystify the server. “Engineers sometimes mystify what they do, as a form of job security,” writes Winer, “I prefer to make light of it… it was easy for me, why shouldn’t it be easy for everyone?”

    via A DIY Data Manifesto | Webmonkey | Wired.com.

    Dave Winer believes Amazon’s Elastic Compute Cloud (EC2) is the path toward a more self-reliant, self-actualizing future for anyone who keeps any of their data on the Internet. So he proposes a project entitled EC2 for Poets. Having used Dave’s blogging software, Radio Userland, in the past, I’m very curious to see what the new project looks like.

    Back in the old days I paid $40 to Frontier for the privilege of reading and publishing my opinions on articles I subscribed to through the Radio Userland client. It was a great RSS reader at the time, and I loved being able to clip and snip out bits of articles and embed my comments around them. I subsequently moved on to Bloglines and then Google Reader, in that order. Now I use WordPress to keep my comments and article snippets organized and published on the Web.

  • iPod classic stock dwindling at Apple

    iPod classic
    Image by Freimut via Flickr

    Apple could potentially upgrade the Classic to use a recent 220GB Toshiba drive, sized at the 1.8 inches the player would need.

    via iPod classic stock dwindling at Apple, other retailers | iPodNN.

    Interesting indeed: it appears Apple is letting supplies of the iPod Classic run low. There’s no word yet as to why, but there could be a number of reasons, as speculated in this article. Most technology news websites understand the divide between the iPhone/iPod Touch operating system and all the old legacy iPod devices (which run an embedded OS that only drives the device itself). Apple would like to consolidate its consumer product development efforts by slowly winnowing out non-iOS-based iPods. However, given the hardware requirements demanded by iOS, Apple will be hard-pressed to jam such a full-featured piece of software into the iPod nano and iPod shuffle. So whither the old click-wheel iPod empire?

  • Toshiba rolls out 220GB drive – Could the iPod Classic see a refresh?

    first generation iPod
    Image via Wikipedia

    Toshiba Storage Device Division has introduced its MKxx39GS series 1.8-inch spinning platter drives with SATA connectors.

    via Toshiba rolls out 220GB, extra-compact 1.8-inch hard drive | Electronista.

    Seeing this announcement reminded me a little of the old IBM Microdrive: a one-inch spinning disk that fit into a CompactFlash-sized form factor. Those drives were 340MB at the time, an astoundingly dense storage format that digital photographers gravitated to very quickly. Eventually the Microdrive was improved to around 1GByte per drive in the same small form factor. In the end the market for this storage dried up, as smaller and smaller cameras became available with larger and larger amounts of internal storage and slots for removable media like Sony’s Memory Stick format or the SD card format. The Microdrive was also hampered by a very high cost per MByte versus other available storage by the end of its useful lifespan.

    But no one ever knows what new innovative products might hit the market. Laptop manufacturers continued to improve on their expansion bus, known variously as PCMCIA, PC Card, and eventually CardBus. The idea was that you could plug any kind of device you wanted into that expansion bus: connect to a dial-up network, a wired Ethernet network, or a wireless network. CardBus was 32-bit clean and designed to be as close to the desktop PCI expansion bus as possible. Folks like Toshiba were making small hard drives that would fit the tiny dimensions of that slot, containing all the drive electronics within the CardBus card itself. Storage size improved as the hard drive market itself improved the density of its larger 2.5″ and 3.5″ desktop hard drive products.

    I remember the first 5GByte CardBus hard drive and marveling at how far the folks at Toshiba and Samsung had outdistanced IBM. It was followed soon after by a 10GByte drive. But just as we were wondering how cool this was, Apple created its own take on a product being popularized by a company named Rio: a new kind of handheld music player that primarily played back audio .mp3 files. It could hold 5GBytes of music (compared to 128MBytes and 512MBytes for most top-of-the-line Rio products at the time). It had a slick, very easy-to-navigate interface with a scroll wheel you could click down on with your thumb. Yes, it was the first-generation iPod, and it demanded a large quantity of those little bitty hard drives Samsung and Toshiba were bringing to market.

    Each year storage density would increase and a new generation of drives would arrive, and each year a new iPod would hit the market taking advantage of the new hard drives. The numbers seemed to double very quickly: 20GB, 30GB (the first ‘video’-capable iPod), 40GB, 60GB, 120GB, and finally today’s iPod Classic at a whopping 160GBytes of storage! And then came the great freeze: the slowdown and the transition to flash-memory-based iPods, which are solid state. No moving parts, no chance of mechanical failure, no loss of data, and speeds unmatched by any hard drive of any size currently on the market. The flash storage transition also meant lower power requirements, longer battery life, and, for the first time, the real option of marrying a cell phone with your iPod (I do know there was an abortive attempt to do this on a smaller scale with Motorola phones at Cingular). The first two options were 4GB and 8GB iPhones using solid-state flash memory. So whither the iPod Classic?

    The iPod Classic is still on the market for those wishing to pay slightly less than the price of an iPod Touch. You get a much larger amount of total storage (for both video and audio), but things have stayed put at 160GBytes for a very long time now. Manufacturers like Toshiba hadn’t come out with any new products, seeing the end in sight for the small 1.8″ hard drive, and Samsung dropped its 1.8″ hard drives altogether, seeing where Apple was going with its product plans. So I’m both surprised and slightly happy to see Toshiba soldier onward and bring out a new product. I’m thinking Apple should really do a product refresh on the iPod Classic. They could also add iOS as a means of up-scaling and up-marketing the device to people who cannot afford the iPod Touch, leaving the price right where it is today.

  • IBM Teams Up With ARM for 14-nm Processing

    iPad, iPhone, MacBook Pro
    Big, Little & Little-est!

    Monday IBM announced a partnership with UK chip developer ARM to develop 14-nm chip processing technology. The news confirms the continuation of an alliance between both parties that launched back in 2008 with an overall goal to refine SoC density, routability, manufacturability, power consumption and performance.

    via IBM Teams Up With ARM for 14-nm Processing.

    It is interesting that IBM is striking out so far beyond the current state-of-the-art process node for silicon chips; 22nm or thereabouts is what most producers of flash memory are targeting for their next-generation products. Smaller feature sizes mean more chips per wafer, and higher density means storage sizes go up for both flash drives and SSDs without increasing in physical size (who wants to use a brick-sized external SSD, right?). It is also interesting that ARM is IBM’s partner for its most aggressive target yet in chip design rules. It appears that system-on-chip (SoC) designers like ARM are now the state-of-the-art producers of power- and heat-optimized computing. Look at Apple’s custom A4 processor for the iPad and iPhone: that chip has some of the lowest power requirements of any processor on the market, and it currently leads the pack for battery life in the iPad (10 hours!). So maybe it does make sense to choose ARM right now, as they can benefit the most, and the fastest, from any shrink in the size of the wire traces used to create a microprocessor or a whole integrated system on a chip. Strength built on strength is a winning combination, and it shows that IBM and ARM share an affinity for the low-power future of cell phone and tablet computing.

    But consider also the last article I wrote, about Tilera’s product plans for cloud computing in a box. ARM chips could easily be the basis for much lower-power, much higher-density computing clouds. Imagine a Googleplex-style data center running ARM CPUs on cookie trays instead of commodity Intel parts. That’s a lot of CPUs and a lot less power draw, both big pluses for a Google design team working on a new data center. True, legacy software concerns might overrule a switch to lower-power parts. But if the cost of the electricity saved would offset the opportunity cost of switching to a new CPU (and having to recompile software for the new chip), then Google would be crazy not to seize on this.
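
    Here is the back-of-the-envelope version of that trade-off (all of these numbers are made-up placeholders, not real Google or Tilera figures): if the electricity saved over the servers’ service life exceeds the one-time cost of porting and recompiling the software, the switch pays for itself.

        # Purely illustrative numbers -- swap in your own.
        servers          = 10_000
        watts_saved_each = 150          # assumed savings per server vs. commodity x86
        price_per_kwh    = 0.10         # dollars
        years_in_service = 3
        porting_cost     = 2_000_000    # assumed one-time cost to port/recompile software

        hours = years_in_service * 365 * 24
        energy_savings = servers * watts_saved_each / 1000 * hours * price_per_kwh

        print(f"Electricity saved over {years_in_service} years: ${energy_savings:,.0f}")
        print("Worth switching" if energy_savings > porting_cost else "Not worth it")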