Category: blogroll

This is what I subscribe to myself.

  • Hope for a Tool-Less Tomorrow | iFixit.org

    I’ve seen the future, and not only does it work, it works without tools. It’s moddable, repairable, and upgradeable. Its pieces slide in and out of place with hand force. Its lid lifts open and eases shut. It’s as sleek as an Apple product, without buried components or proprietary screws.

    via Hope for a Tool-Less Tomorrow | iFixit.org. (Image: HP Z1 workstation)

    Oh how I wish this were true today for Apple. I say this as a recent purchaser of an Apple-refurbished 27″ iMac. My logic and reasoning for going with refurbished over new was based on a few bits of knowledge gained from reading Macintosh weblogs. The rumors I read included the idea that Apple-repaired items are strenuously tested before being resold, and that in some cases returned items are not even broken; they are returns based on buyer’s remorse or cosmetic problems. So there’s a good chance the logic board and LCD have no problems. Reading back this past Summer, just after the launch of Mac OS X 10.7 (Lion), I saw reports of lots of crashes on 27″ iMacs. So I figured a safer bet would be to get a 21″ iMac. But then I started thinking about Flash-based Solid State Disks, and looking at the prohibitively high prices Apple charges for its factory-installed SSDs, I decided I needed something that I could upgrade myself.

    But as you may know, iMacs have never been, and continue not to be, user-upgradable. That’s not to say people haven’t tried, and succeeded, in upgrading their own iMacs over the years. Enter the aftermarket for SSD upgrades. Apple has attempted to zig and zag as hobbyists swap in newer components like larger hard drives and SSDs. Witness the temperature sensor on the boot drive in the 27″ iMac, where Apple has added a sensor wire to measure the internal heat of the hard drive. The Mac monitors this signal and revs up the internal fans accordingly. Any iMac hobbyist attempting to swap in a 3TByte or 4TByte drive in place of the stock Apple 2TByte drive will suffer the iMac’s inevitable panic mode: it cannot see its temperature sensor (these replacement drives don’t have the sensor built in) and assumes the worst. They say the noise is deafening when those fans speed up, and they never, EVER slow down. This is Apple’s attempt to ensure sanctity through obscurity: no one is allowed to mod or repair, and that includes anyone foolish enough to attempt to swap the internal hard drive of their iMac.
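    To make that failure mode concrete, here is a toy sketch of the fan logic. This is my own model, not Apple’s actual SMC firmware; the temperatures and RPM figures are invented for illustration:

    ```python
    # Toy model (mine, not Apple's firmware) of the missing-sensor failure mode:
    # if the drive's temperature sensor is simply absent, assume the worst and
    # pin the fans at maximum. All numbers are made up for illustration.

    def fan_rpm(drive_temp_c):
        """Return a fan speed for a drive temperature reading, where None
        means the sensor is missing (an aftermarket drive without the wire)."""
        if drive_temp_c is None:
            return 5500                                  # panic: full speed, forever
        return 1200 + max(0, int((drive_temp_c - 40) * 150))

    print(fan_rpm(38))    # 1200 -> quiet idle
    print(fan_rpm(55))    # 3450 -> ramped up, but sane
    print(fan_rpm(None))  # 5500 -> deafening, and it never slows down
    ```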

    But thank goodness there’s a workaround: the 27″ iMac, whose internal case is just large enough to take a secondary hard drive. You can slip a 2.5″ SSD into that chassis; you just gotta know how to open it up. And therein lies the theme of this essay: user-upgradable, user-friendly computer case design. The antithesis of this idea IS the 27″ iMac, as you can see from the teardown steps published by iFixit and by the photographer Brian Tobey. Both websites make clear the excruciating minutiae of finding and disconnecting the myriad miniature cables that tie the logic board to the rest of the computer. Without going through those steps, one cannot reach the spare SATA connectors facing the back of the iMac case. I decided to go through these steps to add an SSD to my iMac right after it was purchased. I thought Brian Tobey’s directions were slightly better, with more visuals pertinent to the way I was working on the iMac as I opened up the case.

    It is, in a word, non-trivial. You need the right tools, the right screwdrivers; in fact you even need suction cups! (thank you, Apple). However, there is another way, even for so-called All-in-One computer designs like the iMac. It’s a new product from Hewlett-Packard targeted at the desktop engineering and design crowd: an All-in-One workstation that is user-upgradable, and it’s all done without any tools at all. Let me repeat that last bit: it is a ‘tool-less’ design. What, you may ask, is a tool-less design? I hadn’t heard of it either until I read this article on iFixit. And after following the links to the NewEgg.com website to see what other items were tagged ‘tool-less’, I began to remember some hints and stabs at this idea in some Dell Optiplex desktops from years back: the ‘carrier’ brackets for the CD/DVD and HDD drive bays were green plastic rails that simply ‘pushed’ into the sides of the drive (no screws necessary).

    And when I consider that my work on the 27″ iMac actually went pretty well (it booted up the first time, no problems) after all I had done to it, I count myself very lucky. But it could have been better, and there’s no reason it cannot be better for EVERYONE. It also made me think of the XO laptop (from the One Laptop Per Child project), and I wondered how tool-less that design might be. How accessible are any of these designs? And it made me recall the Facebook story I recently commented on, about how Facebook is designing its own hard drive storage units to make them easier to maintain (no little screws to get lost, drop onto a fully powered motherboard, and short things out). So I have much more hope than when I first embarked on the do-it-yourself journey of upgrading my iMac. Tool-less design today, tool-less design tomorrow, and tool-less design forever.

  • AnandTech – Microsoft Provides Windows on ARM Details


    As reported by Andrew Cunningham for Anandtech: We’ve known that Microsoft has been planning an ARM-compatible version of Windows since well before we knew anything else about Windows 8, but the particulars have often been obscured both by unclear signals from Microsoft itself and subsequent coverage of those unclear signals by journalists. Steven Sinofsky has taken to the Building Windows blog today to clear up some of this ambiguity, and in doing so has drawn a clearer line between the version of Windows that will run on ARM, and the version of Windows that will run on x86 processors.

    via AnandTech – Microsoft Provides Windows on ARM Details.

    That’s right, ARM CPUs are in the news again, this time with details on the planned version of Windows 8 for the mobile CPU. And it is a separate version of the Windows OS, not unlike Windows CE, Windows Mobile, or Windows Embedded: they are all called Windows, but they are very different operating systems. The product will be called Windows on ARM (WOA), and it is only just now being tested internally at Microsoft, with substantial development still ahead and a release to developers yet to be announced.

    One upshot of this briefing from Sinofsky is that the mobile-centric Metro interface will not be the only desktop available on WOA devices. You will also be able to use the traditional-looking Windows desktop without incurring a big battery-life hit, which no doubt makes these devices a little more palatable to the wider range of users who might consider buying a phone, tablet, or Ultrabook running the new Windows 8 OS on an ARM CPU. Along the same lines, there will be a version of the Office apps that runs on WOA devices, including the big three: Word, Excel, and PowerPoint. These versions will be optimized for mobile devices with touch interfaces, which means you should buy the right version of Office for your device (if it doesn’t come pre-installed).

    Lastly, the optimization for and linking to specially built Windows on ARM devices means you won’t be able to install the OS on just ‘any’ hardware you like. Similar to Windows Mobile, you will need to purchase a device designed for the OS, most likely with a version pre-installed at the factory. This isn’t like a desktop OS built to run on many combinations of hardware with random devices installed; it’s going to be much more specific and refined than that. Microsoft wants to constrain and coordinate the look and feel of the OS across many mobile devices, so that an average person can expect it to work and look similar no matter who manufactures the device.

    One engineering choice that will help with this goal is addressing the variation among devices with so-called “class drivers” to support the chipsets and interfaces in a WOA device. This is a less device-specific way of supporting, say, a display panel or a keyboard without having to know every detail of the hardware. A WOA device will have to be designed and built to a spec provided by Microsoft, which will then provide a generic class driver for that keyboard, display panel, USB 3.0 port, and so on. So unlike Apple’s approach, it won’t necessarily be a limited set of hardware components, but they will have to meet the specs to be supported by the Windows on ARM OS. This no doubt will make it much easier for Microsoft to keep its OS up to date compared to, say, the Google Android universe, where the device manufacturers have to provide the OS updates (which in fact they rarely do, as they prefer people to upgrade their devices to get new OS releases).
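    Here is a rough conceptual sketch of the class-driver idea, with hypothetical class and driver names; real Windows driver binding is of course far more involved than this:

    ```python
    # Conceptual sketch (not actual Windows driver code) of class drivers:
    # the OS ships one generic driver per device *class* and binds on the
    # class a device reports, instead of needing a vendor-specific driver
    # for every keyboard, panel, or port.

    CLASS_DRIVERS = {
        "keyboard": "generic HID keyboard driver",
        "display": "generic display panel driver",
        "usb3": "generic xHCI (USB 3.0) host driver",
    }

    def bind_driver(device):
        """Bind a device to a driver by its reported class."""
        device_class = device["class"]
        if device_class not in CLASS_DRIVERS:
            # A WOA device built to Microsoft's spec should never hit this path.
            raise RuntimeError(f"no class driver for {device['vendor']} hardware")
        return CLASS_DRIVERS[device_class]

    print(bind_driver({"vendor": "Acme", "class": "keyboard"}))
    print(bind_driver({"vendor": "Contoso", "class": "display"}))
    ```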

  • Daring Fireball: Mountain Lion

    Wrestling with Mountain Lion

    And then the reveal: Mac OS X — sorry, OS X — is going on an iOS-esque one-major-update-per-year development schedule. This year’s update is scheduled for release in the summer, and is ready now for a developer preview release. Its name is Mountain Lion.

    via Daring Fireball: Mountain Lion.

    Mountain Lion is the next iteration of Mac OS X. And while there are some changes since the original Lion was released just this past Summer, they are more like further improvements than real changes. I say this in part due to the concentration on aligning OS X apps with their iOS counterparts, down to small things like using the same names:

    iCal versus Calendar

    iChat versus Messages

    Address Book versus Contacts

    Reminders versus Notes

    etc.

    Beneath that superficial level, more of the Carbonized libraries and apps are being factored out and given full Cocoa equivalents where possible. But one of the bigger changes, one that has kept slipping since the release of Mac OS X 10.7, is the use of ‘sandboxing’ as a security measure for apps. The sandbox must be implemented by developers to adhere to strict rules set forth by Apple. Apps won’t be allowed to do certain things anymore, like writing to an external filesystem (meaning saving or writing out to a USB drive) without special privileges being asked for. That seems trivial at first, but for a day-to-day user of a given app it might break things altogether. I’m thinking of iMovie as an example, where you can specify that new video clips be saved into an Event folder kept on an external hard drive. Will iMovie need to be rewritten in order to work on Mountain Lion? Will sandboxing hurt other Apple iApps as well?
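    For the curious, a toy model of the rule in question. The entitlement key names are Apple’s real ones (com.apple.security.app-sandbox and com.apple.security.files.user-selected.read-write), but the enforcement logic below is my simplification, not Apple’s kernel code:

    ```python
    # Toy model of App Sandbox write rules. The entitlement keys are real;
    # the enforcement is a simplification for illustration only.

    from pathlib import Path

    # Hypothetical container for a hypothetical bundle id:
    CONTAINER = Path.home() / "Library/Containers/com.example.videoapp/Data"

    def may_write(target, entitlements, user_selected):
        """Would this toy sandbox allow the app to write to `target`?"""
        if "com.apple.security.app-sandbox" not in entitlements:
            return True                      # unsandboxed app: unrestricted
        if CONTAINER in target.parents:
            return True                      # the app's own container: always OK
        if "com.apple.security.files.user-selected.read-write" in entitlements:
            # Allowed only for locations the user explicitly picked in a dialog.
            return any(p == target or p in target.parents for p in user_selected)
        return False                         # e.g. an external drive: denied

    events = Path("/Volumes/External/iMovie Events")
    print(may_write(events / "clip.mov",
                    {"com.apple.security.app-sandbox"}, set()))            # False
    print(may_write(events / "clip.mov",
                    {"com.apple.security.app-sandbox",
                     "com.apple.security.files.user-selected.read-write"},
                    {events}))                                             # True
    ```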

    Then there is the matter of ‘Gatekeeper’, another OS mechanism to limit trust based on who the developer is. Apple will issue security certificates to registered developers who post their software through the App Store, but independents who sell direct can also register for these certs, thus establishing a chain of trust from the developer to Apple to the OS X user. From that point you can choose to trust only App Store certified apps; App Store apps plus independent developers who are Apple-certified; or anything at all, including unknown, uncertified apps. Depending on your needs, the security level can be chosen according to which type of software you use. Some people are big on free software, which is the least likely to carry a certification but may still be more trustworthy than even the most ‘certified’ App Store software (I’m thinking of emacs as an example). So sandboxes and gatekeepers conspire to funnel developers into the desktop OS’s rules, and thus make it much harder for developers of malware to infect Apple OS X computers.
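    On a real Mac you can ask Gatekeeper for its verdict on an app with `spctl --assess --verbose /Applications/Example.app`. As a sketch, the three trust settings boil down to something like this (my own toy model of the policy, not Apple’s implementation):

    ```python
    # Toy model of Gatekeeper's three user-selectable trust levels.

    GATEKEEPER_LEVELS = {
        "app_store": {"app_store"},                   # Mac App Store only
        "identified": {"app_store", "developer_id"},  # store + Developer ID certs
        "anywhere": {"app_store", "developer_id", "unsigned"},
    }

    def may_launch(app_signature, user_setting):
        """True if an app with this signature type launches without a warning."""
        return app_signature in GATEKEEPER_LEVELS[user_setting]

    print(may_launch("developer_id", "app_store"))   # False: certified, not store
    print(may_launch("unsigned", "identified"))      # False: e.g. self-built emacs
    print(may_launch("unsigned", "anywhere"))        # True
    ```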

    These changes should be fully ready for consumption upon release of the OS in July. But as I mentioned, sandboxing has been rolled back no fewer than two times so far: the first roll-back occurred in November, and the most recent here in February. The next target date for sandboxing is in June, which should get all the Apple developers on board prior to the release of Mountain Lion the following month, in July. This reminds me a bit of the flexibility Apple had to show in the face of widespread criticism and active resistance to the Final Cut Pro X release last June. Apple had to scramble for a time to address concerns about bugs and stability under Mac OS X 10.7 (the previous release, Snow Leopard, seemed to work better for some who wrote on Apple support discussion forums). Apple quickly came up with an alternate route for dissatisfied customers who demanded satisfaction, giving copies of Final Cut Studio (with just the Final Cut Pro 7 app included) to people who called its support lines asking to substitute the older version of the software for a recent purchase of FCP X. Flexibility like this seems to be more frequent going forward, and it is great to see Apple’s willingness to adapt to an adverse situation of its own creation. We’ll see how this migration goes come July.

  • Buzzword: Augmented Reality

    Augmented Reality in the Classroom, Craig Knapp (photo credit: caswell_tom)

    What it means. “Augmented reality” sounds very “Star Trek,” but what is it, exactly? In short, AR is defined as “an artificial environment created through the combination of real-world and computer-generated data.”

    via Buzzword: Augmented Reality.

    A nice little survey from the people at Consumer Reports, with specific examples from the Consumer Electronics Show this past January. Whether it’s software or hardware, there are a lot of things that can be labeled and marketed as ‘Augmented Reality’. On this blog I’ve concentrated more on the apps running on smartphones with integrated cameras, accelerometers, and GPS. Those pieces are important building blocks for an integrated Augmented Reality-like experience. But as this article from CR shows, your experience may vary quite a bit.

    In my commentary on stories posted by others on the Internet, I have covered mostly the examples of AR apps on mobile phones. Specifically, I’ve concentrated on the toolkit provided by Layar for adding metadata to existing map points of interest. The idea of ‘marking up’ the existing landscape holds a great deal of promise for me, as the workload is shifted off the creator of the 3D world and onto the people traveling within it. The same could hold true for Massively Multiplayer Games, and some worlds do allow their members to do that kind of building and marking up of the environment itself. But Layar provides a set of data that you can call up by merely pointing the cell phone camera in a compass direction; the associated data is then brought up as an overlay.
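    To give a flavor of what that metadata looks like, here is a hypothetical point-of-interest response, loosely modeled on the JSON a Layar-style layer service hands back to the phone. The field names are illustrative, not Layar’s exact schema:

    ```python
    # Hypothetical sketch of a "what POIs are near me?" layer response.
    # Field names are illustrative only, loosely in the style of Layar's
    # developer API, not an exact reproduction of it.

    import json

    def get_pois(lat, lon):
        """Pretend layer endpoint: return POIs near the given coordinates."""
        response = {
            "layer": "campus_history",       # hypothetical layer name
            "hotspots": [
                {
                    "id": "poi-1",
                    "title": "Old Main, built 1887",
                    "lat": lat + 0.0004,     # a block north of the viewer
                    "lon": lon,
                    "distance": 44,          # metres, shown in the camera overlay
                }
            ],
            "errorCode": 0,
        }
        return json.dumps(response, indent=2)

    print(get_pois(44.97, -93.23))
    ```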

    It’s a sort of hunt for information, and sometimes it’s rewarding, when the metadata mark-up is well done. But as with many crowd-sourced efforts, some amount of lower-quality work, or worse, vandalism, occurs. That shouldn’t keep anyone from trying to enhance the hidden data that can be discovered through a Layar-enhanced real world. I’m hoping mobile phone-based AR applications grow and find a niche, if not a killer app. It’s still early days and mobile phone AR is not being adopted very quickly, but I think there’s still a lot of untapped potential. I don’t think we have discovered all the possible applications of mobile phone AR.

  • ARM Pitches Tri-gate Transistors for 20nm and Beyond


    . . . 20 nm may represent an inflection point in which it will be necessary to transition from a metal-oxide semiconductor field-effect transistor (MOSFET) to Fin-Shaped Field Effect Transistors (FinFET) or 3D transistors, which Intel refers to as tri-gate designs that are set to debut with the company’s 22 nm Ivy Bridge product generation.

    via ARM Pitches Tri-gate Transistors for 20nm and Beyond.

    Three-dimensional transistors are in the news again. Previously, Intel announced it was adopting the new design for its next-generation, next-smaller design rule in the Ivy Bridge generation of CPUs. Now ARM is doing work to integrate similar technology into its CPU cores as well. No doubt the need to lower the thermal design power (TDP) while maintaining clock speed is driving this move to refine and narrow the design rules for the ARM architecture. Knowing Intel is still the top research and development outfit for silicon semiconductors would give pause to anyone directly competing with it, but ARM is king of the low-power semiconductor, and keeping pace with Intel’s design rules is an absolute necessity.

    I don’t know how quickly ARM is going to be able to get a licensee to jump onboard and adopt the new design. Hopefully a large operation like Samsung can take this on and get the chip into its design, development, and production lines at a chip fabrication facility as soon as possible. Likewise, contract manufacturers like Taiwan Semiconductor Manufacturing Company (TSMC) should also try to get this design into their facilities quickly. That way the cell phone and tablet markets can benefit too, as they use a lot of ARM-licensed CPU cores and similar intellectual property in their shipping products. And my interest is invested not so much in the competition between Intel and ARM for low-power computing as in the overall performance of any single ARM design once it’s been in production for a while and optimized, the way Apple designs its custom CPUs using ARM-licensed cores. The single most outstanding achievement of Apple in the design and production of the iPad is the battery charge duration of 10 hours, an achievement that to date has not been beaten, even by other manufacturers and products that also license ARM intellectual property. So if the ARM design is good and can be validated and prototyped with useful yields quickly, Apple will no doubt be the first to benefit, and by way of Apple so will the consumer (hopefully).

    Image via Wikipedia: schematic view (L) and SEM view (R) of Intel tri-gate transistors
  • More PCI-express SSD cards coming to OS X | MacFixIt – CNET Reviews

    The card will use the Marvell 88SE9455 RAID controller that will interface with the SandForce 2200-based daughter cards that can be added to the main controller on demand. This will allow for user-configurable drive sizes from between 60GB and 2TB in size, allowing you to expand your storage as your need for it increases.

    via More PCI-express SSD cards coming to OS X | MacFixIt – CNET Reviews.


    I’m a big fan of Other World Computing (OWC) and have always marveled at their ability to create new products under their own brand. In the article they talk about a new Mac-compatible PCIe SSD. It sounds like an uncanny doppelganger of the Angelbird board announced about two years ago, which started shipping in Fall 2011. The add-on sockets especially remind me of the upgradable Angelbird board. There are not many PCIe SSD cards with sockets for Flash memory modules, and Other World Computing’s is only the second one I have seen since I began commenting on these devices as they hit the consumer market. Putting sockets on the board makes it easier to enter the market at a lower price point for users to whom price matters most. At the high end, however, capacity is king for some purchasers of PCIe SSD drives. So the oddball upgradeable PCIe SSD fills a niche, that’s for sure.

    Performance projections for this card are really good, and typical of most competing PCIe SSD cards, so depending on your needs you might find this perfect. Price, however, is always harder to pin down. Angelbird sold a bare PCIe card with no SSDs attached for around $249; it came with 32GB onboard for that price. What was really nice was that the card used SATA sockets set far enough apart to fit full-sized SSDs without crowding each other. This brought the consumer market the possibility of slowly upgrading to higher-speed or larger-capacity drives over time.

    Angelbird “Wings”: a Mac-compatible PCIe SSD

    But what’s cooler still: Angelbird’s card could run under ANY OS, even Mac OS, as it was engineered to be a free-standing computer with a large Flash memory attached. That allowed it to pre-boot into an embedded OS before handing control over to the host OS, whatever flavor it might be. I don’t know if the OWC card works similarly, but it does NOT use SATA sockets or provide enough room to plug in SSD drives. The plug-in modules for this device are mSATA-style sockets like those used in tablets and netbook-style computers. So the modules will most likely need to be purchased direct from OWC to perform capacity upgrades over the life of the PCIe card itself. Prices have not yet been set, according to the article.

  • AnandTech – AMD Radeon HD 7970 Review: 28nm And Graphics Core Next, Together As One


    Quick Sync made real-time H.264 encoding practical on even low-power devices, and made GPU encoding redundant at the time. AMD of course isn’t one to sit idle, and they have been hard at work at their own implementation of that technology: the Video Codec Engine (VCE).

    via AnandTech – AMD Radeon HD 7970 Review: 28nm And Graphics Core Next, Together As One.

    Intel’s Quick Sync helped speed up the realtime encoding of H.264 video. AMD is striking back with Hybrid Mode VCE operations that will speed things up EVEN MORE! The key to having this hit the market and get widely adopted, of course, is compatibility of the software with a wide range of video cards from AMD. The original CUDA software environment from nVidia took a while to disperse into the mainstream, as it supported a limited number of graphics cards when it rolled out. Now it’s part of the infrastructure, provided more or less gratis whenever you buy ANY nVidia graphics card. AMD has to push this kind of semi-forced adoption of its technology as fast as possible to deliver the benefit quickly. At the same time, the user interface to this VCE software had better be a great design and easy to use. Any configuration-file dependencies and tweaking through preference files should be eliminated, to the point where you merely move a slider up and down a scale (Slower -> Faster). And that should be it.

    And if need be, AMD should commission an encoder app, or a plug-in to an open source project like HandBrake, that uses the VCE capability upon detecting the graphics chip in the computer. Make it ‘just happen’, without the tempting early-adopter approach of making a tool available and forcing people to ‘build’ their own version of an open source encoder to use the hardware properly. Hands-off approaches that favor early adopters will consign this technology to the margins for years if AMD doesn’t take a more activist role. Quick Sync on Intel hasn’t been widely touted either, so maybe it’s a moot point to urge anyone to treat their technology as an insanely great offering. But I think there’s definitely brand loyalty that could be brought into play if the performance gains to be had with a discrete graphics card far outpace Intel’s integrated Quick Sync solution. If you can achieve a 10x boost, you should be pushing that to all potential computer purchasers from this announcement forward.
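    The ‘just happen’ behavior I’m arguing for amounts to something like this sketch; the detection dictionary and encoder names are hypothetical, not a real HandBrake or VCE API:

    ```python
    # Sketch of silent hardware-encoder selection with a software fallback.
    # The gpu dict and the encoder names are hypothetical placeholders.

    def pick_encoder(gpu):
        """gpu: dict describing the detected graphics hardware, or None."""
        if gpu and gpu.get("vendor") == "AMD" and gpu.get("vce"):
            return "h264-vce"     # hypothetical AMD hardware path
        if gpu and gpu.get("vendor") == "Intel" and gpu.get("quicksync"):
            return "h264-qsv"     # Quick Sync hardware path, where present
        return "x264"             # software fallback: always works, just slower

    print(pick_encoder({"vendor": "AMD", "vce": True}))   # h264-vce
    print(pick_encoder(None))                             # x264
    ```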

  • The PC is dead. Why no angry nerds? :: The Future of the Internet — And How to Stop It

    Famously proprietary Microsoft never dared to extract a tax on every piece of software written by others for Windows—perhaps because, in the absence of consistent Internet access in the 1990s through which to manage purchases and licenses, there’d be no realistic way to make it happen.

    via The PC is dead. Why no angry nerds? :: The Future of the Internet — And How to Stop It.

    While it’s true that Microsoft didn’t tax software developers who sold products running on the Windows OS, a kind of tax levy did exist for hardware manufacturers creating desktop PCs with Intel chips inside. But message received; I get the bigger point: cul-de-sacs don’t make good computers. They do, however, make good appliances. And as the author Jonathan Zittrain points out, we are becoming less aware of the distinction between a computer and an appliance, and have lowered our expectations accordingly.

    In fact this points to a bigger trend than computers becoming silos of information and entertainment consumption; no, not by a long shot. This trend was preceded by the wild popularity of MySpace, followed quickly by Facebook and now Twitter: all ‘platforms’, as described by their owners, with some amount of API publishing and hooks to let in third-party developers (like game maker Zynga). But so what if I can play Scrabble or Farmville with my ‘friends’ on a social networking ‘platform’? Am I still getting access to the Internet? Probably not, as you are most likely reading whatever filters into or out of the central, all-encompassing data store of the social networking platform.

    Like the old world maps in the days before Columbus, there be dragons, and the world ends HERE, even though platform owners might say otherwise. It is an intranet pure and simple, a gated community that forces unique identities on all participants. Worse yet, it is a Big Brother-like panopticon where each step and every little movement is monitored and tallied. You take quizzes, you like, you share; all these things are collection points, checkpoints to gather more data about you. And that is the TAX levied on anyone who voluntarily participates in a social networking platform.

    So long live the Internet, even though its frontier, wildcatting days are nearly over. There will be books and movies like How the Cyberspace Was Won, and the pioneers will all be noted and revered. We’ll remember when we could go anywhere we wanted and do lots of things we never dreamed of. But those days are slipping away as new laws get passed under very suspicious pretenses, all in the name of Commerce. As for me, I much prefer Freedom over Commerce, and you can log that in your stupid little database.

    Cover of "The Future of the Internet--And...
    Cover via Amazon
  • AnandTech – Intel and Micron IMFT Announce Worlds First 128Gb 20nm MLC NAND


    The big question is endurance, however we won’t see a reduction in write cycles this time around. IMFT’s 20nm client-grade compute NAND used in consumer SSDs is designed for 3K – 5K write cycles, identical to its 25nm process.

    via AnandTech – Intel and Micron IMFT Announce Worlds First 128Gb 20nm MLC NAND.

    If true, this will help considerably in driving down the cost of Flash memory chips while maintaining the current level of wear and the performance drop seen over the lifetime of a chip. Stories I have read previously indicated that Flash memory might not continue to evolve using the current generation of silicon chip manufacturing technology: performance drops as memory cells wear out, and memory cells were wearing out faster and faster as the wires and transistors got smaller and narrower on the Flash memory chip.

    The reason for this is that memory cells have to be erased in order to free them up, and writing and erasing take a toll on a memory cell each time one of these operations is performed. Single-level cells (SLC) are the most robust and can go through many thousands, even millions, of write and erase cycles before they wear out. However, the cost per megabyte of single-level cells makes them a premium-priced, Enterprise-grade product for corporate customers, generally speaking. Two-bit, multi-level cells (MLC) are much more cost effective, but the structure of the cells makes them less durable than single-level cells. And as the wires connecting them get thinner and narrower, the number of write and erase cycles they can endure without failing drops significantly. Enterprise customers in the past would not purchase products specifically because of this limitation of the multi-level cell.

    As companies like Intel and Samsung tried to make Flash memory chips smaller and less expensive to manufacture, the durability of the chips kept dropping. The question everyone asked: is there a point of diminishing returns where smaller design rules and thinner wires make the chips too fragile? The solution for most manufacturers is to add spare memory cells, ‘over-provisioning’, so that when a cell fails a spare can be unlocked and the whole chip keeps working. This not-so-secret over-provisioning trick has been the way most Solid State Disks (SSDs) have handled the write/erase problem of multi-level cells. But even then, the question is how much do you over-provision? Another technique, called wear-leveling, has the memory controller distribute writes and erases over ALL the cells available to it; a statistical scheme makes sure every cell suffers equally and accumulates the same wear. It’s a difficult balancing act for the manufacturers of Flash memory, and for the storage product makers who consume those chips, to build products that perform adequately, do not fail unexpectedly, and do not cost too much for laptop and desktop manufacturers to offer to their customers.
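    A minimal sketch of those two tricks, over-provisioning and wear-leveling, using the 3K-cycle MLC endurance figure from the quote. This is a conceptual model only; real SSD controllers do this in firmware with mapping tables and garbage collection:

    ```python
    # Toy SSD: spare blocks beyond the advertised capacity (over-provisioning)
    # plus steering each write/erase to the least-worn block (wear-leveling).

    ADVERTISED_BLOCKS = 100
    SPARE_BLOCKS = 7          # ~7% over-provisioning, a plausible ratio
    ENDURANCE = 3000          # P/E cycles per block, the MLC figure in the quote

    erase_counts = [0] * (ADVERTISED_BLOCKS + SPARE_BLOCKS)

    def write_block():
        """Pick the least-worn surviving block for the next write/erase."""
        live = [i for i, count in enumerate(erase_counts) if count < ENDURANCE]
        if len(live) < ADVERTISED_BLOCKS:
            raise RuntimeError("out of spares: drive drops to read-only")
        victim = min(live, key=lambda i: erase_counts[i])
        erase_counts[victim] += 1
        return victim

    for _ in range(200_000):
        write_block()

    # Wear is spread almost perfectly evenly across all 107 blocks:
    print(max(erase_counts) - min(erase_counts))   # 0 or 1
    ```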

    If Intel and Micron can successfully address the fragility of Flash chips as the wiring and design rules get smaller and smaller, we will start to see larger memories included in more mobile devices. I predict you will see iPhones and Samsung Android smartphones with upwards of 128GBytes of Flash storage. Similarly, tablets and ultra-mobile laptops will also start to have larger and larger SSDs available. Costs should stay about where they are now in comparison to current shipping products; we’ll just have more products to choose from, say 1TByte SSDs instead of the more typical high-end 512GByte SSDs we see today. Prices might also come down, but that’s bound to take a little longer, until all the other Flash memory manufacturers catch up.

  • Samsung: 2 GHz Cortex-A15 Exynos 5250 Chip

    Samsung also previewed a 2 GHz dual-core ARM Cortex-A15 application processor, the Exynos 5250, also designed on its 32-nm process. The company said that the processor is twice as fast as a 1.5 GHz A9 design without having to jump to a quad-core layout.

    via Samsung Reveals 2 GHz Cortex-A15 Exynos 5250 Chip.


    More news on the release dates and details of Samsung’s version of the ARM Cortex A15 CPU for mobile devices. Samsung is ramping up performance by shrinking the design rule down to 32nm and, in this A15, dropping two of the four possible cores. That choice makes room for the integrated graphics processor. It’s a deluxe system-on-a-chip that will no doubt give any A9-equipped tablet a run for its money. Indications from Samsung at this point are that this A15 will be a tablet-only CPU, not adapted for smartphone use.

    Early in the Fall there were some indications that the memory addressing of the Cortex A15 would be enhanced to allow larger memories (greater than 4GBytes) to be attached, and indeed the Cortex A15 brings the Large Physical Address Extension (LPAE), which widens physical addressing to 40 bits. However, the instructions are still the same 32-bit instruction set longtime users of the ARM architecture are familiar with, and as always are backward compatible with previous-generation software. It would appear that the biggest advantages of moving to the Cortex A15 are the potential for higher clock rates, decent power management, and room to grow on the die for embedded graphics.
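    The arithmetic behind that addressing claim is quick to check:

    ```python
    # 32-bit physical addresses cap memory at 4 GB;
    # LPAE's 40-bit physical addresses reach 1 TB.
    print(2**32 // 2**30, "GB")    # 4 GB
    print(2**40 // 2**30, "GB")    # 1024 GB, i.e. 1 TB
    ```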

    Apple, in its designs using the Cortex processors, has stayed one generation behind the rest of the manufacturers and used all possible knowledge and brute force to eke out a little more power savings; witness the iPad’s battery life, which still tops most other devices on the market. By creating a fully customized Cortex A8, Apple absolutely set the bar for power management on the die, and on the motherboard as well. If Samsung decides to go the route of pure power and clock, but sacrifices two cores to get the power level down, I just hope they can justify that effort with equally amazing advancements in the software that runs on this new chip. Whether it be a game or, better yet, a snazzy user interface, they need to differentiate themselves and show off their new CPU.