Category: technology

General technology, not anything in particular

  • Flash DOOMED to drive itself off a cliff – boffins • The Register

    A flash memory cell (image via Wikipedia)

    Microsoft and University of California San Diego researchers have said flash has a bleak future because smaller and more densely packed circuits on the chips’ silicon will make it too slow and unreliable. Enterprise flash cost/bit will stagnate and the cutting edge that is flash will become a blunted blade.

    via Flash DOOMED to drive itself off a cliff – boffins • The Register. As reported by Chris Mellor for The Register (http://www.theregister.co.uk/)

    More information regarding semiconductor manufacturers’ rumors and speculation that a wall is being hit in the shrinking of Flash memory chips (see this link to the previous Carpetbomber article from Dec. 15). This report has a more definitive ring to it, as actual data has been collected and projections made from models of that data. The trend, according to these researchers, is lower performance due to increasingly bad error rates and signaling on the chip itself. Higher density chips = lower performance per memory cell.

    To hedge against this dark future for NAND flash memory, companies are attempting to develop novel and in some cases exotic technology. IBM has “racetrack memory”, Hewlett-Packard and Hynix have the memristor, and the list goes on. Nobody in the industry has any idea what comes next, so bets are being placed all over the map. My advice to anyone reading this article is: do not choose a winner until it has won. I say this as someone who has watched a number of technologies fight for supremacy in the market: Sony Betamax versus JVC VHS, HD-DVD versus Blu-ray, LCD versus plasma display panels, etc. I will admit these battles are sometimes waged over a long period, which makes it harder to tell who has won, though the more recent ones seem to be settled over ever shorter spans of the products’ lives. And who is to say Blu-ray has really been adopted widely enough to be the be-all and end-all, when DVD and CD discs are both still widely used as recordable media? Just know that to go any further in improving the cost vs. performance ratio, NAND will need to be forsaken to get to the next technological benchmark in high-speed, random-access, long-term, durable storage media.

    Things to look out for as the NAND bandwagon slows down are triple-level memory cells, or worse yet quadruple-level cells. These are not going to be the big saviors the average consumer hopes they will be. Flash memory that packs more bits into each cell starts out with higher error rates, and they climb even higher over time. The number of cells set aside as ‘over-provisioned’ spares will be so high as to negate the cost benefit of choosing the higher-density memory cells. Also being touted as a way to stave off the end of the road are error-correcting circuits and digital signal processors onboard the chips and controllers. As the age of the chip begins to affect its reliability, more statistical quality-control techniques are applied to offset the loss of signal quality in the chip. This is a technique used today by at least one manufacturer (Intel), but how widely and how successfully it can be adopted is another question altogether. It would seem each memory manufacturer has its own culture and, as a result, its own technique for fixing the problem. Whoever has the best marketing and sales campaigns will, as past history has shown, be the winner.
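    To make the over-provisioning point concrete, here is a rough back-of-the-envelope sketch in Python. The capacities, prices and over-provisioning percentages are made-up illustrative numbers, not vendor figures; the point is only that reserving a big slice of spare cells eats into the cost-per-usable-gigabyte advantage of the denser cells.

        # Rough sketch: how over-provisioning erodes the cost advantage of denser cells.
        # All numbers below are illustrative assumptions, not vendor data.

        def cost_per_usable_gb(raw_gb, price_usd, overprovision_fraction):
            """Price per gigabyte the user can actually write to."""
            usable_gb = raw_gb * (1.0 - overprovision_fraction)
            return price_usd / usable_gb

        # Hypothetical drives: the denser (TLC) part is cheaper per raw GB but needs
        # a much larger spare area to mask its higher error rates.
        mlc = cost_per_usable_gb(raw_gb=256, price_usd=200, overprovision_fraction=0.07)
        tlc = cost_per_usable_gb(raw_gb=256, price_usd=160, overprovision_fraction=0.28)

        print(f"MLC: ${mlc:.2f} per usable GB")   # ~ $0.84
        print(f"TLC: ${tlc:.2f} per usable GB")   # ~ $0.87 -- the density 'win' evaporates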

  • I don’t know how accurate or specific this criticism of Apple’s press conference from Wednesday is, but many people are commenting on it. I contributed a comment as well.

  • Facebook Shakes Hardware World With Own Storage Gear | Wired Enterprise | Wired.com


    Now, Facebook has provided a new option for these big name Wall Street outfits. But Krey also says that even among traditional companies who can probably benefit from this new breed of hardware, the project isn’t always met with open arms. “These guys have done things the same way for a long time,” he tells Wired.

    via Facebook Shakes Hardware World With Own Storage Gear | Wired Enterprise | Wired.com.

    Interesting article further telling the story of Facebook’s Open Compute Project. This part of the story concentrates on the mass storage needs of the social media company, and on why Wall Street data center designers/builders aren’t as enthusiastic about Open Compute as one might think. The old-school Wall Streeters, as Peter Krey says, have been doing things the same way for a very long time. But that gets to the heart of the issue and of what the members of the Open Compute Project hope to accomplish. Rackspace AND Goldman Sachs are members, both contributing and getting pointers from one another. Rackspace is even beginning to virtualize equipment down to the functional level, replacing motherboards with a virtual I/O service. That would allow components to be ganged together based on the frequency of their replacement and maintenance. According to the article, CPUs could be in one rack cabinet, DRAM in another, and disks in yet another (which is already the case now with storage area networks).

    The newest item to come into the Open Compute circus tent is storage. Up until now that’s been left to Value Added Resellers (VARs) to provide, so different brand loyalties and technologies still hold sway in many data center shops, including Open Compute members. Now Facebook is redesigning the disk storage rack to create a totally tool-less design: no screws, no drive carriers, just a drive and a latch, and that is it. I looked further into this tool-less phenomenon and found an interesting video at HP:

    HP Z1 all-in-one CAD workstation

    Along with this professional video touting how easy it is to upgrade this all-in-one design:

    The Making of the HP Z1

    Having recently purchased a similarly sized 27″ iMac and upgraded it by adding a single SSD drive into the case, I can tell you this HP Z1 demonstrates in every way possible the miracle of tool-less designs. I was bowled over, and it brought back memories of the different Dell tower designs I’ve seen over the years (some with more tool-less awareness than others). If a tool-less future is inevitable, I say bring it on. And if Facebook ushers in the era of tool-less storage racks as a central design tenet of Open Compute, so much the better.

  • AnandTech – Microsoft Provides Windows on ARM Details


    As reported by Andrew Cunningham for AnandTech: We’ve known that Microsoft has been planning an ARM-compatible version of Windows since well before we knew anything else about Windows 8, but the particulars have often been obscured both by unclear signals from Microsoft itself and subsequent coverage of those unclear signals by journalists. Steven Sinofsky has taken to the Building Windows blog today to clear up some of this ambiguity, and in doing so has drawn a clearer line between the version of Windows that will run on ARM, and the version of Windows that will run on x86 processors.

    via AnandTech – Microsoft Provides Windows on ARM Details.

    That’s right, ARM CPUs are in the news again, this time with details of the planned version of Windows 8 for the mobile CPU. And it is a separate version of the Windows OS, not unlike Windows CE or Windows Mobile or Windows Embedded: they are all called Windows, but are very different operating systems. The product will be called Windows on ARM (WOA) and is only just now being tested internally at Microsoft, with substantial development still ahead and a release to developers yet to be announced.

    One upshot of this briefing from Sinofsky was that the mobile-centric Metro interface will not be the only desktop available on WOA devices. You will also be able to use the traditional-looking Windows desktop without incurring a big hit to battery life. That no doubt makes WOA a little more palatable to the wider range of users who might consider buying a phone, tablet or Ultrabook running the new Windows 8 OS on an ARM CPU. Along the same lines, there will be a version of the Office apps that runs on WOA devices, including the big three: Word, Excel and PowerPoint. These versions will be optimized for mobile devices with touch interfaces, which means you should buy the right version of Office for your device (if it doesn’t come pre-installed).

    Lastly, the optimization for and linking to specially built Windows on ARM devices means you won’t be able to install the OS on just ‘any’ hardware you like. Similar to Windows Mobile, you will need to purchase a device designed for the OS, most likely with a version pre-installed at the factory. This isn’t like a desktop OS built to run on many combinations of hardware with random devices installed; it’s going to be much more specific and refined than that. Microsoft wants to constrain and coordinate the look and feel of the OS across many mobile devices, so that an average person can expect it to work and look similar no matter who manufactures the device.

    One engineering choice that will assist with this goal is an attempt to address the variation in devices by using so-called “class drivers” to support the chipsets and interfaces in a WOA device. This is a less device-specific way of supporting, say, a display panel or keyboard without having to know every detail of the hardware. A WOA device will have to be designed and built to a spec provided by Microsoft, and Microsoft will then provide a generic class driver for that keyboard, display panel, USB 3.0 port, etc. So unlike Apple’s approach it won’t necessarily be a limited set of hardware components, but they will have to meet the specs to be supported by the Windows on ARM OS. This no doubt will make it much easier for Microsoft to keep its OS up to date, compared to, say, the Google Android universe, where the device manufacturers have to provide the OS updates (which in fact they rarely do, since they prefer people to upgrade their devices to get new OS releases).
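    To picture the ‘class driver’ idea, here is a minimal conceptual sketch in Python. This is not Microsoft’s driver model, just an illustration of the principle: the OS is written against a generic device class, and any WOA device built to the spec plugs in underneath without the OS needing vendor-specific code.

        # Conceptual illustration only: a generic "class driver" contract that any
        # spec-compliant device satisfies, so the OS never needs vendor-specific code.
        from abc import ABC, abstractmethod

        class KeyboardClassDriver(ABC):
            """Generic contract the OS talks to for any keyboard built to the spec."""
            @abstractmethod
            def read_keystroke(self) -> str: ...

        class VendorAKeyboard(KeyboardClassDriver):
            # Hypothetical vendor hardware; it only has to meet the class contract.
            def read_keystroke(self) -> str:
                return "a"   # stand-in for reading this vendor's hardware buffer

        class VendorBKeyboard(KeyboardClassDriver):
            def read_keystroke(self) -> str:
                return "b"

        def os_input_loop(keyboard: KeyboardClassDriver) -> str:
            # The OS code is written once, against the class, not against either vendor.
            return keyboard.read_keystroke()

        print(os_input_loop(VendorAKeyboard()))   # a
        print(os_input_loop(VendorBKeyboard()))   # b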

  • Daring Fireball: Mountain Lion

    Wrestling with Mountain Lion

    And then the reveal: Mac OS X — sorry, OS X — is going on an iOS-esque one-major-update-per-year development schedule. This year’s update is scheduled for release in the summer, and is ready now for a developer preview release. Its name is Mountain Lion.

    via Daring Fireball: Mountain Lion.

    Mountain Lion is the next iteration of Mac OS X. And while there are some changes since the original Lion was released just this past summer, they are more like refinements than real changes. I say this in part due to the concentration on aligning the OS X apps with their iOS counterparts, down to small things like using the same names:

    iCal versus Calendar

    iChat versus Messages

    Address Book versus Contacts

    Reminders and Notes (new apps brought over from iOS)

    etc.

    Beneath that superficial level, more of the Carbon-based libraries and apps are being factored out and given full Cocoa library and app equivalents where possible. But one of the bigger changes, one that’s been slipping since the release of Mac OS X 10.7, is the use of ‘sandboxing’ as a security measure for apps. The sandbox rules would be implemented by developers to adhere to strict requirements set forth by Apple. Apps wouldn’t be allowed to do certain things anymore, like writing to an external filesystem (meaning saving or writing out to a USB drive) without asking for special privileges. Seems trivial at first, but for the day-to-day user of a given app it might break things altogether. I’m thinking of iMovie as an example, where you can specify that new video clips be saved into an Event folder kept on an external hard drive. Will iMovie need to be re-written in order to work on Mountain Lion? Will sandboxing hurt other Apple iApps as well?
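    To picture what the sandbox rule would mean for an app like iMovie, here is a toy sketch in Python of the kind of policy check involved. It is not Apple’s actual sandbox API, and the entitlement name is made up; it just shows the general shape of the idea: writes outside the app’s own container are denied unless the app has declared, and been granted, a matching privilege.

        # Toy model of a sandbox policy check -- not Apple's API, just the idea.

        APP_CONTAINER = "/Users/me/Library/Containers/com.example.imovie/"

        def may_write(path: str, granted_entitlements: set) -> bool:
            """Allow writes inside the app container; outside it, require an entitlement."""
            if path.startswith(APP_CONTAINER):
                return True
            # Hypothetical entitlement name standing in for "user granted access to an
            # external volume" (e.g. an Event folder on a USB drive).
            return "external-volume-readwrite" in granted_entitlements

        print(may_write(APP_CONTAINER + "project.imovieproj", set()))   # True
        print(may_write("/Volumes/USB-Drive/Events/clip.mov", set()))   # False: blocked
        print(may_write("/Volumes/USB-Drive/Events/clip.mov",
                        {"external-volume-readwrite"}))                 # True: entitled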

    Then there is the matter of ‘Gatekeeper’, another OS mechanism that limits trust based on who the developer is. Apple will issue security certificates to registered developers who post their software through the App Store, but independents who sell direct can also register for these certs, thus establishing a chain of trust from the developer to Apple to the OS X user. From that point you can choose to trust just App Store apps, App Store apps plus Apple-certified independent developers, or any app including unknown, uncertified ones. Depending on your needs, the security level can be chosen according to which type of software you use. Some people are big on free software, which is the least likely to carry a certificate but may still be more trustworthy than even the most ‘certified’ App Store software (I’m thinking of Emacs as an example). So sandboxes and gatekeepers all conspire to funnel developers into Apple’s rules for the desktop OS and thus make it much harder for developers of malware to infect OS X computers.
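    The Gatekeeper chain of trust boils down to a three-position policy switch. A minimal sketch, again in Python and with made-up names rather than Apple’s real APIs:

        # Conceptual sketch of Gatekeeper's three trust settings -- not Apple's code.

        def may_launch(app_signature, policy):
            """app_signature: 'app_store', 'developer_id', or None for unsigned apps."""
            if policy == "anywhere":
                return True
            if policy == "identified_developers":
                return app_signature in ("app_store", "developer_id")
            return app_signature == "app_store"   # strictest setting: App Store only

        # An unsigned build of Emacs under the middle setting:
        print(may_launch(None, "identified_developers"))            # False: blocked by default
        # The same app if its maintainer registered for an Apple developer certificate:
        print(may_launch("developer_id", "identified_developers"))  # True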

    These changes should be fully ready for consumption upon release of the OS in July. But as I mentioned, the sandboxing deadline has been rolled back no fewer than two times so far: the first roll-back occurred in November, and the most recent one here in February. The next target date for sandboxing is in June, which should get all the Apple developers on board prior to the release of Mountain Lion the following month, in July. This reminds me a bit of the flexibility Apple had to show in the face of widespread criticism and active resistance to the Final Cut Pro X release last June. Apple had to scramble for a time to address concerns about bugs and stability under Mac OS X 10.7 (the previous release, Snow Leopard, seemed to work better for some who wrote on Apple’s support discussion forums). Apple quickly came up with an alternate route for dissatisfied customers who demanded satisfaction, giving copies of the older Final Cut Studio (with the Final Cut Pro 7 app included) to people who called up their support lines asking to substitute the older version of the software for a recent purchase of FCP X. Flexibility like this seems to be more frequent going forward, and it is great to see Apple’s willingness to adapt to an adverse situation of its own creation. We’ll see how this migration goes come July.

    Mac OS X logo (image via Wikipedia)
  • Maxeler FPGA Project

    Great posting by Lucas Szyrmer @ softwaretrading.co.uk; it’s a nice summary of the story from last month about JP Morgan Chase’s use of FPGAs to speed up some of their risk analysis. And it goes into greater detail concerning the mechanics of translating what one has to do in software across the divide into something that can be turned into VHDL/Verilog and written onto the FPGA itself. It is, in a word, a ‘non-trivial’ task, and can take quite a long time to get working.

    Reblogged from Software Trading:


    Lately, I’ve been exploring a little known corner of high performance computing (HPC) known as FPGAs. Turns out, it’s time to get electrical on yowass (Pulp Fiction reference intentional). You can program these chips in the field, thus speeding up processing speeds dramatically, relative to generic CPUs. It’s possible to customize functionality to very specific needs.

    Why this works

    The main benefit of FPGAs comes from reorganizing calculations. FPGAs work on a massively parallel basis. You get rid of bottlenecks in typical CPU design. While these bottlenecks are good for general purpose applications, like watching Pulp Fiction, they significantly slow down the amount of calculations that you do per second. In addition to being massively multi-parallel, FPGAs also are faster, according to FPGAdeveloper, because:

    • you aren’t competing with your operating system or applications like anti-virus for CPU cycle time
    • you run at a lower level than the OS, so you don’t have…

    View original post 427 more words

  • Buzzword: Augmented Reality

    Augmented Reality in the Classroom, Craig Knapp (Photo credit: caswell_tom)

    What it means. “Augmented reality” sounds very “Star Trek,” but what is it, exactly? In short, AR is defined as “an artificial environment created through the combination of real-world and computer-generated data.”

    via Buzzword: Augmented Reality.

    Nice little survey from the people at Consumer Reports, with specific examples from the Consumer Electronics Show this past January. Whether it’s software or hardware, there are a lot of things that can be labeled and marketed as ‘Augmented Reality’. On this blog I’ve concentrated more on the apps running on smartphones with integrated cameras, accelerometers and GPS. Those pieces are important building blocks for an integrated Augmented Reality-like experience. But as this article from CR shows, your experience may vary quite a bit.

    In my commentary on stories posted by others on the Internet, I have covered mostly the examples of AR apps on mobile phones. Specifically, I’ve concentrated on the toolkit provided by Layar for adding metadata to existing map points of interest. The idea of ‘marking up’ the existing landscape holds a great deal of promise for me, as the workload is shifted off the creator of the 3D world and onto the people traveling within it. The same could hold true for massively multiplayer games, and some worlds do allow their members to do that kind of building and marking up of the environment itself. But Layar provides a set of data you can call up merely by pointing the cell phone camera in a compass direction to bring up the associated information.
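    A rough sketch of what a mobile AR layer is doing under the hood, as a generic illustration rather than Layar’s actual API: take the phone’s GPS fix and compass heading, and surface only the points of interest that fall inside the camera’s field of view.

        # Generic illustration of an AR point-of-interest lookup -- not Layar's API.
        import math

        def bearing_deg(lat1, lon1, lat2, lon2):
            """Approximate compass bearing from the phone to a point of interest."""
            d_lon = math.radians(lon2 - lon1)
            lat1, lat2 = math.radians(lat1), math.radians(lat2)
            y = math.sin(d_lon) * math.cos(lat2)
            x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
            return (math.degrees(math.atan2(y, x)) + 360) % 360

        def visible_pois(phone_lat, phone_lon, heading, pois, fov_deg=60):
            """Return the POIs whose bearing falls within the camera's field of view."""
            hits = []
            for name, lat, lon in pois:
                diff = abs((bearing_deg(phone_lat, phone_lon, lat, lon) - heading + 180) % 360 - 180)
                if diff <= fov_deg / 2:
                    hits.append(name)
            return hits

        # Hypothetical points of interest around a phone facing due north (heading 0).
        pois = [("Coffee shop", 40.001, -75.000), ("Museum", 40.000, -74.990)]
        print(visible_pois(40.000, -75.000, heading=0, pois=pois))   # ['Coffee shop']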

    It’s a sort of hunt for information, and it works well when the metadata mark-up is well done. But as with many crowd-sourced efforts, some amount of lower-quality work, or worse, vandalism, occurs. That shouldn’t keep anyone from trying to enrich the hidden data that can be discovered through a Layar-enhanced real world, though. I’m hoping mobile phone based AR applications grow and find a niche, if not a killer app. It’s still early days and mobile phone AR is not being adopted very quickly, but I think there are still a lot of untapped resources there. I don’t think we have discovered all the possible applications of mobile phone AR.

  • SeaMicro adds Xeons to Atom smasher microservers • The Register

    There’s some interesting future possibilities for the SeaMicro machines. First, SeaMicro could extend that torus interconnect to span multiple chassis. Second, it could put a “Patsburg” C600 chipset on an auxiliary card and actually make fatter SMP nodes out of single processor cards and then link them into the torus interconnect. Finally, it could of course add other processors to the boards, such as Tilera’s 64-bit Tile Gx3000s or 64-bit ARM processors when they become available.

    via SeaMicro adds Xeons to Atom smasher microservers • The Register.

    SeaMicro SM10000 (Photo credit: blogeee.net)

    Timothy Prickett Morgan, writing for The Register, has a great article on SeaMicro’s recent announcement of a Xeon-based 10U server chassis. Seemingly going against its first two generations of low-power, massively parallel server boxes, this one uses a brawny Intel Xeon server chip (albeit one that is fairly low power, with a low thermal design point).

    Sad as it may seem to me, the low-power, massively parallel CPU box must not be very lucrative. But it is a true testament to the flexibility of their original 10U server rack design that they could swap in the higher-power Intel Xeon CPUs. I doubt there are many competitors in this section of the market that could turn on a dime the way SeaMicro appears to have done with this Xeon-based server. Most designs are so heavily optimized for a particular CPU, power supply and form-factor layout that changing one component might force a bigger change order in the design department, and the product would take longer to develop and ship as a result.

    So even though I hope the 64-bit Intel Atom will still be SeaMicro’s flagship product, I’m also glad they can stay in the fight longer by selling into the ‘established’ older data center accounts worldwide. ‘Adapt or die’ is the clichéd adage of some technology writers, and I would mark this one with a plus (+) in the adapt column.

  • Tilera preps many-cored Gx chips for March launch • The Register

    “We’re here today shipping a 64-bit processor core and we are what looks like two years ahead of ARM,” says Bishara. “The architecture of the Tile-Gx is aligned to the workload and gives one server node per chip rather than a sea of wimpy nodes not acting in a cache coherent manner. We have been in this market for two years now and we know what hurts in data centers and what works. And 32-bit ARM just is not going to cut it. Applied Micro is doing their own core, and that adds a lot of risks.”

    via Tilera preps many-cored Gx chips for March launch • The Register.

    Tile of a TILE64 processor from Tilera
    Image via Wikipedia

    Tilera is preparing to ship a 36-core Tile-Gx CPU in March. It’s going to be packaged with a re-compiled CentOS Linux distribution on a development board (TILEncore). It will also have a number of re-compiled Unix utilities and packages included, so OEM shops can begin product development as soon as possible.

    I’m glad to see Tilera is still duking it out, battling for design wins with manufacturers selling into the data center, as it were. Larger memory addressing will help make the Tilera chips more competitive in commodity-Intel-hardware data center shops that build their own hardware. Maybe we’ll see full 64-bit memory addressing at some point as a follow-on to the current 40-bit address space extensions. The extensions are necessary to address more than the 32-bit limit of 4 GBytes, and an extra 8 address bits goes a long, long way toward competing against a fully 64-bit address space.
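    The arithmetic behind that claim is straightforward; a quick sketch:

        # How much further 40 address bits reach compared to 32.
        addressable_32_bit = 2 ** 32   #     4,294,967,296 bytes =    4 GiB
        addressable_40_bit = 2 ** 40   # 1,099,511,627,776 bytes = 1024 GiB

        print(addressable_32_bit / 2 ** 30)   # 4.0 GiB
        print(addressable_40_bit / 2 ** 30)   # 1024.0 GiB, i.e. 256x the 32-bit limit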

    Also, considering the work being done at ARM to optimize their chip designs for narrower design rules, Tilera should follow suit and attempt to shrink their chip architecture too. This would allow clock speeds to ease upward while keeping the thermal design point consistent with previous-generation Tile architecture chips, making Tile-Gx more competitive in the coming years. ARM announced a month ago that they will be developing a 22nm-class CPU core for future licensing by ARM customers. As it is now, Tilera uses an older fabrication design rule of around 40nm (which is still quite good given the expense required to shrink to narrower design rules), and they have plans to eventually migrate to a narrower rule. Ideally, however, they would stay no more than one generation behind the top-end process lines of Intel (which is targeting 14nm production lines in the near future).
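    As a rough first-order estimate (ignoring the many real-world complications of a process shrink), transistor density scales with the square of the design rule, so a move from a 40nm-class process toward a 22nm-class one would be worth roughly a 3x density gain in the same die area:

        # First-order estimate only: density scales roughly with (old_rule / new_rule)^2.
        old_rule_nm = 40
        new_rule_nm = 22

        density_gain = (old_rule_nm / new_rule_nm) ** 2
        print(round(density_gain, 1))   # ~3.3x more transistors in the same area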

  • ARM Pitches Tri-gate Transistors for 20nm and Beyond


    . . . 20 nm may represent an inflection point in which it will be necessary to transition from a metal-oxide semiconductor field-effect transistor (MOSFET) to Fin-Shaped Field Effect Transistors (FinFET) or 3D transistors, which Intel refers to as tri-gate designs that are set to debut with the company’s 22 nm Ivy Bridge product generation.

    via ARM Pitches Tri-gate Transistors for 20nm and Beyond.

    Three-dimensional transistors are in the news again. Previously, Intel announced it was adopting the new design for its next, smaller design rule with the Ivy Bridge generation of CPUs. Now ARM is also doing work to integrate similar technology into its ARM CPU cores. No doubt the desire to lower the thermal design point while maintaining clock speed is driving this move to refine and narrow the design rules for the ARM architecture. Knowing Intel is still the top research and development outfit for silicon semiconductors would give pause to anyone directly competing with them, but ARM is king of the low-power semiconductor, and keeping pace with Intel’s design rules is an absolute necessity.

    I don’t know how quickly ARM is going to be able to get a licensee to jump onboard and adopt the new design. Hopefully a large operation like Samsung can take this on and get the design into its development and production lines at a chip fabrication facility as soon as possible. Likewise, other contract manufacturers like Taiwan Semiconductor Manufacturing Company (TSMC) should try to get the design into their facilities quickly too. That way the cell-phone and tablet markets can benefit as well, since they use a lot of ARM-licensed CPU cores and similar intellectual property in their shipping products. My interest is not so much in the competition between Intel and ARM for low-power computing as in the overall performance of any single ARM design once it’s been in production for a while and optimized the way Apple optimizes its custom CPUs built on ARM-licensed cores. The single most outstanding achievement of Apple in the design and production of the iPad is its 10-hour battery life, which to date has not been beaten, even by other manufacturers and products that also license ARM intellectual property. So if the new ARM design is good and can be validated and prototyped with useful yields quickly, Apple will no doubt be the first to benefit, and by way of Apple so will the consumer (hopefully).
