Category: blogroll

This is what I subscribe to myself.

  • iPod classic stock dwindling at Apple

    iPod classic
    Image by Freimut via Flickr

    Apple could potentially upgrade the Classic to use a recent 220GB Toshiba drive, sized at the 1.8 inches the player would need.

    via iPod classic stock dwindling at Apple, other retailers | iPodNN.

    Interesting indeed: it appears Apple is letting supplies of the iPod Classic run low. There is no immediate word as to why, but there could be a number of reasons, as speculated in this article. Most technology news websites understand the divide between the iPhone/Touch operating system and the old legacy iPod devices (an embedded OS that only runs the device itself). Apple would like to consolidate its consumer product development efforts by slowly winnowing out non-iOS-based iPods. However, given the hardware requirements demanded by iOS, Apple will be hard pressed to jam such a full-featured bit of software into the iPod nano and iPod shuffle. So whither the old click-wheel iPod empire?

  • Toshiba rolls out 220GB – Could the iPod Classic see a refresh?

    first generation iPod
    Image via Wikipedia

    Toshiba Storage Device Division has introduced its MKxx39GS series 1.8-inch spinning platter drives with SATA connectors.

    via Toshiba rolls out 220GB, extra-compact 1.8-inch hard drive | Electronista.

    Seeing this announcement reminded me a little of the old IBM Microdrive: a tiny spinning disk (built around a 1-inch platter) that fit into a CompactFlash Type II form factor. Those drives started out at 340MB, an astoundingly dense storage format that digital photographers gravitated to very quickly. The Microdrive was eventually improved to around 1GByte per drive in the same small form factor. In time the market for this storage dried up, as smaller and smaller cameras became available with larger amounts of internal storage and slots for removable media like Sony’s Memory Stick or the SD Card format. The Microdrive was also impeded by a very high cost per MByte versus other available storage by the end of its useful lifespan.

    But no one knows what new innovative products might hit the market. Laptop manufacturers continued to improve on their expansion bus, known variously as PCMCIA, PC Card and eventually CardBus. The idea was that you could plug any kind of device you wanted into that expansion bus and connect to a dial-up network, a wired Ethernet network or a wireless network. CardBus was 32-bit clean and designed to be as close to the desktop PCI expansion bus as possible. Folks like Toshiba were making small hard drives that would fit the tiny dimensions of that slot, containing all the drive electronics within the CardBus card itself. Storage size improved as the hard drive market itself improved the density of its larger 2.5″ and 3.5″ desktop hard drive products.

    I remember the first 5GByte CardBus hard drive and marveling at how far the folks at Toshiba and Samsung had outdistanced IBM. It was soon followed by a 10GByte drive. However, just as we were wondering how cool this was, Apple created its own take on a product popularized under the Rio brand: a new kind of handheld music player that primarily played back .mp3 audio files. It could hold 5GBytes of music (compared to the 128MBytes and 512MBytes of most top-of-the-line Rio products at the time). It had a slick, very easy-to-navigate interface with a scroll wheel you could click down on with your thumb. Yes, it was the first-generation iPod, and it demanded a large quantity of those little bitty hard drives Samsung and Toshiba were bringing to market.

    Each year storage density would increase and a new generation of drives would arrive. Each year a new iPod would hit the market taking advantage of the new hard drives. The numbers seemed to double very quickly: 20GB, 30GB (the first ‘video’-capable iPod), 40GB, 60GB, 120GB, and finally today’s iPod Classic at a whopping 160GBytes of storage! And then came the great freeze, the slowdown and the transition to Flash-memory-based iPods, which were solid state. No moving parts, no chance of mechanical failure, no loss of data, and speeds unmatched by any hard drive of any size currently on the market. The Flash storage transition also meant lower power requirements, longer battery life and, for the first time, the real option of marrying a cell phone with your iPod (I do know there was an abortive attempt to do this on a smaller scale with Motorola phones at Cingular). The first two options were 4GB and 8GB iPhones using solid-state flash memory. So whither the iPod Classic?

    The iPod Classic is still on the market for those wishing to pay slightly less than the price of an iPod touch. You get a much larger amount of total storage (for both video and audio), but things have stayed put at 160GBytes for a very long time now. Manufacturers like Toshiba hadn’t come out with any new product, seeing the end in sight for the small 1.8″ hard drive. Samsung dropped its 1.8″ hard drives altogether, seeing where Apple was going with its product plan. So I’m both surprised and slightly happy to see Toshiba soldier onward and bring out a new product. I’m thinking Apple should really do a product refresh on the iPod Classic. They could also add iOS as a means of up-scaling and up-marketing the device to people who cannot afford the iPod touch, leaving the price right where it is today.

  • Chip upstart Tilera in the news

    Diagram Of A Partial Mesh Network

    As of early 2010, Tilera had over 50 design wins for the use of its SoCs in future networking and security appliances, which was followed up by two server wins with Quanta and SGI. The company has had a dozen more design wins since then and now claims to have over 150 customers who have bought prototypes for testing or chips to put into products.

    via Chip upstart Tilera lines up $45m in funding • The Register.

    There hasn’t been a lot of news about Tilera recently, but they are still selling products and raising funds through private investments. Their product roadmap is showing great promise as well. I want to see more of their shipping product get tested by the online technology press; I don’t care whether Infoworld, Network World, Tom’s Hardware or AnandTech does it. Whether it’s security appliances or actual multi-core servers, it would be cool to see Tilera compared, even if it were an apples-and-oranges kind of test. On paper, the mesh network connecting Tilera’s multi-core CPUs is designed to set them apart from any other product currently on the market. Similarly, the ease of reaching every core through that mesh is meant to make running a single system image much easier, since the OS is distributed across all the cores almost invisibly. In a word, Tilera and its next-closest competitor SeaMicro are cloud computing in a single, solitary box.

    Cloud computing, for those who don’t know, is an attempt to create a utility like the water or electrical system in the town where you live. The utility has excess capacity, and what it doesn’t use it sells off to connected utility systems, so you always have enough power to cover your immediate needs with a little in reserve for emergencies. On days when people don’t use as much electricity, you cut back production a little or sell the excess to someone who needs it. Now imagine that the electricity is computer cycles doing additions, subtractions or longer-form mathematical analysis, all in parallel and scaling out to extra computer cores as needed depending on the workload. Amazon already sells a service like this, and Microsoft does too. You sign up to use their ‘compute cloud’, load your applications and your data, and just start crunching away while the meter runs. You get billed based on how much of the computing resource you used.
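
    To make the computing-as-a-utility idea a bit more concrete, here is a minimal sketch (my own illustration, nothing to do with Tilera’s or Amazon’s actual APIs) of the same principle on a single box: the work is chopped into independent chunks and farmed out to however many cores happen to be available, so throwing more cores at it scales throughput without changing the program.

        import multiprocessing as mp

        def analyze(chunk):
            # Stand-in for the real work: any independent calculation on a slice of data.
            return sum(x * x for x in chunk)

        if __name__ == "__main__":
            data = list(range(1_000_000))
            step = 10_000
            chunks = [data[i:i + step] for i in range(0, len(data), step)]

            # Use every core the "utility" currently offers; on a bigger machine
            # (or a bigger cloud instance) the same code simply spreads wider.
            with mp.Pool(processes=mp.cpu_count()) as pool:
                partials = pool.map(analyze, chunks)

            print("total:", sum(partials))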

    Nowadays, unfortunately, data centers are full of single-purpose servers doing one thing and sitting idle most of the time. This has become such a concern that a whole industry has cropped up around splitting those machines into thinner slices with software like VMware. Those little slivers of a real computer then soak up the idle time of that once single-purpose machine and occupy a lot more of its resources. But you still have that full-sized hog of an old desktop tower sitting in a 19-inch rack, generating heat and sucking up too much power. Now it’s time to scale down the computer again, and that’s where Tilera comes in with its multi-core, low-power, mesh-networked CPUs. And investment partners are rolling in as a result of the promise of this new approach!

    Numerous potential customers, venture capital outfits, and even fabrication partners are jumping in to provide a round of funding that wasn’t even really being solicited by the company. Tilera just had people falling all over themselves writing checks to get a piece of the pie before things take off. It’s a good sign in these stagnant times for startup companies. Hopefully this will buy more time for the company’s roadmap of future CPUs, scaling up to the 200-core CPU that would be the peak achievement in this quest for high-performance, low-power computing.

  • The Sandy Bridge Review: Intel Core i7-2600K – AnandTech

    Quick Sync is just awesome. It’s simply the best way to get videos onto your smartphone or tablet. Not only do you get most if not all of the quality of a software based transcode, you get performance that’s better than what high-end discrete GPUs are able to offer. If you do a lot of video transcoding onto portable devices, Sandy Bridge will be worth the upgrade for Quick Sync alone.

    For everyone else, Sandy Bridge is easily a no brainer. Unless you already have a high-end Core i7, this is what you’ll want to upgrade to.

    via The Sandy Bridge Review: Intel Core i7-2600K, i5-2500K and Core i3-2100 Tested – AnandTech :: Your Source for Hardware Analysis and News.

    Previously in this blog I have recounted stories from Tom’s Hardware and AnandTech.com about the wicked cool idea of tapping the vast resources contained within your GPU while you’re not playing video games. GPU makers like nVidia and AMD both wanted to market their products to people who not only gamed but occasionally ripped video from DVDs and played it back on iPods or other mobile devices. The time sunk into doing these kinds of conversions was made somewhat less painful by the ability to run the process on a dual-core Wintel computer, browsing web pages while re-encoding the video in the background. But to get better speeds one almost always needs to monopolize all the cores on the machine, and free software like HandBrake will take advantage of those extra cores, slowing your machine but effectively speeding up the transcoding process. There was hope that GPUs could accelerate the transcoding process beyond what was achievable with a multi-core CPU from Intel. Another example is Apple’s widespread adoption of OpenCL as a pipeline for sending the GPU any video frame rendering or processing that may need to be done in iTunes, QuickTime or the iLife applications. And where I work, we get asked to do a lot of transcoding of video to different formats for customers; usually someone wants a rip from a DVD that they can put on a flash drive and take with them into a classroom.

    However, now it appears there is a revolution in speed in the works, where Intel is giving you faster transcodes for free. I’m talking about Intel’s new Quick Sync technology, which uses the integrated graphics core as a video transcode accelerator. The transcoding speeds are amazingly fast and, given the speed, trivial for anyone including the casual user. In the past everyone seemed to complain about how slow their computer was, especially for ripping DVDs or transcoding the rips to smaller, more portable formats. Now it takes a few minutes to get an hour of video into the right format. No more blue Monday. Follow the link to the story and analysis from AnandTech.com, who ran head-to-head comparisons of all the available techniques for re-encoding a Blu-ray release into a smaller .mp4 file encoded as H.264. They compared four-core Intel CPUs (which took the longest and got pretty good quality) versus GPU-accelerated transcodes versus the new Intel Quick Sync technology coming out on the Sandy Bridge generation of Intel Core i7 CPUs. It is wicked cool how fast these transcodes are, and it will make the process of transcoding trivial compared to how long it takes to actually ‘watch’ the video you spent all that time converting.
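
    If you want to play with this kind of hardware-assisted transcode yourself, here is a rough sketch (my own example, assuming an ffmpeg build that includes Intel’s Quick Sync encoder; encoder names and flags vary by version) that shells out to ffmpeg and falls back to a plain software x264 encode when the hardware path isn’t available.

        import subprocess

        def transcode(src, dst, use_quicksync=True):
            # h264_qsv is ffmpeg's Quick Sync H.264 encoder; libx264 is the software fallback.
            encoder = "h264_qsv" if use_quicksync else "libx264"
            cmd = [
                "ffmpeg", "-i", src,
                "-c:v", encoder, "-b:v", "2500k",   # modest bitrate for a portable-device rip
                "-c:a", "aac", "-b:a", "160k",
                dst,
            ]
            subprocess.run(cmd, check=True)

        if __name__ == "__main__":
            # Hypothetical filenames, purely for illustration.
            transcode("movie_rip.mkv", "movie_for_ipod.mp4")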

    Links to older GPU accelerated video articles:

    https://carpetbomberz.com/2008/06/25/gpu-accelerated-h264-encoding/
    https://carpetbomberz.com/2009/06/12/anandtech-avivo/
    https://carpetbomberz.com/2009/06/23/vreveal-gpu/
    https://carpetbomberz.com/2010/10/18/microsoft-gpu-video-encoding-patent/

  • 450mm chip wafers | Electronista

    Silicon wafers from 2 inches to 8 inches in diameter
    Image via Wikipedia

    Intel factory to make first 450mm chip wafers | Electronista.

    Being a student of the history of technology, I know that the silicon semiconductor industry has been able to scale production according to Moore’s Law. However, apart from the advances in how small the transistors can be made (the real basis of Moore’s Law), the other scaling factor has been the size of the wafers. Back in the old days, silicon crystals had to be drawn out from a furnace at a very even, steady rate, which forced them to be thin cylinders 1-2″ in diameter. As techniques improved (including a neat trick where the crystal was re-melted to purify it), the crystals increased in diameter to a nice 4″ size that helped bring down costs. Then came the big migration to 6″ wafers, then 8″, and now the 300mm wafer (roughly 12″). Intel is still on its freight train to further bring down costs by moving wafers up to the next largest size (450mm) while still shrinking the parts (down to an unbelievably skinny 22nm). As the wafers continue to grow, the cost of processing equipment goes up, and the cost of the whole production facility does too. The big price tag for a new Intel production fab was always around $2 billion; there may be multiple production lines in that fab, but you needed that money up front in order to be competitive. And Intel was more than competitive: it could put 3 lines into production in 3 years (blowing the competition out of the water for a while) and make things very difficult for the rest of the industry.
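
    The economics of the wafer-size jump are really just geometry: the die stays the same size while the wafer area grows with the square of the diameter. A quick back-of-the-envelope calculation (my own illustration, ignoring edge loss and yield, with a made-up die size) shows why 450mm is so attractive.

        import math

        def usable_dies(wafer_diameter_mm, die_area_mm2):
            # Crude estimate: wafer area divided by die area, ignoring edge loss and yield.
            wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
            return int(wafer_area // die_area_mm2)

        die_area = 100.0  # hypothetical 10mm x 10mm die
        for diameter in (300, 450):
            print(diameter, "mm wafer ->", usable_dies(diameter, die_area), "dies")

        # A 450mm wafer has (450/300)**2 = 2.25x the area of a 300mm wafer,
        # so each pass through the fab yields roughly 2.25x as many chips.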

    Where things will really shake up is on the Flash memory production lines. The design rules for current flash memory chips at Intel are right around 22nm, and Intel and Samsung are both trying to shrink the feature sizes of all the circuits on their single- and multi-level-cell Flash memory chips. Add to this the stacking of chips into super sandwiches, and you find they can glue together 8 of their 8GByte chips, making a single, very thin 64GByte memory package. That package is then mated to a memory controller and voila, the iPhone suddenly hits 64GBytes of storage for all your apps and mp4s from iTunes. Similarly, on the hard drive end of the scale things will also wildly improve. Solid State Disk capacities should creep further upwards (beyond the top-of-the-line 512GByte SSDs), as will PCI Express-based storage devices (probably doubling in capacity to 2 TeraBytes) after 450mm wafers take hold across the semiconductor industry. So it’s going to be a big deal if Chinese, Japanese and American companies get on the large-silicon-wafer bandwagon.

  • Bob Slydell: What would you say, Ya DO Here?

    I watched this video for the second time the day before Thanksgiving. I agree with it completely, and not just to make fun of where I work. I will admit that in the past I have been juvenile and overly idealistic in my beliefs about the role of managers. We all want to do well, I think, but there’s still more than a little truth to the criticism that people get more done outside regular office hours. I used to stay long after our office closed to get all the work done I had put off during the day, simply because I didn’t want to lose my place or be interrupted. It really does call into question the logic of The Office and what gets accomplished in an 8-hour interval each day. And it also calls into question what Work really is. I always just assumed the interruptions were the work, and in spite of watching the video, I still feel that is true.

  • A Conversation with Ed Catmull – ACM Queue

    EC: Here are the things I would say in support of that. One of them, which I think is really important—and this is true especially of the elementary schools—is that training in drawing is teaching people to observe.
    PH: Which is what you want in scientists, right?
    EC: That’s right. Or doctors or lawyers. You want people who are observant. I think most people were not trained under artists, so they have an incorrect image of what an artist actually does. There’s a complete disconnect with what they do. But there are places where this understanding comes across, such as in that famous book by Betty Edwards [Drawing on the Right Side of the Brain].

    via A Conversation with Ed Catmull – ACM Queue.

    This interview is with a computer scientist named Ed Catmull. In the time since Ed Catmull entered the field, we’ve gone from computers crunching numbers like a desktop calculator to computers producing full 3D animated films. Ed Catmull’s single most important goal was to create an animated film using a computer. He eventually accomplished that and more once he helped form Pixar. All of his research and academic work was focused on that one goal.

    I’m always surprised to see what references or influences people quote in interviews; in fact, I am really encouraged. It was about 1988 or so when I took a copy of Betty Edwards’s book that my mom had, started reading it and did some of the exercises in it. Stranger still, I went back to college and majored in art (not drawing but photography). So I think I understand exactly what Ed Catmull means when he talks about being observant. In every job I’ve had, computer-related or otherwise, that ability to be observant just doesn’t exist in a large number of people. Eventually people begin to ask me, how do you know all this stuff, and when did you learn it? Most times, the things they are most impressed by are things like noticing something and trying a different strategy when attempting to fix a problem. The proof is that I can do this with things I am unfamiliar with and usually make some headway towards fixing them. Whether the thing is mechanical or computer-related doesn’t matter. I make good guesses, and it’s not because I’m an expert in anything; I merely notice things. That’s all it is.

    So maybe everyone should read and work through Betty Edwards’s book Drawing on the Right Side of the Brain. If nothing else it might make you feel a little dislocated and uncomfortable. It might shake you up and make you question some preconceived notions about yourself, like the feeling that you can’t draw or that you are not good at art. I think that with practice anyone can draw, and with practice anyone can become observant.

  • TidBITS Opinion: A Eulogy for the Xserve: May It Rack in Peace

    Image representing Apple as depicted in CrunchBase
    Image via CrunchBase

    Apple’s Xserve was born in the spring of 2002 and is scheduled to die in the winter of 2011, and I now step up before its mourners to speak the eulogy for Apple’s maligned and misunderstood server product.

    via TidBITS Opinion: A Eulogy for the Xserve: May It Rack in Peace.

    Chuck Goolsbee’s eulogy is spot on, and every point rings true even in my limited experience. I’ve purchased 2 different Xserves since they were introduced: one is a 2nd-generation G4 model, the other a 2006 Intel model (thankfully I skipped the G5 altogether). Other than one bug in the Intel-based Xserve (a weird blue video screen), there have been no bumps or quirks to report. I agree that the form factor of the housing is way too long; even in the rack I used (a discarded Sun Microsystems unit), the thing was really inelegant. The drive bays are a sore point for me as well. I dearly wanted to re-arrange, reconfigure and upgrade the drive bays on both the old and newer Xserve, but the expense of acquiring new units was prohibitive at best, and they went out of manufacture very quickly after being introduced. If you neglected to buy your Xserve fully configured with the maximum storage available when it shipped, you were more or less left to fend for yourself. You could troll eBay and bulletin boards to score a bona fide Apple drive bay, but the supply was so limited it drove up prices and became a black market. The XRaid didn’t help things either, as drive bays were not consistently swappable between the Xserve and the XRaid box. Given the limited time most sysadmins have to research purchases like this when upgrading an existing machine, it was a total disaster, a big fail, and an unsurprising one.

    I will continue to run my Xserve units until the drives or power supplies fail. It could happen any day, at any time, and hopefully I will have sufficient warning to get a new Mac mini server to replace them. Until then I, too, along with Chuck Goolsbee and the rest of the Xserve sysadmins, will kind of wonder what could have been.

  • Intel lets outside chip maker into its fabs • The Register

    Banner image: Achronix 22i
    Intel and Achronix: 2 great tastes that taste great together

    According to Greg Martin, a spokesman for the FPGA maker, Achronix can compete with Xilinx and Altera because it has, at 1.5GHz in its current Speedster1 line, the fastest such chips on the market. And by moving to Intel’s 22nm technology, the company could have ramped up the clock speed to 3GHz.

    via Intel lets outside chip maker into its fabs • The Register.

    That kind of says it all in one sentence, or two sentences in this case. The fastest FPGA on the market is quite an accomplishment unto itself. Putting that FPGA on the world’s most advanced production line and silicon wafer technology is what Andy Grove would have called the 10X effect. FPGAs are reconfigurable processors that can have their circuits re-routed and optimized for different tasks over and over again. This is really beneficial for very small batches of processors where you need a custom design. Some of the things they can speed up are math operations or lookups in very large database searches. In the past I was always curious whether they could be used as a general-purpose computer that could switch gears and optimize itself for different tasks. I didn’t know whether it would work or be worthwhile, but it really seemed like there was a vast untapped reservoir of power in the FPGA.
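
    A loose software analogy for why FPGAs are so quick at this sort of thing (my own illustration, nothing to do with Achronix’s actual toolchain): an FPGA’s lookup tables effectively ‘precompute’ logic once when the chip is configured, so every evaluation afterwards is just a table lookup.

        # A software analogy for an FPGA's lookup tables (LUTs): instead of
        # computing a function each time, precompute every answer once and
        # then answer each query with a single table lookup.

        def popcount(x):
            return bin(x).count("1")

        # "Configure" the table once (the rough equivalent of loading a bitstream).
        LUT = [popcount(i) for i in range(256)]

        def popcount_fast(x):
            # One lookup per byte instead of looping over all 32 bits.
            return (LUT[x & 0xFF] + LUT[(x >> 8) & 0xFF] +
                    LUT[(x >> 16) & 0xFF] + LUT[(x >> 24) & 0xFF])

        print(popcount_fast(0xDEADBEEF))  # 24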

    Some supercomputer manufacturers have started using FPGAs as special-purpose co-processors and have found immense speed-ups as a result. Oil prospecting companies have also used them to speed up analysis of seismic data and place good bets on dropping a well bore in the right spot. But price has always been a big barrier to entry; as quoted in this article, the cost is $1,000 per chip, which limits the appeal to buyers for whom price is no object but speed and time are paramount. The two big competitors in the field of FPGA manufacturing are Altera and Xilinx, both of which design the chips but have them manufactured in other countries. This has led to FPGAs being second-class citizens, using older-generation chip technologies on old manufacturing lines. They always had to deal with what they could get, and performance in terms of clock speed was always less as well.

    During the megahertz and gigahertz wars it was not unusual to see chip speeds increasing every month. FPGAs sped up too, but not nearly as fast; I remember seeing 200MHz and 400MHz touted for Xilinx and Altera top-of-the-line products. With Achronix running at 1.5GHz, things have changed quite a bit. That’s a general-purpose CPU speed in a completely customizable FPGA, which means you get speed that makes the FPGA even more useful. However, instead of going faster, this article points out that people would rather buy the same speed but use less electricity and generate less heat. There’s no better way to do that than to shrink the size of the circuits on the FPGA, and that is the core philosophy of Intel. The two have just teamed up to put the Achronix FPGA on the smallest-feature-size production line, run by the most optimized, cost-conscious manufacturer of silicon chips bar none.

    Another point made in the article is that the market for FPGAs at this level of performance also tends to be more defense-contract oriented. As a result, to maintain the level of security necessary to sell chips to that industry, the chips need to be made in the good ol’ USA, and Intel doesn’t outsource anything when it comes to its top-of-the-line production facilities. Everything is in Oregon, Arizona or Washington State and is guaranteed not to have any secret backdoors built in to funnel data to foreign governments.

    I would love to see some university research projects start looking at FPGAs again and see whether, as speeds go up and power comes down, there’s a happy medium or mix of general-purpose CPUs and FPGAs that might help the average joe working on his desktop, laptop or iPad. All I know is that Intel entering a market makes it more competitive, and hopefully it will lower the barrier to entry for anyone who would really like to get their hands on a useful processor they can customize to their needs.

  • Intel forms flash gang of five • The Register

    Intel, Dell, EMC, Fujitsu and IBM are forming a working group to standardise PCIe-based solid state drives (SSDs), and have a webcast coming out today to discuss it.

    via Intel forms flash gang of five • The Register.

    Now this is interesting: just two weeks after Angelbird pre-announced its own PCIe flash-based SSD product, Intel is forming a consortium. Things are heating up, and this is now a hot new category. I want to draw your attention to a sentence in this Register article:

    By connecting to a server’s PCIe bus, SSDs can pour out their contents faster to the server than by using Fibre Channel or SAS connectivity. The flash is used as a tier of memory below DRAM and cuts out drive array latency when reading and writing data.

    This is without a doubt the first instance I have read of a belief, even just in the mind of this article’s author, that Fibre Channel and Serial Attached SCSI aren’t fast enough. Who knew PCI Express would be preferable to an old storage interface when it comes to enterprise computing? Look out, world: there’s a new sheriff in town, and his name is PCIe SSD. This product category, though, will not be for the consumer end of the market, at least not for this consortium. It is targeting the high-margin, high-end data center market, where interoperability keeps vendor lock-in from occurring. By choosing interoperability, everyone has to gain an advantage not necessarily through engineering but most likely through firmware. If that’s the differentiator, then whoever has the best embedded programming team will have the best throughput and the highest-rated product. Let’s hope this all eventually finds a market saturation point that drives the technology down into the consumer desktop, enabling the next big burst in desktop computer performance. I hope PCIe SSDs become the storage of choice and that motherboards can be rid of all SATA disk I/O ports and firmware in the near future. We don’t need SATA SSDs; we do need PCIe SSDs.
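
    To see why ‘a tier of memory below DRAM’ matters, here is a rough sketch with order-of-magnitude latencies (my own illustrative numbers, not measurements from the article) showing where PCIe-attached flash sits compared to reaching flash through a drive-array interconnect.

        # Rough, order-of-magnitude access latencies (illustrative, not measured):
        tiers = {
            "DRAM":                      100e-9,   # ~100 nanoseconds
            "PCIe-attached flash":       100e-6,   # ~100 microseconds, no array controller in the path
            "Flash behind FC/SAS array": 500e-6,   # extra hops through HBA, fabric and array firmware
            "Spinning disk array":       10e-3,    # ~10 milliseconds of seek + rotation
        }

        for name, seconds in tiers.items():
            print(f"{name:28s} ~{seconds * 1e6:10.1f} microseconds per access")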