Category: technology

General technology, not anything in particular

  • Chip upstart Tilera in the news

    Diagram Of A Partial Mesh Network

    As of early 2010, Tilera had over 50 design wins for the use of its SoCs in future networking and security appliances, which was followed up by two server wins with Quanta and SGI. The company has had a dozen more design wins since then and now claims to have over 150 customers who have bought prototypes for testing or chips to put into products.

    via Chip upstart Tilera lines up $45m in funding • The Register.

    There hasn't been a lot of news about Tilera recently, but they are still selling products and raising funds through private investments, and their product roadmap is showing great promise. I want to see more of their shipping products tested by the online technology press; I don't care whether Infoworld, Network World, Tom's Hardware or Anandtech does it. Whether it's security appliances or actual multi-core servers, it would be cool to see Tilera compared to the competition, even in an apples-and-oranges kind of test. On paper, the mesh network connecting Tilera's many-core CPUs is designed to set them apart from any other product currently on the market. Likewise, the ease of reaching every core through that mesh is meant to make running a single system image much simpler, since the image is distributed across all the cores almost invisibly. In short, Tilera and its closest competitor SeaMicro are cloud computing in a single, solitary box.
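    To get a feel for why an on-chip mesh matters, the little sketch below counts the hops a message takes under simple dimension-ordered (XY) routing on a square grid of tiles. This is a generic toy model of a mesh, not Tilera's actual iMesh fabric; the grid size and the routing policy are assumptions chosen purely for illustration.

    ```python
    # Toy model of hop counts on a square mesh of cores using
    # dimension-ordered (XY) routing: travel along X first, then Y.
    # A generic mesh sketch, not Tilera's actual iMesh implementation.

    def xy_hops(src, dst):
        """Hops between two tiles given as (x, y) grid coordinates."""
        (sx, sy), (dx, dy) = src, dst
        return abs(dx - sx) + abs(dy - sy)

    def average_hops(width, height):
        """Average hop count over all distinct source/destination tile pairs."""
        tiles = [(x, y) for x in range(width) for y in range(height)]
        pairs = [(a, b) for a in tiles for b in tiles if a != b]
        return sum(xy_hops(a, b) for a, b in pairs) / len(pairs)

    if __name__ == "__main__":
        # An 8x8 grid stands in for a 64-core part.
        print("average hops on an 8x8 mesh:", round(average_hops(8, 8), 2))
        print("worst case (corner to corner):", xy_hops((0, 0), (7, 7)))
    ```

    Even in this crude model the average path stays short, which is the intuition behind distributing one system image across dozens of cores without a shared bus becoming the bottleneck.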

    Cloud computing for those who don’t know is an attempt to create a utility like the water system or electrical system in the town where you live. The utility has excess capacity, and what it doesn’t use it sells off to connected utility systems. So you always will have enough power to cover your immediate needs with a little in reserve for emergencies. On the days where people don’t use as much electricity you cut back on production a little or sell off the excess to someone who needs it. Now imagine that electricity is computer cycles doing additions, subtractions or longer form mathematical analysis all in parallel and scaling out to extra computer cores as needed depending on the workload. Amazon has a service they sell like this already, Microsoft too. You sign up to use their ‘compute cloud’ and load your applications, your data and just start crunching away while the meter runs. You get billed based on how much of the computing resource you used.
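    Here is a minimal sketch of that pay-as-you-go idea. The hourly rate and the workload numbers are made up for illustration and are not Amazon's or Microsoft's actual pricing.

    ```python
    # Minimal sketch of metered, pay-as-you-go compute billing.
    # The hourly rate and workload below are hypothetical, not any vendor's real pricing.

    HOURLY_RATE_PER_INSTANCE = 0.10  # assumed price in dollars per instance-hour

    def compute_bill(instance_hours):
        """The bill is simply metered usage times the hourly rate."""
        return instance_hours * HOURLY_RATE_PER_INSTANCE

    if __name__ == "__main__":
        # A job that scales out to 20 instances for 6 hours, then idles on 2 instances overnight.
        usage = 20 * 6 + 2 * 18   # instance-hours for one day
        print(f"instance-hours: {usage}, bill: ${compute_bill(usage):.2f}")
    ```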

    Nowadays, unfortunately, data centers are full of single-purpose servers doing one thing and sitting idle most of the time. This has become such a concern that a whole industry has cropped up around splitting those machines into thinner slices with software like VMware. Those little slivers of a real computer then soak up the idle time of the once single-purpose machine and occupy far more of its resources. But you still have that full-sized hog of an old desktop tower sitting in a 19-inch rack, generating heat and sucking up too much power. Now it's time to scale down the computer again, and that's where Tilera comes in with its multi-core, low-power, mesh-networked CPUs. And investment partners are rolling in as a result of the promise of this new approach!

    Numerous potential customers, venture capital outfits, and even fabrication partners are jumping in to provide a round of funding the company wasn't really soliciting. Tilera simply had people falling all over themselves writing checks to get a piece of the pie before things take off. That's a good sign in these stagnant times for startups. Hopefully this buys more time for the company's CPU roadmap, scaling up eventually to the 200-core part that would be the peak achievement in this quest for high-performance, low-power computing.

  • The Sandy Bridge Review: Intel Core i7-2600K – AnandTech

    Quick Sync is just awesome. It's simply the best way to get videos onto your smartphone or tablet. Not only do you get most if not all of the quality of a software based transcode, you get performance that's better than what high-end discrete GPUs are able to offer. If you do a lot of video transcoding onto portable devices, Sandy Bridge will be worth the upgrade for Quick Sync alone.

    For everyone else, Sandy Bridge is easily a no brainer. Unless you already have a high-end Core i7, this is what you'll want to upgrade to.

    via The Sandy Bridge Review: Intel Core i7-2600K, i5-2500K and Core i3-2100 Tested – AnandTech :: Your Source for Hardware Analysis and News.

    Previously in this blog I have recounted stories from Tom's Hardware and Anandtech.com surrounding the wicked cool idea of tapping the vast resources of your GPU while you're not playing video games. GPU makers like nVidia and AMD both wanted to market their products to people who not only gamed but occasionally ripped video from DVDs and played it back on iPods or other mobile devices. The time sunk into these conversions was made somewhat less painful by the ability to run the process on a dual-core Wintel computer, browsing web pages while re-encoding the video in the background. But to get better speeds one almost always needs to monopolize all the cores on the machine, and free software like HandBrake will take advantage of those extra cores, slowing your machine but effectively speeding up the transcoding. There was hope that GPUs could accelerate transcoding beyond what a multi-core CPU from Intel could achieve. Another example is Apple's widespread adoption of OpenCL as a pipeline for sending the GPU any video frame rendering or video processing that needs to be done in iTunes, QuickTime or the iLife applications. And where I work, we get asked to do a lot of transcoding of video to different formats for customers. Usually someone wants a rip from a DVD that they can put on a flash drive and take with them into a classroom.

    However, now it appears a revolution in speed is in the works, with Intel giving you faster transcodes for free. I'm talking about Intel's new Quick Sync technology, which uses the integrated graphics core as a video transcode accelerator. The transcodes are amazingly fast, and given that speed, trivial for anyone to do, including the casual user. In the past everyone seemed to complain about how slow their computer was, especially for ripping DVDs or transcoding the rips to smaller, more portable formats. Now it takes a few minutes to get an hour of video into the right format. No more blue Monday. Follow the link to the story and analysis from Anandtech.com, where they ran head-to-head comparisons of all the available techniques for re-encoding a Blu-ray release into a smaller .mp4 file encoded as H.264. They compared Intel four-core CPUs (which took the longest and produced pretty good quality) against GPU-accelerated transcodes and against the new Quick Sync technology coming out soon on Sandy Bridge generation Intel Core i7 CPUs. It is wicked cool how fast these transcodes are, and it will make transcoding trivial compared to the time it takes to actually watch the video you spent all that time converting.
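    For readers who want to try a comparison like this at home, the sketch below shells out to ffmpeg twice, once with the software x264 encoder and once with the Quick Sync (h264_qsv) encoder, and times each run. It assumes ffmpeg is installed and was built with Quick Sync support, and it uses a placeholder file name, input.mkv; this is not the exact workflow AnandTech used, just a rough harness.

    ```python
    # Rough timing harness: CPU (libx264) transcode vs. Quick Sync (h264_qsv)
    # transcode via ffmpeg. Assumes an ffmpeg build with QSV support is on the
    # PATH; "input.mkv" is a placeholder source file.
    import subprocess
    import time

    def transcode(encoder, outfile):
        cmd = [
            "ffmpeg", "-y", "-i", "input.mkv",
            "-c:v", encoder, "-b:v", "4M",   # target a portable-device bitrate
            "-c:a", "aac", "-b:a", "128k",
            outfile,
        ]
        start = time.time()
        subprocess.run(cmd, check=True)
        return time.time() - start

    if __name__ == "__main__":
        cpu_secs = transcode("libx264", "out_cpu.mp4")
        qsv_secs = transcode("h264_qsv", "out_qsv.mp4")
        print(f"software x264: {cpu_secs:.1f}s, Quick Sync: {qsv_secs:.1f}s")
    ```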

    Links to older GPU accelerated video articles:

    https://carpetbomberz.com/2008/06/25/gpu-accelerated-h264-encoding/
    https://carpetbomberz.com/2009/06/12/anandtech-avivo/
    https://carpetbomberz.com/2009/06/23/vreveal-gpu/
    https://carpetbomberz.com/2010/10/18/microsoft-gpu-video-encoding-patent/

  • Next-Gen SandForce Controller Seen on OCZ SSD

    Image representing SandForce, via CrunchBase

    Last week during CES 2011, The Tech Report spotted OCZ’s Vertex 3 Pro SSD–running in a demo system–using a next-generation SandForce SF-2582 controller and a 6Gbps Serial ATA interface. OCZ demonstrated its read and write speeds by running the ATTO Disk Benchmark which clearly showed the disk hitting sustained read speeds of 550 MB/s and sustained write speeds of 525 MB/s.

    via Next-Gen SandForce Controller Seen on OCZ SSD.

    Big news: test samples of the SandForce SF-2000 series flash memory controllers are being shown in products demoed at the Consumer Electronics Show, and SSDs with SATA interfaces are testing through the roof. The numbers quoted for a 6Gb/sec. SATA SSD are in the 500+ MB/sec. range. Previously you would need a PCIe-based SSD from OCZ or Fusion-io to get anywhere near that kind of sustained speed. Combine this with the possibility of the SF-2000 being installed on future PCIe-based SSDs and there's no telling how far the throughput will scale. If four of the Vertex drives were bound together as a RAID 0 set with SF-2000 controllers managing it, would we see linear scaling of throughput? Could we see 2,000 MB/sec. on PCIe x8 SSD cards? And what would be the price of such a card fully configured with 1.2 TB of SSD storage? Hard to say what may come, but just the thought of being able to buy retail versions of these makes me think a paradigm shift is in the works that neither Intel nor Microsoft is really thinking about right now.
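    Back-of-the-envelope, the scaling question works out like this, assuming ideal striping and ignoring controller and PCIe overhead, so treat the result as an upper bound rather than a benchmark:

    ```python
    # Ideal RAID 0 scaling: sustained throughput of one drive times the stripe count.
    # Real cards lose some of this to the RAID controller and the PCIe bridge.
    single_drive_read_mb_s = 550    # Vertex 3 Pro sustained read from the ATTO run
    single_drive_write_mb_s = 525   # sustained write from the same demo
    stripes = 4

    print("ideal 4-drive RAID 0 read :", single_drive_read_mb_s * stripes, "MB/s")
    print("ideal 4-drive RAID 0 write:", single_drive_write_mb_s * stripes, "MB/s")
    ```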

    One comment on the article as posted on the original website, Tom's Hardware, observed that the speeds quoted for this SATA 6Gbps drive are approaching the memory bandwidth of PC-133 DRAM, now several generations old. As I have said previously, I still have an old first-generation Titanium PowerBook from Apple that uses that very PC-133 memory standard. Given that SSDs are fast approaching the speed of somewhat older main memory, I can only say we are nearing a paradigm shift in desktop and enterprise computing. I dub thee the All Solid State (ASS) era, where no magnetic or rotating mechanical media enter into the equation. We run on silicon semiconductors from top to bottom, no giant magneto-resistive technology necessary. Even our removable media are flash-based USB drives we put in our pockets and walk around with on key chains.
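    The comparison is easy to check with a little arithmetic: SATA's 6 Gb/s line rate loses a fifth of its bits to 8b/10b encoding, while PC-133 SDRAM moves 8 bytes per 133 MHz clock tick.

    ```python
    # SATA 6 Gb/s payload bandwidth vs. PC-133 SDRAM peak bandwidth.
    sata_line_rate_gbps = 6.0
    sata_payload_mb_s = sata_line_rate_gbps * 1000 / 10   # 8b/10b coding: 10 bits per byte
    # => about 600 MB/s of usable bandwidth on the interface

    pc133_mhz = 133
    pc133_bus_bytes = 8                                   # 64-bit SDRAM bus
    pc133_mb_s = pc133_mhz * pc133_bus_bytes              # ~1064 MB/s theoretical peak

    print(f"SATA 6Gb/s payload ceiling: {sata_payload_mb_s:.0f} MB/s")
    print(f"PC-133 SDRAM peak:          {pc133_mb_s} MB/s")
    ```

    A drive sustaining 550 MB/s really is knocking on the door of that old main-memory figure, which is the commenter's point.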

  • CES 2011: Corsair Performance Series 3 SSD Benchmarks – AnandTech :: Your Source for Hardware Analysis and News

    Image representing SandForce, via CrunchBase

    The next wave of high end consumer SSDs will begin shipping this month, and I believe Corsair may be the first out the gate. Micron will follow shortly with its C400 and then we’ll likely see a third generation offering from Intel before eventually getting final hardware based on SandForce’s SF-2000 controllers in May.

    via CES 2011: Corsair Performance Series 3 SSD Benchmarks – AnandTech :: Your Source for Hardware Analysis and News.

    This just in from the Consumer Electronics Show in Las Vegas, via Anandtech: the SandForce SF-2000 is scheduled to drop in May of this year. Get ready for a huge upsurge in releases of new SSD products attempting to best one another in the sustained read/write category. And I'm not talking just about bare SSDs but PCIe-based cards with SSD RAIDs embedded on them, communicating through a PCIe 2.0 x8 interface. I'm going to take a wild guess and say you will see products fitting this description easily hitting 700 to 900 MB/s sustained read and write. Prices will be at the top end of the scale, as even the current shipping products all fall into the $1200 to $1500 range. Expect the top end to be LSI-based products for $15,000, or third-party OEM manufacturers who might be willing to sell a fully configured 1TByte card for maybe ~$2,000. After the SF-2000 is released, I don't know how long it will take designers to prototype and release to manufacturing new designs incorporating this top-of-the-line flash memory controller. It's possible that as the top end continues to increase in performance, current shipping products will start to fall in price to clear out the older, lower-performance designs.
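    To sanity-check the 700 to 900 MB/s guess against the slot itself, here is the lane budget for a PCIe 2.0 x8 link (5 GT/s per lane with 8b/10b encoding); the exact generation and lane count are my assumptions about these upcoming cards, not a published spec.

    ```python
    # PCIe 2.0 lane budget: 5 GT/s per lane with 8b/10b encoding
    # gives roughly 500 MB/s of payload per lane in each direction.
    per_lane_mb_s = 5.0e9 / 10 / 1e6   # 500 MB/s after encoding overhead
    lanes = 8

    link_mb_s = per_lane_mb_s * lanes
    print(f"PCIe 2.0 x8 payload ceiling: {link_mb_s:.0f} MB/s per direction")
    print("A 700-900 MB/s SSD card would use well under a quarter of that.")
    ```

    In other words, the interface is not the limiting factor for this guess; the flash controllers and the RAID logic on the card are.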

  • Micron’s ClearNAND: 25nm + ECC

    Image representing Intel as depicted in CrunchBase
    Intel is a partner with Micron

    Micron’s ClearNAND: 25nm + ECC, Combats Increasing Error Rates – AnandTech

    This is a really good technical article on the attempts by Micron and Intel to fix read/write errors in their solid state memory based on flash memory chips. Each revision of their design and manufacturing materials shrinks the individual memory cells on the chip; however, as the design rules (the distance between the wires) shrink, random errors increase. The materials themselves also suffer fatigue with each program and erase cycle. That fatigue is due in no small part (pun intended) to the size, specifically the thickness, of some layers in the sandwich that makes up a flash memory cell; thinner materials just wear out quicker. Typically this wearing out was addressed by adding extra unused memory cells that could act as spares whenever a cell finally gave up the ghost and stopped working altogether. Another technique is to spread writes over an area larger (sometimes 23% larger) than the capacity printed on the outside of the package. This is called wear leveling, and it's like rotating your tires to ensure they don't develop bare patches too quickly.
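    The tire-rotation analogy maps onto a very simple allocation policy: always write to the block that has been erased the fewest times. The sketch below is a toy version of that idea, not any vendor's actual firmware.

    ```python
    # Toy wear-leveling allocator: always pick the erase block with the
    # lowest erase count, so wear spreads evenly instead of hammering one block.
    # Illustration only, not real SSD firmware logic.

    class WearLeveler:
        def __init__(self, num_blocks):
            self.erase_counts = [0] * num_blocks

        def pick_block(self):
            """Return the index of the least-worn block and charge it one erase."""
            block = min(range(len(self.erase_counts)), key=self.erase_counts.__getitem__)
            self.erase_counts[block] += 1
            return block

    if __name__ == "__main__":
        wl = WearLeveler(num_blocks=8)
        for _ in range(100):
            wl.pick_block()
        # Counts stay within one erase of each other instead of piling onto block 0.
        print(wl.erase_counts)
    ```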

    All these techniques will only go so far as the sizes and thicknesses continue to shrink. So, taking a chapter out of the bad old days of computing, we are back to Error Correcting Codes, or ECC. When memory errors were common and you needed to guarantee your electronic logic wasn't creating spontaneous errors, extra bits called parity bits would be woven into every operation to ensure a bit didn't accidentally flip from a 1 to a 0. ECC memory is still widely used in data center computers that need to guarantee bits don't get spontaneously flipped by, say, a stray cosmic ray raining down on us. Now ECC is becoming the next tool, after spare memory cells and wear leveling, to ensure flash memory can continue to grow smaller and still be reliable.
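    For a feel of how a handful of extra bits can both detect and repair a flipped bit, here is a classic Hamming(7,4) toy example. Real NAND controllers use much stronger codes (BCH and the like) over far larger blocks, so treat this purely as an illustration of the principle, not of Micron's or SandForce's actual ECC.

    ```python
    # Hamming(7,4): 4 data bits protected by 3 parity bits. Any single flipped
    # bit in the 7-bit codeword can be located and corrected. Real flash ECC
    # works on much larger blocks, but the principle is the same.

    def encode(d1, d2, d3, d4):
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]      # codeword positions 1..7

    def correct(codeword):
        c = list(codeword)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]           # parity over positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]           # parity over positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]           # parity over positions 4,5,6,7
        syndrome = s1 + (s2 << 1) + (s3 << 2)    # 0 = clean, else 1-based error position
        if syndrome:
            c[syndrome - 1] ^= 1                 # flip the bad bit back
        return c, (c[2], c[4], c[5], c[6])       # corrected codeword and data bits

    if __name__ == "__main__":
        cw = encode(1, 0, 1, 1)
        cw[4] ^= 1                               # simulate a stray bit flip
        fixed, data = correct(cw)
        print("recovered data bits:", data)      # -> (1, 0, 1, 1)
    ```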

    There are two methods in operation today. The first is to build the ECC logic into the flash memory modules themselves. This raises the cost of the chip but lowers the cost to the manufacturer of a solid state disk or MP3 player: they don't have to add error correction after the fact or buy another part and integrate it into their design. The other, more state-of-the-art method is to build the error correction into the flash memory controller (as opposed to the memory chips), providing much more leeway in how it can be implemented and updated over time. As it turns out, the premier designer of flash memory controllers, SandForce, already does this in the current shipping version of its SF-1200 controller. SandForce still has two more advanced controllers yet to hit the market, so it is only going to get stronger given it has already built ECC into its current shipping product.

    Which way the market goes will depend on how low the target price is for the final shipping product. Low-margin, high-volume goods will most likely skip the extra error correction and take their chances. Higher-end goods may adopt the embedded ECC from Micron and Intel. Top-of-the-line data center purchasers will not stray far from the cream of the crop, the high-margin SandForce controllers, as they still provide great performance and value even in their early-generation products.

  • 450mm chip wafers | Electronista

    Image: silicon wafers from 2 inches to 8 inches in diameter, via Wikipedia

    Intel factory to make first 450mm chip wafers | Electronista.

    Being a student of the history of technology, I know the silicon semiconductor industry has been able to scale production according to Moore's Law. But apart from the advances in how small the transistors can be made (the real basis of Moore's Law), the other scaling factor has been the size of the wafers. Back in the old days silicon crystals had to be drawn out of a furnace at a very even, steady rate, which forced them to be thin cylinders 1-2″ in diameter. As techniques improved (including a neat trick where the crystal was re-melted to purify it), the crystals grew in diameter to a nice 4″ size that helped bring down costs. Then came the big migrations to 6″ wafers, then 8″, and now the 300mm wafer (roughly 12″). Intel is still on its freight train to bring down costs further by moving up to the next wafer size (450mm) while still shrinking the parts (down to an unbelievably skinny 22nm). As the wafers grow, the cost of the processing equipment goes up, and so does the cost of the whole production facility. The last big price point for a new production fab for Intel was $2 billion. There may be multiple production lines in that fab, but you always needed that money upfront in order to be competitive. And Intel was more than competitive: it could put three lines into production in three years (blowing the competition out of the water for a while) and make things very difficult for the rest of the industry.
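    The economics of the 300mm-to-450mm move come straight from the geometry: usable area grows with the square of the diameter, so a 450mm wafer carries roughly 2.25 times the silicon of a 300mm one, hopefully for much less than 2.25 times the processing cost. A quick calculation:

    ```python
    # Usable silicon scales with wafer area, i.e. with the square of the diameter.
    import math

    def wafer_area_cm2(diameter_mm):
        r_cm = diameter_mm / 10 / 2
        return math.pi * r_cm ** 2

    for d in (100, 150, 200, 300, 450):   # 4", 6", 8", 300mm, 450mm generations
        print(f"{d:>3} mm wafer: {wafer_area_cm2(d):7.0f} cm^2")

    print("450mm vs 300mm area ratio:", round(wafer_area_cm2(450) / wafer_area_cm2(300), 2))
    ```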

    Where things will really shake up is on the flash memory production lines. The design rules for current flash memory chips at Intel are right around 22nm, and Intel and Samsung are both trying to shrink the feature sizes of all the circuits on their single- and multi-level cell flash chips. Add to this the stacking of chips into super sandwiches, and you find they can glue together eight of their 8GByte chips, making a single very thin 64GByte memory package. That package is then mated to a memory controller and voila, the iPhone suddenly hits 64GBytes of storage for all your apps and MP4s from iTunes. Similarly, on the hard drive end of the scale, things will also wildly improve. Solid state disk capacities should keep creeping upward (beyond today's top-of-the-line 512GByte SSDs), as will PCI Express based storage devices (probably doubling in capacity to 2 TBytes), once 450mm wafers take hold across the semiconductor industry. So it's going to be a big deal if Chinese, Japanese and American companies get on the large silicon wafer bandwagon.

  • Hitachi GST ends STEC’s monopoly • The Register

    Hitachi GST flash drives are hitting the streets and, at last, ending STEC’s monopoly in the supply of Fibre Channel interface SSDs.

    EMC startled the enterprise storage array world by embracing STEC SSDs (solid state drives) in its arrays last year as a way of dramatically lowering the latency for access to the most important data in the arrays. It has subsequently delivered FAST automated data movement across different tiers of storage in its arrays, ensuring that sysadmins don't have to be involved in managing data movement at a tedious and time-consuming level.

    via Hitachi GST ends STEC’s monopoly • The Register.

    In the computer world, the data center is often the measure of all things in terms of speed and performance. Time was, the disk drive interface of choice was SCSI, then its higher-speed evolutions, Fast/Wide and Ultra SCSI. Then a new interface arrived that used fibre optic cables to move storage out of the computer into a separate box that managed all the hard drives in one spot, called a storage array. The new connector/cable combo was named Fibre Channel, and it was fast, fast, fast. It became the brand name of choice for every vendor trying to sell more and more hard drives into the data center. Newer versions of Fibre Channel came to market, each one faster than the last, and eventually Fibre Channel was built right into the hard drives themselves, so you could be assured of native Fibre Channel speed from one end to the other. But Fibre Channel has always been prohibitively expensive, even though a lot of it has been sold over the years; volume has not brought down the price one bit in the time it has been the most widely deployed disk drive interface. A few competitors have cropped up: the old Parallel ATA and Serial ATA drives from the desktop market have attempted to compete, and a newer SCSI interface called Serial Attached SCSI is now seeing wider acceptance. However, the old guard who are mentally and emotionally attached to their favorite Fibre Channel drive interface are not about to give up, even as spinning disk speeds have been trumped by the almighty flash-based solid state drive (SSD). And a company named STEC knew it could sell a lot of SSDs if only someone would put a Fibre Channel interface on the circuit board, allaying any fears of the Fibre Channel adherents that they needed to evolve and change.

    Yes, it's true: STEC was the only game in town for what I consider the legacy Fibre Channel interface used by old-line storage array manufacturers. They have sold tons of drives to third parties who package them up into turnkey 'Enterprise' solutions of drive arrays and cache controllers (all of which just speed things up). And being the first-est with the most-est is a good business strategy until the second source for your product comes online; it's always a race to sell as much as you can before that deadline hits and everyone rushes to the second source. Here now is Hitachi's announcement that it is manufacturing an SSD with a Fibre Channel interface onboard for enterprise data center customers.

  • LSI Launches $11,500 SSD, Crushes Other SSDs

    Tuesday LSI Corp announced the WarpDrive SLP-300 PCIe-based acceleration card, offering 300 GB of SLC solid state storage and performance up to 240,000 sustained IOPS. It also delivers I/O performance equal to hundreds of mechanical hard drives while consuming less than 25W of power–all for a meaty $11,500 USD.

    via LSI Launches $11,500 SSD, Crushes Other SSDs.

    This is the cost of entry for anyone working on an enterprise-level project: you cannot participate unless you can cross the threshold of a PCIe card costing $11,500 USD. This is the first time I have seen an actual price quote on one of these cards that swims in the data center consulting and provisioning market. Fusion-io cannot be too far off this price when its cards aren't sold as part of a larger project RFP. I am somewhat stunned at the price premium, but LSI is a top engineering firm and can certainly design its own custom silicon to get top speed out of just about any commercial off-the-shelf flash memory chips. I am impressed they went with the eight-lane (x8) PCI Express interface; I'm guessing that's a requirement for most server buyers, whereas x4 is aimed at the desktop market. I still don't see any x16 interfaces yet (that's the interface most desktops use for their graphics cards from AMD and nVidia). One more thing that makes this a premium offering is the choice of single-level cell flash memory chips, for the ultimate in speed and reliability, along with the Serial Attached SCSI (SAS) interface onboard the PCIe card itself. Desktop models opt for SATA to PCI-X to PCIe bridge chips, forcing your data to be translated and re-ordered multiple times. I have a feeling SAS bridges to PCIe at the full x8 interface speed, and that is the key to getting past 1,000 MB/sec. for reads and writes. This part is quoted as reaching ~1,400 MB/sec., and other than some very expensive turnkey boxes from manufacturers like Violin, this is a great user-installable part for getting the benefit of a really fast SSD array on a PCIe card.
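    One way to square the 240,000 IOPS figure with the ~1,400 MB/s figure is to look at transfer size: small random I/Os are limited by operations per second, while big sequential transfers are limited by raw bandwidth. A rough sketch of the arithmetic follows; the 4KB random block size is an assumption on my part, since the announcement doesn't state it.

    ```python
    # Relating IOPS to throughput for small random I/O vs. large sequential I/O.
    # The 4KB random block size is an assumption for illustration.
    iops = 240_000
    random_block_kb = 4
    random_mb_s = iops * random_block_kb / 1024          # roughly 940 MB/s of 4KB traffic

    sequential_mb_s = 1_400                              # quoted large-transfer figure
    pcie2_x8_ceiling_mb_s = 8 * 500                      # ~4,000 MB/s of PCIe 2.0 x8 payload

    print(f"240K IOPS at 4KB each : {random_mb_s:.0f} MB/s")
    print(f"quoted sequential rate: {sequential_mb_s} MB/s")
    print(f"PCIe 2.0 x8 ceiling   : {pcie2_x8_ceiling_mb_s} MB/s")
    ```

    Either way, the x8 slot has plenty of headroom left, which supports the idea that the SAS-to-PCIe path on the card, not the bus, is what sets the ceiling.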

  • A Conversation with Ed Catmull – ACM Queue

    EC: Here are the things I would say in support of that. One of them, which I think is really important—and this is true especially of the elementary schools—is that training in drawing is teaching people to observe.
    PH: Which is what you want in scientists, right?
    EC: That's right. Or doctors or lawyers. You want people who are observant. I think most people were not trained under artists, so they have an incorrect image of what an artist actually does. There's a complete disconnect with what they do. But there are places where this understanding comes across, such as in that famous book by Betty Edwards [Drawing on the Right Side of the Brain].

    via A Conversation with Ed Catmull – ACM Queue.

    This interview is with a computer scientist named Ed Catmull. In the time since Ed Catmull entered the field, we've gone from computers crunching numbers like desktop calculators to computers producing full 3D animated films. Ed Catmull's single most important goal was to create an animated film using a computer. He eventually accomplished that and more once he helped found Pixar. All of his research and academic work was focused on that one goal.

    I'm always surprised to see which references or influences people cite in interviews; in fact, I'm really encouraged. It was about 1988 or so when I picked up a copy of Betty Edwards's book that my mom had and started reading it and doing some of the exercises. Stranger still, I went back to college and majored in art (not drawing, but photography). So I think I understand exactly what Ed Catmull means when he talks about being observant. In every job I've had, computer-related or otherwise, that ability to be observant just doesn't exist in a large number of people. Eventually people begin to ask how I know all this stuff and when I learned it. Most times, the things they are most impressed by are things like noticing something and trying a different strategy to fix a problem. The proof is that I can do this with things I am unfamiliar with and usually make some headway toward fixing them; whether the thing is mechanical or computer-related doesn't matter. I make good guesses, and it's not because I'm an expert in anything, I merely notice things. That's all it is.

    So maybe everyone should read and go through Betty Edwards’s book Drawing on the Right Side of the Brain. If nothing else it might make you feel a little dislocated and uncomfortable. It might shake you up, and make you question some pre-conceived notions about yourself like, the feeling you can’t draw or you are not good at art. I think with practice, anyone can draw and with practice anyone can become observant.

  • TidBITS Opinion: A Eulogy for the Xserve: May It Rack in Peace

    Image representing Apple, via CrunchBase

    Apple’s Xserve was born in the spring of 2002 and is scheduled to die in the winter of 2011, and I now step up before its mourners to speak the eulogy for Apple’s maligned and misunderstood server product.

    via TidBITS Opinion: A Eulogy for the Xserve: May It Rack in Peace.

    Chuck Goolsbee's eulogy is spot on, and every point rings true even in my limited experience. I've purchased two different Xserves since they were introduced: one is a 2nd-generation G4 model, the other a 2006 Intel model (thankfully I skipped the G5 altogether). Other than a weird bug in the Intel-based Xserve (a strange blue video screen), there have been no bumps or quirks to report. I agree the form factor of the housing is way too long; even in the rack I used (a discarded Sun Microsystems unit), the thing was really inelegant. The drive bays are another sore point for me. I have dearly wanted to rearrange, reconfigure, and upgrade the drive bays on both the old and newer Xserve, but the expense of acquiring new units was prohibitive at best, and they went out of manufacture very quickly after being introduced. If you neglected to buy your Xserve fully configured with the maximum storage available when it shipped, you were more or less left to fend for yourself. You could troll eBay and bulletin boards to score a bona fide Apple drive bay, but the supply was so limited that prices climbed and a black market formed. The Xserve RAID didn't help things either, as drive bays were not consistently swappable between the Xserve and the Xserve RAID box. Given the limited time most sysadmins have to research purchases like this when upgrading an existing machine, it was a total disaster, a big fail, and not at all surprising.

    I will continue to run my Xserve units until the drives or power supplies fail. That could happen any day, any time, and hopefully I will have enough warning to get a new Mac mini server in to replace them. Until then I, along with Chuck Goolsbee and the rest of the Xserve sysadmins, will wonder what could have been.