Category: computers

Interesting pre-announced products that may or may not ship, and may or may not have an impact on desktop/network computing

  • AppleInsider | Rumor: Apple investigating USB 3.0 for Macs ahead of Intel

    USB Connector

    A new report claims Apple has continued to investigate implementing USB 3.0 in its Mac computers independent of Intel's plans to eventually support USB 3.0 at the chipset level.

    via AppleInsider | Rumor: Apple investigating USB 3.0 for Macs ahead of Intel.

    This is interesting to read. I have not paid much attention to USB 3.0 because of how slowly it has been adopted by the PC manufacturing world. But in the past Apple has been quicker to adopt some mainstream technologies than its PC manufacturing counterparts. The value add increases as more and more devices also adopt the new interface, namely anything that runs iOS. The surest sign there's a move going on will be whether or not there is USB 3.0 support in iOS 5.x and whether or not there is hardware support in the next revision of the iPhone.

    And now it appears Apple is releasing two iPhones, a minor iPhone 4 update and a new iPhone 5, at roughly the same time. Given reports that the new iPhone 5 has a lot of RAM installed, I'm curious how much of the storage is NAND-based Flash memory. Will we see something on the order of 64GB again, or more, this time around when the new phones are released? The upshot is for instances where you tether your device to sync it to the Mac: with a USB 3.0 compliant interface, the file transfer speed will make the chore of pulling out the cables worth the effort. However, the all-encompassing sharing of data all the time between Apple devices may make the whole adoption of USB 3.0 seem less necessary if every device can find its partner and sync over the airwaves instead of over iPod connectors.

    Still, it would be nice to have a dedicated high speed cable for the inevitable external hard drive connection necessary in these days of smaller laptops like the MacBook Air or the Mac mini. Less space internally means these devices will need a supplement to the internal hard drive, one that even Apple's iCloud cannot fulfill, especially considering the size of the video files coming off each new generation of HD video cameras. I don't care what Apple says, but 250GB of AVCHD files is going to sync very, very slowly. All the more reason to adopt USB 3.0 as soon as possible.
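    Just to put a rough number on that (my own back-of-the-envelope figuring, not anything from the AppleInsider piece), here is a quick sketch of how long 250GB would take at assumed real-world effective throughputs for USB 2.0, USB 3.0 and 802.11n Wi-Fi. The rates are rough assumptions, not spec-sheet maximums.

    ```python
    # Back-of-the-envelope sync times for 250GB of AVCHD files.
    # The throughput figures are rough real-world assumptions, not spec maximums.
    payload_gb = 250

    throughput_mb_s = {
        "USB 2.0 (~35 MB/s effective)": 35,
        "USB 3.0 (~300 MB/s effective)": 300,
        "802.11n Wi-Fi (~15 MB/s effective)": 15,
    }

    for link, rate in throughput_mb_s.items():
        hours = (payload_gb * 1000) / rate / 3600
        print(f"{link}: about {hours:.1f} hours")
    ```

    Under those assumptions the over-the-air option is a multi-hour affair, while a USB 3.0 cable gets the same job done in well under half an hour.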

  • Single-chip DIMM offers low-power replacement for sticks of RAM | ExtremeTech

    A 256Kx4 Dynamic RAM chip on an early PC memor...
    Image via Wikipedia

    Invensas, a subsidiary of chip microelectronics company Tessera, has discovered a way of stacking multiple DRAM chips on top of each other. This process, called multi-die face-down packaging, or xFD for short, massively increases memory density, reduces power consumption, and should pave the way for faster and more efficient memory chips.

    via Single-chip DIMM offers low-power replacement for sticks of RAM | ExtremeTech.

    Who says there’s no such thing as progress? Apart from the DDR memory bus data rates moving from DDR-3 to DDR-4 soon, what have you read about that was significantly different, much less better, than the first generation of DDR DIMMs from years ago? Chip stacking is de rigueur for manufacturers of Flash memory, especially in mobile devices with limited real estate on the motherboards. That packaging has flowed back into the computer market very handily and has led to small form factors across Flash memory devices. Whether it be thumb drives, aftermarket 2.5″ laptop solid state disks, or Flash embedded on an mSATA module, everyone’s benefiting equally.

    Whither the stacking of RAM modules? I know there have been some efforts to do this, again for the mobile device market, but any large-scale flow back into the general computing market has been hard to see. I’m hoping this Invensas announcement eventually becomes a real shipping product and not an attempt to stake a claim on intellectual property that will take the form of lawsuits against current memory designers and manufacturers. Stacking is the way to go. Even if it can never be used in, say, a CPU, I would think clock speed and power-saving requirements on RAM modules might be sufficient to allow some stacking to occur. And if memory access speeds improve at the same time, so much the better.

  • Angelbird Now Shipping SSD RAID Card for 800 MB/s

    If you want more speed, then you will have to look to PCI-Express for the answer. Austrian-based Angelbird has opened its online storefront with its Wings add-in card and SSDs.

    via Angelbird Now Shipping SSD RAID Card for 800 MB/s.

    More than a year after announcing it, Angelbird has designed and manufactured a new PCIe flash card, the design of which is fully expandable over time depending on your budget. Fusion-io has a few ‘expandable’ cards in its inventory too, but the price class of Fusion-io is much higher than the consumer-level Angelbird product. So if you cannot afford to build a 1TB flash-based PCIe card, do not worry: buy what you can and outfit it later as your budget allows. Now that’s something any gamer fanboy or desktop enthusiast can get behind.

    Angelbird does warn in advance that the power demands of typical 2.5″ SATA flash modules are higher than what the PCIe bus can typically provide. They recommend using their own memory modules to add onto their base level PCIe card. Up until I read those recommendations I had forgotten some of the limitations and workarounds graphics card manufacturers typically use. These have become so routine that there are now two or three extra power taps provided even by typical desktop manufacturers for their desktop machines, all to accommodate the extra power required by today’s display adapters. It makes me wonder if Angelbird could do a revision of the base level PCIe card with a little 4-pin power input or something similar. It doesn’t need another 150 watts; it’s going to be closer to 20 watts for this type of device, I think. I wish Angelbird well and I hope sales start strong so they can sell out their first production run.
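    The 20-watt guess is just my own rough arithmetic, sketched below. The per-drive draw, controller overhead and slot budget are all assumptions on my part for illustration, not figures from Angelbird.

    ```python
    # Rough power-budget arithmetic for a PCIe SSD carrier card.
    # All figures are assumptions for illustration, not Angelbird's specs.
    watts_per_ssd = 6        # assumed peak draw of one 2.5" SATA SSD
    ssd_count = 4            # a hypothetical fully populated card
    controller_watts = 4     # assumed RAID/bridge controller draw
    slot_budget = 25         # commonly cited limit for a non-graphics PCIe slot

    total = watts_per_ssd * ssd_count + controller_watts
    print(f"Estimated card draw: {total} W vs. ~{slot_budget} W from the slot")
    # -> roughly 28 W: a little more than the slot alone provides, but nowhere
    #    near the 150 W class of auxiliary power that graphics cards need,
    #    so a small 4-pin input would cover the gap.
    ```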

  • David May, parallel processing pioneer • reghardware

    INMOS T800 Transputer
    Image via Wikipedia

    The key idea was to create a component that could be scaled from use as a single embedded chip in dedicated devices like a TV set-top box, all the way up to a vast supercomputer built from a huge array of interconnected Transputers.

    Connect them up and you had, what was, for its era, a hugely powerful system, able to render Mandelbrot Set images and even do ray tracing in real time – a complex computing task only now coming into the reach of the latest GPUs, but solved by British boffins 30-odd years ago.

    via David May, parallel processing pioneer • reghardware.

    I remember the Transputer. I remember seeing ISA-based add-on cards for desktop computers back in the early 1980s. They would advertise in the back of the popular computer technology magazines of the day. And while it seemed really mysterious what you could do with a Transputer, the price premium to buy those boards made you realize it must have been pretty magical.

    Most recently, while I was attending a workshop on Open Source software, I met a couple of former employees of a famous manufacturer of camera film. In their research labs these guys used to build custom machines using arrays of Transputers to speed up image processing tasks inside the products they were developing. So knowing that there are even denser architectures today using chips like Tilera, Intel Atom and ARM absolutely blows them away. The price/performance ratio of those old Transputer arrays doesn’t come close.

    Software was probably the biggest point of friction, in that the tools to integrate the Transputer into an overall design required another level of expertise. That is true too of the General Purpose Graphics Processing Unit (GPGPU) that nVidia championed and now markets with its Tesla product line. And the Chinese have created a hybrid supercomputer mating Tesla boards up with commodity CPUs. It’s too bad that the economics of designing and producing the Transputer didn’t scale over time (the way they have for Intel, as a comparison). Clock speeds fell behind too, which allowed general purpose microprocessors to spend their extra clock cycles performing the same calculations, only faster. This is also the advantage that RISC chips had, until they couldn’t overcome the performance increases designed in by Intel.

  • From Big Data to NoSQL: Part 2 (from ReadWriteWeb)

    Image representing ReadWriteWeb as depicted in...
    Image via CrunchBase

    In this section we’ll talk about data warehouses, ACID compliance, distributed databases and more.

    via From Big Data to NoSQL: The ReadWriteWeb Guide to Data Terminology Part 2.

    After linking to Part 1 of this series of articles on ReadWriteWeb (all the way back in May), today there’s yet more terminology and info for the enterprising, goal-oriented technologist. Again, there’s some good info and a diagram to explain some of the concepts and what makes these things different from what we are already using today. I particularly liked finding out about the performance benefits of these different architectures versus the tables, columns and rows of traditional relational-algebra-driven SQL databases.

    Where I work we have lots of historic data kept on file in a Data Warehouse. This typically gets used to generate reports to show compliance, meet regulations and continue to receive government grants. For the more enterprising Information Analyst it also provides a source of historic data for creating forecasts modeled on past activity. For the Data Scientist it provides an opportunity to discover things people didn’t know existed within the data (Data Mining). But now that things are becoming more ‘realtime’, there’s a call for analyzing data streams as they occur instead of after the fact, which is where Data Warehouses and Data Mining live. A tiny sketch of that contrast follows.
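    To make the batch-versus-stream contrast concrete, here is a minimal sketch of my own (not from the ReadWriteWeb article). The event records and the "page" field are hypothetical; the first function answers a question after the fact over stored events, the second keeps a running answer as each event arrives.

    ```python
    # Minimal illustration of batch vs. streaming analysis.
    # The event records and the "page" field are hypothetical.
    from collections import Counter
    from typing import Dict, Iterable

    def batch_count(events: list) -> Dict[str, int]:
        """Data-warehouse style: scan everything already stored, then report."""
        return dict(Counter(e["page"] for e in events))

    def streaming_count(events: Iterable[dict]):
        """Stream style: update the answer as each event arrives."""
        running: Counter = Counter()
        for e in events:
            running[e["page"]] += 1
            yield dict(running)          # the current answer at this moment

    events = [{"page": "home"}, {"page": "pricing"}, {"page": "home"}]
    print(batch_count(events))           # after-the-fact report
    for snapshot in streaming_count(events):
        print(snapshot)                  # evolving realtime view
    ```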

  • OCZ samples twin-core ARM SSD controller • The Register

    OCZ Technology
    Image via Wikipedia

    OCZ says it is available for evaluation now by OEMs and, we presume, OCZ will be using it in its own flash products. We're looking at 1TB SSDs using TLC flash, shipping sequential data out at 500MB/sec which boot quickly, and could be combined to provide multi-TB flash data stores. Parallelising data access would provide multi-GB/sec I/O. The flash future looks bright.

    via OCZ samples twin-core ARM SSD controller • The Register.

    Who knew pairing an ARM core with the drive electronics for a Flash-based SSD could be so successful? Not only are ARM chips driving the CPUs in our handheld devices, they are now becoming the SSD drive controllers too! If OCZ is able to create these drive controllers with good yields (say 70% on the first run), then they will hopefully give themselves a pricing advantage and get a higher profit margin per device sold. This is assuming they don’t have to pay royalties for the SandForce drive controller on every device they ship.

    If OCZ had drawn up its own drive controller from scratch, I would be surprised. However, since they have acquired Indilinx, it seems they are making good on the promise held by Indilinx’s current crop of drive controllers. Let’s just hope they are able to match the performance of SandForce at the same price points as well. Otherwise it’s nothing more than a kind of patent machine that will allow OCZ to wage lawsuits against competitors over the Intellectual Property it acquired through the Indilinx purchase. And we have seen too much of that recently with Apple’s secret bid for Nortel’s patent pool and Google’s acquisition of Motorola.

  • Tilera routs Intel, AMD in Facebook bakeoff • The Register

    Structure of the TILE64 processor from Tilera

    Facebook lined up the Tilera-based Quanta servers against a number of different server configurations making use of Intel's four-core Xeon L5520 running at 2.27GHz and eight-core Opteron 6128 HE processors running at 2GHz. Both of these x64 chips are low-voltage, low power variants. Facebook ran the tests on single-socket 1U rack servers with 32GB and on dual-socket 1U rack servers with 64GB. All three machines ran CentOS Linux with the 2.6.33 kernel and Memcached 1.2.3h.

    via Tilera routs Intel, AMD in Facebook bakeoff • The Register.

    You will definitely want to read this whole story as presented by El Reg. They have a few graphs displaying the performance of the Tilera-based Quanta data cloud in a box versus the Intel server rack. And let me tell you, on certain very specific workloads, like web caching using Memcached, I declare advantage Tilera. No doubt data center managers need to pay attention to this and get some more evidence to back up this initial white paper from Facebook, but this is big, big news. And all one need do, apart from tuning the software for the chipset, is add a few PCIe-based SSDs or a TMS RamSan, and you have what could theoretically be the fastest web performance possible. Even at this level of performance, there’s still room to grow, I think, on the hard drive storage front.

    What I would hope to see in future is Facebook doing an exhaustive test of the Quanta SQ-2 product versus Calxeda (ARM cloud in a box) and the SeaMicro SM-10000×64 (64-bit Intel Atom cloud in a box). It would prove an interesting research project just to see how big a role chipsets, chip architectures and instruction sets play in optimizing each for a particular style and category of data center workload. I know I will be waiting and watching.

  • History of Sage

    A screenshot of Sagemath working.
    Image via Wikipedia

    The Sage Project Webpage http://www.sagemath.org/

    Sage is mathematical software, very much in the same vein as MATLAB, MAGMA, Maple, and Mathematica. Unlike these systems, every component of Sage is GPL-compatible. The interpretative language of Sage is Python, a mainstream programming language. Use Sage for studying a huge range of mathematics, including algebra, calculus, elementary to very advanced number theory, cryptography, numerical computation, commutative algebra, group theory, combinatorics, graph theory, and exact linear algebra.
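    To make the quoted description a little more tangible, here is a minimal sketch of the kind of session it is talking about. This is my own illustration; it assumes a working Sage installation (run at the `sage` prompt or under Sage's bundled Python, where `sage.all` is importable).

    ```python
    # A minimal sketch of the kind of work Sage handles; assumes a working
    # Sage installation (run at the `sage` prompt or under Sage's Python).
    from sage.all import factor, EllipticCurve, matrix, QQ

    print(factor(2**64 - 1))          # exact integer factorization (number theory)
    E = EllipticCurve('37a')          # a classic example curve, by Cremona label
    print(E.rank())                   # rank of the curve
    M = matrix(QQ, [[2, 1], [1, 2]])  # exact linear algebra over the rationals
    print(M.eigenvalues())            # eigenvalues 1 and 3, computed exactly
    ```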

    Explanation of what Sage does by the original author William Stein (video; long, roughly 50 minutes).

    Original Developer http://wstein.org/ and his history of Sage mathematical software development. Wiki listing http://wiki.sagemath.org/ with a list of participating committers. Discussion lists for developers: mostly done through Google Groups with associated RSS feeds. Mercurial Repository (start date Sat Feb 11 01:13:08 2006); Gonzalo Tornaria seems to have loaded the project in at this point. Current list of source code in Trac with a listing of committers for the most recent release of Sage (4.7).

    • William Stein (wstein) is still very involved based on the frequency of commits
    • Michael Abshoff (mabs): Ohloh has him ranked second only to William Stein in commits and time on the project. He has now left the project according to the Trac log.
    • Jeroen Demeyer (jdemeyer) commits a lot
    • J.H. Palmieri (palmieri) has done a number of tutorials and documentation; he’s on the IRC channel
    • Minh Van Nguyen (nguyenminh2) has done some tutorials and documentation and work on the Categories module. He also appears to be the sysadmin of the Wiki
    • Mike Hansen (mhansen) is on the IRC channel irc.freenode.net#sagemath and is a big contributor
    • Robert Bradshaw (robertwb) has done some very recent commits

    Changelog for the most recent release (4.7) of Sage. Moderators of irc.freenode.net#sagemath: Keshav Kini (who maintains the Ohloh info) & schilly@boxen.math.washington.edu. Big milestone release of version 4.7, with tickets listed here based on modules: Click Here. And the Ohloh listing of top contributors to the project. There’s an active developer and end-user community. Workshops are tracked here; Sage Days workshops tend to be hackfests for interested parties. But more importantly, developers can read up on this page about how to get started and what the process is as a Sage developer.

    Further questions need to be considered. Look at the source repository and the developer blogs and ask the following questions (a quick commit-counting sketch follows the list):

    1. Who approves patches? How many people? (There’s a large number of people responsible for reviewing patches; if I had to guess, it could be 12 in total based on the most recent changelog.)
    2. Who has commit access, and how many people have it?
    3. Who has been involved in the history of the project? (That’s pretty easy to figure out from the Ohloh and Trac websites for Sage.)
    4. Who are the principal contributors, and have they changed over time?
    5. Who are the maintainers?
    6. Who works on the front end (user interface) and who on the back end (processing or server side)?
    7. What have been some of the major bugs/problems/issues that have arisen during development? Who is responsible for quality control and bug repair?
    8. How is the project’s participation trending and why? (It seems to have stabilized after a big peak of 41 contributors about 2 years ago; judging from the Ohloh graph of commits, peak activity was in 2009 and 2010.)
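    For questions like who the principal contributors are, a commit-per-author tally is a decent first cut. Here is a hedged sketch of one way to do it against a local clone of the Sage Mercurial repository; the clone path is a hypothetical placeholder, and the only tooling assumed is the standard `hg` command line.

    ```python
    # One way to answer "who are the principal contributors?": count commits
    # per author in a local clone of the Sage Mercurial repository.
    import subprocess
    from collections import Counter

    log = subprocess.run(
        ["hg", "log", "--template", "{author|person}\n"],
        cwd="/path/to/sage-clone",   # hypothetical local clone location
        capture_output=True, text=True, check=True,
    ).stdout

    counts = Counter(line for line in log.splitlines() if line)
    for author, n in counts.most_common(10):
        print(f"{n:6d}  {author}")
    ```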

    Note that the period the Gource visualization covers starts in 2009; the earliest entry in the Mercurial repository I could find was from 2005. Sage was already a going concern before the Mercurial repository was put on the web, so the visualization doesn’t show the full history of development.

  • First Sungard goes private and now Blackboard

    The buyers include Bain Capital, the Blackstone Group, Goldman Sachs Capital Partners, Kohlberg Kravis Roberts, Providence Equity Partners and Texas Pacific Group. The group is led by Silver Lake Partners. The deal is a leveraged buyout – Sungard will be taken private and its shares removed from Wall Street.

    via Sungard goes private • The Register (posted in CIO, 29th March 2005 10:37 GMT).

    RTTNews – Private equity firm Providence Equity Partners, Inc. agreed Friday to take educational software and systems provider Blackboard, Inc. (BBBB) private for $45 per share in an all-cash deal of $1.64 billion.

    It would appear now that Providence Equity Partners owns two giants in the Higher Ed outsourcing industry: Sungard and Blackboard. What does this mean? Will there be consolidation where there is overlap between the two companies? Will there be attempts to steal customers or to upsell each other’s products?

  • SeaMicro pushes Atom smasher to 768 cores in 10U box • The Register

    Image representing SeaMicro as depicted in Cru...
    Image via CrunchBase

    An original SM10000 server with 512 cores and 1TB of main memory cost $139,000. The bump up to the 64-bit Atom N570 for 512 cores and the same 1TB of memory boosted the price to $165,000. A 768-core, 1.5TB machine using the new 64HD cards will run you $237,000. That's 50 per cent more oomph and memory for 43.6 per cent more money. ®

    via SeaMicro pushes Atom smasher to 768 cores in 10U box • The Register.

    SeaMicro continues to pump out the jams, releasing another updated chassis in less than a year. There is now a grand total of 768 processor cores jammed into that 10U-high box, which leads me to believe they have just eclipsed the compute per rack unit of the Tilera and Calxeda massively parallel cloud servers in a box. But that would be wrong, because Calxeda is making a 2U rack unit hold 120 four-core ARM CPUs. That gives you a grand total of 480 cores in just 2 rack units; multiply that by 5 and you get 2,400 cores in a 10U rack server (the quick arithmetic below lays this out). So advantage Calxeda in total core count; however, let’s also consider software. The Atom, the CPU SeaMicro has chosen all along, is an Intel architecture chip, and an x64 architecture at that. It is the best of both worlds for anyone who already has a big investment in Intel binary-compatible OSes and applications. It is most often the software and its legacy pieces that drive the choice of which processor goes into your data cloud.
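    The density comparison works out like this; the per-box figures are the ones cited above, and the 10U normalization is just my own arithmetic.

    ```python
    # Core-density arithmetic from the post, normalized to a 10U budget.
    # Per-box figures are as cited above; the normalization is my own.
    systems = {
        "SeaMicro SM10000 (768 cores / 10U)": {"cores": 768, "rack_units": 10},
        "Calxeda (120 x 4-core ARM / 2U)": {"cores": 120 * 4, "rack_units": 2},
    }

    for name, s in systems.items():
        per_u = s["cores"] / s["rack_units"]
        print(f"{name}: {per_u:.0f} cores/U, ~{per_u * 10:.0f} cores per 10U")
    # SeaMicro works out to ~77 cores/U (768 in 10U); Calxeda to 240 cores/U
    # (~2,400 in 10U), which is the "advantage Calxeda" conclusion above.
    ```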

    Anyone who has a clean slate to start from might be able to choose between Calxeda and SeaMicro for their applications and infrastructure. And if density and thermal design point per rack unit are very important, Calxeda will suit your needs, I would think. But who knows? Maybe your workflow isn’t as massively parallel as a Calxeda server calls for, and you might have a much lower implementation threshold getting started on an Intel system, so again advantage SeaMicro. A real industry analyst would look at these two competing companies as complementary: different architectures for different workflows.