No. 7. John Ambrose Fleming: Sir John Ambrose Fleming is the inventor of the first vacuum tube. His engineering feat is known as the precursor to electronics — even though the U.S. Supreme Court invalidated his patent.
Until I read this list, I didn’t know who invented the vacuum tube. I did, however, understand its incredible importance, especially as it applied to the early computer industry. After that the transistor took over, but oh, that early time of designing circuits and working on logic! Without any of those historical antecedents we would not have the computers of today. Switching voltages from high to low is the only way to mimic the registers in an adding machine, spinning, counting off one digit at a time. Wiring those tubes up into circuits and creating logic with them was the next big leap in intuition.
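Just to make that leap concrete, here’s a toy sketch (in Python, purely illustrative, nothing Fleming ever built) of how two-state switching elements, whether tubes, relays or transistors, compose into the adding logic I’m describing. All of the names are my own.

```python
# Toy illustration: arithmetic built from nothing but two-state switches.
def nand(a: int, b: int) -> int:
    """One switching element: output goes low only when both inputs are high."""
    return 0 if (a and b) else 1

def xor(a, b):
    # Exclusive-OR built purely from NAND gates
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a, b, carry_in):
    """One column of a ripple-carry adder: two bits plus the incoming carry."""
    partial = xor(a, b)
    total = xor(partial, carry_in)
    # carry_out = (a AND b) OR (partial AND carry_in), expressed with NANDs
    carry_out = nand(nand(a, b), nand(partial, carry_in))
    return total, carry_out

def ripple_add(x_bits, y_bits):
    """Add two little-endian bit lists, the way a register counts up."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 5 + 3 = 8, bits listed least-significant first
print(ripple_add([1, 0, 1, 0], [1, 1, 0, 0]))  # -> [0, 0, 0, 1, 0]
```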
Without the vacuum tube there would be no electrical engineering, no electronics industry and no devices like wireless telegraphs, wireless radio, etc. Everything hinged on this invention. So cheers to John Ambrose Fleming and the vacuum tube, and to being able to apply some kind of useful purpose to what would otherwise have been thought of as a laboratory curiosity, a magic toy for manipulating cathode rays. Somehow Fleming was able to see an application of this technology to a useful end, and the rest, as they say, is history.
The FeTRAMs are similar to state-of-the-art ferroelectric random access memories, FeRAMs, which are in commercial use but represent a relatively small part of the overall semiconductor market. Both use ferroelectric material to store information in a nonvolatile fashion, but unlike FeRAMs, the new technology allows for nondestructive readout, meaning information can be read without losing it.
I’m always pleasantly surprised to read that work is still being done on alternative materials for Random Access Memory (RAM). I had been closely following developments in the category of ferroelectric RAM by folks like Samsung and HP. Very few of those efforts promised enough return on investment to be developed into products, and some notable attempts by big manufacturers were abandoned altogether.
If this research effort can be licensed to a big chip manufacturer and not turned into a form of patent-trolling ammunition, I would feel the effort was not wasted. Too often these days, patented technologies are not used as a means of advancing the art of computer technology. Instead they become a portfolio for a litigator seeking rent on the patented technology.
Given how frequently projects in the alternative DRAM category get abandoned, I’m hoping the compatibility of this chip’s manufacturing process with existing chip-making technology will be a big step forward. A paradigm-shifting memory technology like this might just push us to the next big mountaintop of power conservation, performance and capability, the kind the CPU enjoyed from 1969 to roughly 2005, when chip speeds began to plateau.
In the enterprise segment where 1U and 2U servers are common, PCI Express SSDs are very attractive. You may not always have a ton of 2.5″ drive bays but there’s usually at least one high-bandwidth PCIe slot unused. The RevoDrive family of PCIe SSDs was targeted at the high-end desktop or workstation market, but for an enterprise-specific solution OCZ has its Z-Drive line.
AnandTech is breaking new ground covering some enterprise-level segments of the solid state disk industry. While I doubt he’ll be doing ratings of Violin Memory and Texas Memory Systems gear any time soon, OCZ’s low-end enterprise PCIe cards are at least beginning to approach that territory. We’re talking $10,000 USD and up for anyone who wants to participate, which puts it in the middle to high end of Fusion-io and barely touches the lower end of Violin and TMS, not to mention Virident. Even so, it is still wild to see what kind of architecture and performance optimization one gets for the money. SandForce rules the day at OCZ for anything requiring top write speeds. It’s also interesting to learn about the SandForce 25xx series’ use of supercapacitors, which hold enough reserve power to flush the write caches on a power outage. It’s expensive, but it moves the product up a few notches on the enterprise reliability scale.
I’ve been a fan of Augmented Reality for a while, following the announcements from Layar over the past two years. I’m hoping something more comes out of this work than another channel for selling, advertising and marketing. But innovation always follows where the money is, and artistic, creative pursuits are NOT it. Witness the evolution of Layar from a toolkit into a whole package of brand-loyalty add-ons, ready to be shipped wholesale to any smartphone owner unwitting enough to download a Layar-created app.
The emphasis in this WSJ article, however, is not on how Layar is trying to market itself. Instead it is more worried about how Layar is creating a ‘virtual’ space where metadata is tagged onto a physical location. A Layar Augmented Reality squatter can set up a very mundane virtual T-shirt shop (think Second Life) in the same physical location as a high-class couturier on a high street in London or Paris. What right does anyone have to squat in the Layar domain? Just like the Domain Name System squatters of today, they have every right by being there first. This brings to mind how the whole thing will evolve into a game of technical one-upsmanship, whereby each Augmented Reality domain is subject to the market forces of popularity. Witness the chaotic evolution of social networking, where AOL, Friendster, MySpace, Facebook and now Google+ all usurp market mindshare from one another.
While the Layar squatter has his T-shirt shop today, the question is: who knows about it other than other Layar users? And who knows whether anyone else ever will? This leads me to conclude it is a much bigger deal to the WSJ than it is to anyone who might be sniped at or squatted upon within an Augmented Reality cul-de-sac. Though those stores and corporations may not be able to budge the Layar squatters, they can at least lay claim to the rest of their empire and prevent any future miscreants from owning their virtual space. But as I say, in one-upsmanship there is no real end game, only the NEXT game.
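As an aside, the ‘metadata tagged onto a physical location’ mechanic is easy to picture in data terms. Here’s a minimal sketch of what a geo-tagged point of interest and a “who’s squatting near this storefront” lookup might look like; the field names and coordinates are my own inventions, not Layar’s actual schema or API.

```python
from dataclasses import dataclass
from math import radians, cos, sin, asin, sqrt

# Hypothetical shape of a geo-tagged AR point of interest.
@dataclass
class ARPointOfInterest:
    layer: str        # which AR "layer" (namespace) this POI lives in
    title: str
    lat: float
    lon: float
    metadata: dict

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def pois_near(pois, lat, lon, radius_m=50):
    """Everything 'squatting' within a radius of a physical storefront."""
    return [p for p in pois if distance_m(p.lat, p.lon, lat, lon) <= radius_m]

shops = [
    ARPointOfInterest("tshirt-layer", "Virtual T-Shirt Shop", 51.5113, -0.1440, {"price": "$10"}),
    ARPointOfInterest("couture-layer", "High Street Couturier", 51.5114, -0.1441, {"by_appointment": True}),
]
print(pois_near(shops, 51.5113, -0.1440))  # both occupy the same physical spot
```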
Amazon EC2 and other cloud services are expanding the market for high-performance computing. Without access to a national lab or a supercomputer in your own data center, cloud computing lets businesses spin up temporary clusters at will and stop paying for them as soon as the computing needs are met.
If you own your data center, you might be a little nervous right now, as even a data center can be outsourced on an as-needed basis. Especially if you are doing scientific computing, you should weigh the sunk capital costs of acquiring a cluster and the fixed costs of maintaining it once it is up and running. This story provides one great example of what I think cloud computing could one day become. Rent-a-Center-style data centers and compute clusters seem like an incredible value, especially for a university, but even more so for a business that may not need to keep a real live data center under its control. Examples abound: even online services like Dropbox lease their compute cycles from the likes of Amazon Web Services and the Elastic Compute Cloud (EC2). And if migrating an application into a data center along with the data set to be analyzed can be sped up sufficiently and the cost kept down, who knows what might be possible.
The opportunities are many when you have access to a sufficiently large number of nodes in a compute cluster. With modeling applications especially, you get to run a simulation at finer time slices and higher resolution, possibly gaining a better understanding of how closely your algorithms match the real world. This isn’t just for business but for science as well, and being saddled with a typical data center installation, with its infrastructure, depreciation and staffing costs, seems a lot less attractive if the big data center providers are willing to sell part of their compute cycles at a reasonable rate. The best part is you can shop around, too. In the bad old days of batch computing and the glassed-in data center, before desktops and minicomputers, people were dying to get access to the machine and run their jobs. Now the surplus of computing cycles is so great for the big players that they help subsidize the costs of build-outs and redundancies by letting people bid on the spare compute cycles they have just lying around generating heat. It’s a whole new era of compute-cycle auctions, and I for one am dying to see more stories like this in the future.
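For a flavor of what bidding on spare compute cycles actually looks like, here’s a minimal sketch using Amazon’s boto3 SDK to request EC2 Spot Instances for a temporary cluster. The AMI ID, instance type, key name and bid price are all placeholders, and the real cluster plumbing (MPI, shared storage, job scheduling) would sit on top of this.

```python
import boto3

# Minimal sketch: bid on spare EC2 capacity for a temporary compute cluster.
# AMI ID, instance type, key name, and bid price below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.25",              # maximum hourly bid in USD
    InstanceCount=8,               # eight worker nodes for the cluster
    LaunchSpecification={
        "ImageId": "ami-00000000000000000",  # placeholder image with your code baked in
        "InstanceType": "c5.4xlarge",
        "KeyName": "my-cluster-key",
    },
)

for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])

# When the simulation finishes, cancel the spot requests and terminate the
# instances so the meter stops running.
```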
A new report claims Apple has continued to investigate implementing USB 3.0 in its Mac computers independent of Intel’s plans to eventually support USB 3.0 at the chipset level.
This is interesting to read. I have not paid much attention to USB 3.0, given how slowly it has been adopted by the PC manufacturing world. But in the past Apple has been quicker to adopt some mainstream technologies than its PC manufacturing counterparts. The value add increases as more and more devices also adopt the new interface, namely anything that runs iOS. The surest sign a move is going on will be whether there is USB 3.0 support in iOS 5.x and whether there is hardware support in the next revision of the iPhone.
And now it appears Apple is releasing two iPhones, a minor iPhone 4 update and a new iPhone 5, at roughly the same time. Given reports that the new iPhone 5 has a lot of RAM installed, I’m curious how much of the storage is NAND-based flash memory. Will we see something on the order of 64GB again, or more, when the new phones are released? The upshot: for instances where you tether your device to sync it to the Mac, a USB 3.0-compliant interface would make the file-transfer speed worth the chore of pulling out the cables. However, the all-encompassing, always-on sharing of data between Apple devices may make adopting USB 3.0 seem less necessary if every device can find its partner and sync over the airwaves instead of over iPod connectors.
Still, it would be nice to have a dedicated high-speed cable for the inevitable external hard drive connection needed in these days of smaller laptops like the MacBook Air or the Mac mini. Less space internally means these machines need a supplement to the internal hard drive, one that even Apple’s iCloud cannot fulfill, especially considering the size of the video files coming off each new generation of HD video cameras. I don’t care what Apple says, 250GB of AVCHD files is going to sync very,… very,… slowly. All the more reason to adopt USB 3.0 as soon as possible.
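To put a number on “very,… very,… slowly”: a rough back-of-the-envelope calculation, assuming real-world throughput of roughly 35 MB/s over USB 2.0 and 400 MB/s over USB 3.0 (actual rates vary with the drive and controller).

```python
# Back-of-the-envelope sync times for a 250 GB AVCHD library.
# Throughput figures are rough real-world assumptions, not spec maximums.
LIBRARY_GB = 250
THROUGHPUT_MB_S = {"USB 2.0": 35, "USB 3.0": 400}

for bus, rate in THROUGHPUT_MB_S.items():
    seconds = LIBRARY_GB * 1000 / rate
    print(f"{bus}: about {seconds / 3600:.1f} hours ({seconds / 60:.0f} minutes)")

# USB 2.0: about 2.0 hours (119 minutes)
# USB 3.0: about 0.2 hours (10 minutes)
```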
Invensas, a subsidiary of chip microelectronics company Tessera, has discovered a way of stacking multiple DRAM chips on top of each other. This process, called multi-die face-down packaging, or xFD for short, massively increases memory density, reduces power consumption, and should pave the way for faster and more efficient memory chips.
Who says there’s no such thing as progress? Apart from the DDR memory bus data rates moving from DDR3 to DDR4 soon, what have you read that was significantly different, much less better, than the first-generation DDR DIMMs from years ago? Chip stacking is de rigueur for manufacturers of flash memory, especially in mobile devices with limited real estate on the motherboard. That packaging has flowed back into the computer market very handily and has led to smaller form factors across flash memory devices, whether thumb drives, aftermarket 2.5″ laptop solid state disks, or flash embedded on an mSATA module. Everyone’s benefiting equally.
Whither the stacking of RAM modules? I know there have been some efforts to do this, again for the mobile device market, but any large-scale flow back into the general computing market has been hard to see. I’m hoping this Invensas announcement eventually turns into a real shipping product and not an attempt to stake a claim on intellectual property that will take the form of lawsuits against current memory designers and manufacturers. Stacking is the way to go. Even if it can never be used in, say, a CPU, I would think the clock speed and power-savings requirements on RAM modules might be sufficient to allow some stacking to occur. And if memory access speeds improve at the same time, so much the better.
If you want more speed, then you will have to look to PCI-Express for the answer. Austrian-based Angelbird has opened its online storefront with its Wings add-in card and SSDs.
More than a year after first being announced, Angelbird has designed and manufactured a new PCIe flash card, the design of which is fully expandable over time depending on your budget. Fusion-io has a few ‘expandable’ cards in its inventory too, but the price class of Fusion-io is much higher than the consumer-level Angelbird product. So if you cannot afford to build out a 1TB flash-based PCIe card, do not worry. Buy what you can and outfit it later as your budget allows. Now that’s something any gamer fanboy or desktop enthusiast can get behind.
Angelbird does warn in advance that the power demands of typical 2.5″ SATA flash modules are higher than what the PCIe bus can typically provide, and recommends using its own memory modules to add onto the base-level PCIe card. Until I read those recommendations I had forgotten some of the limitations and workarounds graphics card manufacturers typically use. These have become so routine that there are now two or three extra power taps provided even in typical desktop machines, all to accommodate the extra power required by today’s display adapters. It makes me wonder if Angelbird could do a revision of the base-level PCIe card with a little 4-pin power input or something similar. It doesn’t need another 150 watts; it’s going to be closer to 20 watts for this type of device, I think. I wish Angelbird well and I hope sales start strong so they can sell out their first production run.
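For what it’s worth, a rough power budget backs up that 20-watt guess. The slot limits below come from the PCIe spec (roughly 25 W for smaller cards, 75 W for a full x16 slot); the per-module draw is my own assumption for a 2.5″ SATA SSD under sustained writes.

```python
# Rough PCIe power budget for a multi-module flash card.
SLOT_LIMIT_W = {"x1/x4 card": 25, "x16 card": 75}   # PCIe spec slot limits
MODULE_ACTIVE_W = 4.0    # assumed per-SSD draw under sustained writes
CONTROLLER_W = 5.0       # assumed RAID/bridge controller overhead
modules = 4

total = modules * MODULE_ACTIVE_W + CONTROLLER_W
print(f"Estimated draw with {modules} modules: {total:.0f} W")
for slot, limit in SLOT_LIMIT_W.items():
    print(f"  {slot}: limit {limit} W -> headroom {limit - total:.0f} W")

# A small 4-pin auxiliary input (good for tens of watts) would cover any
# shortfall on a 25 W slot without needing a 150 W GPU-style connector.
```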
On the surface, RSS seems great for those of us who want to keep up on everything happening on the Internet—and I mean everything. As for me, I use RSS regularly at five minute intervals for pretty much the entire time I’m awake. I use RSS for both work and personal reasons—it helps me keep tabs on practically every tech site that matters in order to ensure that I’m never missing anything, plus it lets me make sure I’m on top of my friends’ and families’ lives via their blogs. If not for RSS, I could never keep up on anything. Or would I?
There seems to be an RSS backlash going on, starting this past spring when a notable article came out pointing out how low the adoption rate has been. Web 2.0 seemed to be the era of more tailored, easily discovered reading content, sharing of said reading material, commenting on it and starting up conversations. Now that vast social phenomenon has been usurped by the gated communities of social networking websites. You’re a member of this, that or the other up-and-coming website whose features and interface blow the competition out of the water. Friendster, MySpace, Facebook, all come and go. But underneath it all there’s the mighty RSS feed, sitting out there waiting to be subscribed to: a lowly XML document with updated listings generated each time a new article gets published through a website’s content management system. There’s no obligation implied whatsoever, only the promise, like a Digital Video Recorder (or TiVo if you prefer), that there’s something new; you know where to find it to watch it later, and if you don’t watch it, you erase it.
In Jacqui Cheng’s article she equates RSS to email, an inbox needing to be cleared. But I ask Ms. Cheng and others arguing along the same lines: do you feel obligated to watch every program captured on your DVR? It’s not the same, is it? It’s different. I don’t read articles or headlines like email messages. I’ve gotten very accustomed to the ebb and flow of the blog-spammy, white-paper-regurgitating ‘tech news’ websites. When 40 articles get dumped wholesale into their RSS feed, I know they completely misunderstand the value of that feed, so I treat them with the same level of misunderstanding and wipe out whole swaths of their clockwork dumps. These outfits, C|net, NYTimes, Gawker, Kotaku, etc., will literally hold onto their content and dump it like a huge water tank into the RSS feed. Why not just do it piecemeal? As things are edited, researched, fact-checked and released, put them into the RSS feed. My reader will catch each item when it appears, and who knows, I might actually read it, as opposed to having to sift a list of 20 articles that appeared magically at 9:30 AM EST.
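That “my reader will catch it when it appears” workflow is trivial to sketch. Here’s a minimal polling loop using the third-party feedparser library and a hypothetical feed URL: it remembers the GUIDs it has already seen, so only genuinely new items surface, whether they trickle in one at a time or land in a 9:30 AM dump.

```python
import time
import feedparser  # third-party: pip install feedparser

FEED_URL = "https://example.com/feed.xml"   # placeholder feed address
POLL_SECONDS = 300                          # the five-minute habit

def poll_forever():
    seen_ids = set()
    while True:
        feed = feedparser.parse(FEED_URL)
        for entry in feed.entries:
            # fall back to the link if the feed omits a <guid>
            entry_id = entry.get("id", entry.get("link"))
            if entry_id and entry_id not in seen_ids:
                seen_ids.add(entry_id)
                print(entry.get("published", "no date"), "-", entry.get("title", "untitled"))
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    poll_forever()
```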
The problem, you see, is not in RSS; it’s in the feeds and how the publishers abuse and disregard their power. Holding stuff back to dump it all at once is the Old World publishing model, a form of an ‘edition’. Well, the printing press doesn’t need to be kept busy running a ‘batch’ of articles until the next batch comes through, and that’s what the RSS feed publishers don’t understand. Piecemeal is far better suited to the New World of publishing: you don’t need to keep the press operators busy doing a whole section of a paper anymore, so don’t hold your articles back in order to dump a huge quantity all at once. This is a River, a River of News, and I for one would prefer a constant trickle to a four-times-a-day torrent. This is something the Old World Web 2.0 publishers ‘STILL’ do not understand. One can only hope that by the next revolution (say, Web 3.0) the publishers finally get it and let the River of News flow once and for all time.
Also read this response to the original Ars Technica article: Sane RSS usage – Marco.org (September 4, 2011)
By bypassing the SATA bottleneck, OCZ’s RevoDrive Hybrid promises transfer speeds up to 910 MB/s and up to 120,000 4K random write IOPS. The SSD aspect reportedly uses a SandForce SF-2281 controller and the hard drive platters spin at 5,400rpm. On the whole, the hybrid drive makes good use of the company’s proprietary Virtualized Controller Architecture.
Good news on the consumer electronics front: OCZ continues to innovate in the desktop aftermarket, introducing a new PCIe flash product that marries a nice 1TB hard drive to a 100GB flash-based SSD, the best of both worlds in one neat little package. Previously you might buy these two devices separately, one average-sized flash drive and one spacious hard drive, configure the flash drive as your system boot drive, and then, using some kind of alias/shortcut trick, put your user folder on the hard drive to hold videos, pictures, etc. This has caused some very conservative types to sit out and wait for even bigger flash drives, hoping to store everything on one logical volume. But what they really want is a hybrid of big storage and fast speed, and that, according to the press release, is what the OCZ hybrid drive delivers. With a SandForce drive controller and two drives, the whole architecture is hidden away along with the caching algorithm that moves files between the flash and hard drive storage areas. End users see but one big hard drive (albeit installed in one of their PCIe card slots), yet experience faster boot times and faster application loading. I’m seriously considering adding one of these devices to a home computer we have and migrating the boot drive and user home directories over to it, using the current hard drives as the Windows backup device. I think that would be a pretty robust setup and could accommodate a lot of future growth and expansion.
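OCZ’s actual caching logic inside the Virtualized Controller Architecture is proprietary, but the general shape of a hybrid tier is easy to sketch: keep everything on the big, slow disk and promote the hottest blocks into a limited amount of flash. A toy model, not OCZ’s algorithm:

```python
from collections import OrderedDict

class HybridTier:
    """Toy model of a hybrid drive: a small, fast flash cache in front of a
    large hard disk. Promotion here is simple LRU; real controllers (OCZ's
    VCA included) use far more elaborate, proprietary heuristics."""

    def __init__(self, flash_blocks: int):
        self.flash = OrderedDict()   # block_id -> data, ordered by recency
        self.capacity = flash_blocks

    def read(self, block_id, read_from_disk):
        if block_id in self.flash:               # hot block: served from flash
            self.flash.move_to_end(block_id)
            return self.flash[block_id]
        data = read_from_disk(block_id)          # cold block: slow path to the platters
        self.flash[block_id] = data              # promote it for next time
        if len(self.flash) > self.capacity:      # evict the least recently used block
            self.flash.popitem(last=False)
        return data

# Usage: cache = HybridTier(flash_blocks=100_000)
#        cache.read(42, read_from_disk=lambda block: hdd[block])
```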