Blog

  • I’ve been trying to find this movie for a while; now I’ve got the link.

    High Tech History

    This past June, fellow High Tech History writer Gil Press wrote an entry in recognition of International Business Machines’ centennial. In the interim, I came across a documentary created by noted filmmaker Errol Morris for IBM that draws on the experiences of, among others, the corporation’s former technicians and executives to tell a thirty-minute story of some of IBM’s more notable achievements in computing over the last one hundred years.

    In this instance, Morris’ collaboration with noted composer Philip Glass resulted in an expertly produced, sentimental (occasionally overly so), and informative oral history. Morris and Glass previously worked together on the 2003 Oscar-winning documentary, The Fog of War: Eleven Lessons from the Life of Robert S. McNamara. And this was not the first time that Morris had been commissioned to work for IBM. In 1999 he filmed a short documentary intended to screen at an in-house conference for IBM employees. The conference never took place…

    View original post 535 more words

  • Erik Duval's Weblog

    Wolfgang Greller made some interesting comments on his blog about an interview with me on Learning Analytics.

    I’m a bit puzzled by his remark that

    Duval being a computer scientist strongly believes in the power of data and the revelations it holds.

    Actually, I am not sure what would be the alternative to ‘believing in data’ – not believing in data? Isn’t confronting theories with data one of the core activities of any science?

    For me, there lies one of the most important promises of learning analytics: as a research domain, technology enhanced learning is too much a field of opinions – maybe learning analytics can help to turn the field into more of a collection of experimentally validated theories? Into more of a science?

    I’m not sure I understand the problem that Wolfgang seems to have with data. Of course, a real issue is selecting what kind of data…

    View original post 167 more words

  • Pioneering Campus CIOs Say Necessity Drives Shift to Cloud — Campus Technology

    Cloud computing stack showing infrastructure, ...
    Image via Wikipedia
    The Seal of the United States Federal Bureau o...
    Image via Wikipedia

    Pioneering Campus CIOs Say Necessity Drives Shift to Cloud By David Raths 10/25/11
    A recent survey of campus IT leaders suggested that most colleges and universities are still gun shy about cloud computing. Yet if attendance at conference meetings is any gauge, there is widespread curiosity about the experience of early adopters.

    via Pioneering Campus CIOs Say Necessity Drives Shift to Cloud — Campus Technology.

    The number of U.S. government requests for data on Google users for use in criminal investigations rose 29 percent in the last six months, according to data released by the search giant Monday. By Ryan Singel, October 25, 2011, 11:07 am

    via U.S. Requests for Google User Data Spike 29 Percent in Six Months | Threat Level | Wired.com.

    I think the old adage about Greeks bearing gifts applies here, or should I say geeks bearing gifts?! Cloud computing for desktop productivity apps, as it is commonly practiced in Higher Ed, is a double-edged sword. What was once seen as a major outsourcing/cost-savings move, dumping student email off to willing companies like Google, is now seen as the ‘wave of the future’, where desktops give way to mobile devices and Web apps slowly evolve into just apps, independent of the websites where they actually run. However, beware, dear reader: as those contracts you sign and the myriad terms of your Service Level Agreement are hammered out, rules can change. And by that I mean rules regarding simple things like National Security Letters sent by the FBI to Google Inc. for the email of an individual whose account lives in the student email service you outsourced. I ask anyone who is reading: under the terms of its contracts with a Higher Ed I.T. unit, does Google have to notify anyone that it is handing six months or more of Gmail messages to the FBI to read at their leisure?

    I’m reminded somewhat of the bad old days under Napster, when Higher Ed eventually started to receive mass quantities of DMCA (Digital Millennium Copyright Act) notices about infringing music files being shared over their data networks. In cases like that, each school could set its own policy. Some were very neutral, asking for proof of the infringement, and in some cases ignoring the request because the point of origin was not the copyright holder but a third-party clearance group spamming every university on behalf of the Recording Industry Association of America (RIAA). The beauty of democracy and the law for DMCA requests was that each institution could decide how best to pursue the matter, at its own discretion. Not so with government requests for electronic data, oh no. For a National Security Letter, the university doesn’t even have to notify the individual that their data is being shared with the FBI. Nor can they tell anyone; it’s a complete, air-tight gag order placed on the service provider, whoever they may be (Library, Student Records, or University I.T.). Whither the outsourced University I.T. department, then?

    In the haste to save money, the SLAs universities across the U.S. have signed with big-name providers like Google have made everyone subject to the rules governing that provider. Google is not an institution of Higher Ed. It is a for-profit U.S. corporation subject to all the laws governing any company chartered in the U.S. And unlike Higher Ed, it does not have the interest, much less the luxury, of responding to National Security Letters in its own way, or at its own discretion. In fact, it doesn’t have to tell anyone what its actual policy even is; that’s private, for its top-level officers and its legal department alone to know. So in the mad rush to create the omnipresent future of ‘Cloud Computing’, one must ask: are we really just making surveillance by the government easier? Do we really understand what we give up when we decide to adopt applications and data hosted in the Cloud? Sure, yes, privacy, as then-CEO of Sun Microsystems Scott McNealy once boasted, is ‘You have zero privacy anyway, get over it’. But do we really understand the full implications of what this means?

    I dare say it’s the folks focusing on the bottom line who are signing away our rights without us ever getting a say. And while I understand that even non-profit Higher Ed is run like a business, they are the last folks who should be participating in the construction of the Data Cloud Surveillance State we find ourselves in now. If we cannot choose for ourselves, what then do we have left for ourselves? Our thoughts? Our feelings? Sorry, no, those too are now housed in the cloud by the likes of Facebook Timeline lifestreaming. Now everyone knows everything, and you have surrendered it all just for the sake of catching up with some old college and high school friends. That’s too high a price to pay, I think. So, to the degree possible, I am unwilling to just let this ‘freedom’ ebb away through the process of adopting this new app or that new platform. The new New Thing may be cool, but there are a whole lotta strings attached.

  • AnandTech – ARM & Cadence Tape Out 20nm Cortex A15 Test Chip

    Wordmark of Cadence Design Systems
    Image via Wikipedia

    The test chip will be fabbed at TSMC on its next-generation 20nm process, a full node reduction (~50% transistor scaling) over its 28nm process. With the first 28nm ARM based products due out from TSMC in 2012, this 20nm tape-out announcement is an important milestone, but we're still around two years away from productization.

    via AnandTech – ARM & Cadence Tape Out 20nm Cortex A15 Test Chip.
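
    As a rough back-of-the-envelope check on that “~50% transistor scaling” figure, here is a quick sketch of the ideal geometry behind a full node shrink (my own illustration in Python, not AnandTech's numbers; real processes never hit the ideal exactly):

        # Ideal scaling for a full node shrink from 28 nm to 20 nm.
        old_node_nm = 28.0
        new_node_nm = 20.0

        linear_ratio = new_node_nm / old_node_nm   # ~0.71x in each dimension
        area_ratio = linear_ratio ** 2             # ~0.51x area per transistor

        print(f"Ideal area per transistor: {area_ratio:.2f}x of the 28 nm area")
        print(f"Ideal density gain: {1 / area_ratio:.2f}x transistors per mm^2")

    In other words, an ideal full-node shrink roughly halves the area each transistor occupies, which is where the “~50%” shorthand comes from.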

    Data Centre
    Image by Route79 via Flickr (Now that's scary isn't it! Boo!)

    Happy Halloween! And like most years, there are some tricks up ARM’s sleeve, announced this past week along with some partnerships that should make things trickier for the engineers trying to equip ever more energy-efficient and dense data centers the world over.

    It’s been announced: the Cortex-A15 is coming to market sometime in the future, albeit a ways off yet. And it’s going to use a really narrow design rule to ensure it’s as low-power as it possibly can be. I know the manufacturers of the massively parallel compute-cloud-in-a-box will be seeking out this chip as soon as samples can arrive. The 64-bit ARM cores that follow the Cortex-A15 are the real potential jewel in the crown for Calxeda, which is attempting to balance low power and 64-bit performance in the same design.

    I can’t wait to see the first benchmarks of these chips, apart from the benchmarks of the first shipping product Calxeda can get out with a 64-bit ARM core. Also note that just this week Hewlett-Packard signed on to sell designs by Calxeda in forthcoming servers targeted at energy-efficient data center build-outs. So more news to come regarding that partnership, and you can read it right here @ Carpetbomberz.com.

  • MIT boffin: Salted disks hold SIX TIMES more data • The Register

    Close-up of a hard disk head resting on a disk...
    Image via Wikipedia

    This method shows, Yang says, that “bits can be patterned more densely together by reducing the number of processing steps”. The HDD industry will be fascinated to understand how BPM drives can be made at a perhaps lower-than-anticipated cost.

    via MIT boffin: Salted disks hold SIX TIMES more data • The Register.

    Moore’s Law applies to semiconductors built on silicon wafers. And to a lesser extent it has had some application to hard disk drive storage as well. When IBM created its GMR (Giant Magneto-Resistive) read/write head technology and was able to develop it into a shipping product, a real storage arms race began. Densities increased, prices dropped, and before you knew it hard drives went from 1 GByte to 10 GBytes practically overnight. Soon a 30 GByte drive was the default boot and data drive for every shipping PC, when just a few years before a 700 MByte drive was the norm. That was a greater-than-10X improvement with the adoption of a new technology.
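
    Just to put numbers on that jump, here is a quick arithmetic sketch using the capacities mentioned above (my own illustration in Python, treating them as typical boot-drive sizes):

        # Capacity multiples for the drive sizes mentioned above.
        pre_gmr_mb = 700    # typical boot drive before the GMR era, in MBytes
        post_gmr_gb = 30    # typical boot/data drive a few years later, in GBytes

        growth = (post_gmr_gb * 1000) / pre_gmr_mb
        print(f"700 MB -> 30 GB is roughly a {growth:.0f}x jump in capacity")
        # prints ~43x, comfortably past the "greater than 10X" mark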

    I remember a lot of those touted technologies were added on and tacked on at the same time. PRML (Partial Response, Maximum Likelihood) and Perpendicular Magnetic Recording (PMR) both helped keep the ball rolling in terms of storage density. IBM even did some pretty advanced work sandwiching thin ruthenium spacer layers between magnetic layers to help create more stable magnetic recording media for the newer, higher-density drives.

    However, each new incremental advance has now run its course, and the advances in storage technology are slowing down again. But there’s still one shining hope: Bit-Patterned Media (BPM). And in all the speculation about which technology is going to keep the storage density ball rolling, this new announcement is sure to play its part. A competing technique that uses lasers to heat the disk surface before writing data (heat-assisted magnetic recording) is also being researched and discussed, but it is likely to force a lot of storage vendors to agree to transition to that technology simultaneously. BPM, on the other hand, isn’t so different and revolutionary that it must be rolled out en masse by every drive vendor at once to ensure everyone is compatible. And better yet, BPM may be a much lower-cost and more immediate way to increase storage densities without incurring big equipment and manufacturing upgrade costs.

    So I’m thinking we’ll be seeing BPM much sooner, and we’ll continue to enjoy advances in drive density for a little while longer.

  • Intel's Plans for New SSDs in 2012 Detailed

    Logo of Intel, Jul 1968 - Dec 2005
    Image via Wikipedia

    Through first quarter of 2012, Intel will be releasing new SSDs: Intel SSD 520 “Cherryville” Series replacement for the Intel SSD 510 Series, Intel SSD 710 “Lyndonville” Series Enterprise HET-MLC SSD replacement for X25-E series, and Intel SSD 720 “Ramsdale” Series PCIe based SSD. In addition, you will be seeing two additional mSATA SSDs codenamed “Hawley Creek” by the end of the fourth quarter 2011.

    via Intel's Plans for New SSDs in 2012 Detailed.

    That’s right, folks: Intel is jumping on the high-performance PCIe SSD bandwagon with the Intel SSD 720 in the first quarter of 2012. I don’t know what price they will charge, but given quotes and pre-released specs it’s going to compete against products from competitors like RamSan, Fusion-io, and the top-level OCZ PCIe product, the R4. My best guess, based on pricing for those products, is that it will be in the roughly $10,000+ category, with an x8 PCIe interface and a full complement of Flash memory (usually over 1 TB on this class of PCIe card).

    Knowing that Intel’s got some big engineering resources behind their SSD designs, I’m curious to see how close they can come to the performance statistics quoted in this table here:

    http://www.tomshardware.com/gallery/intel-ssd-leak,0101-296920-0-2-3-1-jpg-.html

    2200 MBytes/sec of read throughput and 1100 MBytes/sec of write throughput. Those are some pretty hefty numbers compared to currently shipping products in the upper prosumer and lower enterprise-class price category. Hopefully AnandTech will get a shipping or even pre-release version before the end of the year and give it a good torture test. Following Anand Lal Shimpi on his Twitter feed, I’m seeing all kinds of tweets about how a lot of pre-release products from manufacturers of SSDs and PCIe SSDs fail during the benchmarks. That doesn’t bode well for the quality control departments at the manufacturers assembling and testing these products. Especially considering the price premium of these items, it would be much more reassuring if the testing were more rigorous and conservative.
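
    For a sense of scale, here is a rough sanity check of those leaked numbers against what an x8 PCI Express link can theoretically carry. This is my own sketch, assuming a PCIe 2.0 x8 card (the leak doesn't confirm the link width or generation):

        # Rough PCIe 2.0 headroom check for the leaked SSD 720 numbers.
        lanes = 8
        gt_per_sec_per_lane = 5.0      # PCIe 2.0 raw signaling rate (GT/s)
        encoding_efficiency = 8 / 10   # 8b/10b line-coding overhead

        # Usable bandwidth per direction, in MBytes/sec.
        mb_per_sec_per_lane = gt_per_sec_per_lane * encoding_efficiency * 1000 / 8
        link_mb_per_sec = lanes * mb_per_sec_per_lane    # ~4000 MB/s

        for label, mb_s in (("read", 2200), ("write", 1100)):
            print(f"{label}: {mb_s} MB/s uses {mb_s / link_mb_per_sec:.0%} of the x8 link")

    So the quoted read figure would use a bit over half of an x8 PCIe 2.0 link's one-way bandwidth: aggressive, but not physically implausible.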

  • AnandTech – Qualcomm's New Snapdragon S4: MSM8960 & Krait Architecture Explored

    Qualcomm remains the only active player in the smartphone/tablet space that uses its architecture license to put out custom designs. The benefit to a custom design is typically better power and performance characteristics compared to the more easily synthesizable designs you get directly from ARM. The downside is development time and costs go up tremendously.

    via AnandTech – Qualcomm's New Snapdragon S4: MSM8960 & Krait Architecture Explored.

    The snapdragon cpu
    From the Qualcomm Website: Snapdragon

    I’m very curious to see how the different ARM-based processors fare against one another in each successive generation. Especially the move to the Cortex-A15 and the 64-bit cores beyond it, none of which will see a quick implementation in a handheld mobile device. The Cortex-A15 is a long way off yet, but it appears that, in spite of the next big thing in ARM-designed cores, there’s a ton of incremental improvement and evolutionary progress being made on current-generation ARM cores. The Cortex-A8 and Cortex-A9 have a lot of life left in them for the foreseeable future, including die shrinks that allow either faster clock speeds, or constant clock speeds with lower power drain and a lower Thermal Design Power (TDP).
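
    To put the “faster clocks or lower power” trade-off in concrete terms, here is a toy sketch using the standard dynamic-power relation P ≈ C·V²·f. The scaling factors are my own illustrative assumptions, not ARM’s figures:

        # Toy illustration of the dynamic-power relation P ~ C * V^2 * f.
        # A die shrink lowers switched capacitance (C) and usually lets the
        # voltage (V) drop a little; designers can spend that headroom on
        # clock speed (f) or on power savings.

        def dynamic_power(c, v, f):
            return c * v * v * f

        baseline = dynamic_power(c=1.00, v=1.00, f=1.00)

        # Option 1: keep the clock constant after a shrink (smaller C, lower V).
        same_clock = dynamic_power(c=0.70, v=0.90, f=1.00)

        # Option 2: spend the headroom on a higher clock instead.
        faster_clock = dynamic_power(c=0.70, v=0.90, f=1.30)

        print(f"same clock:   {same_clock / baseline:.0%} of baseline power")
        print(f"faster clock: {faster_clock / baseline:.0%} of baseline power")

    With these made-up numbers, the shrunk core either runs at the same clock for roughly 57% of the power, or clocks 30% higher and still comes in under the original power budget.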

    Apple is also moving steadily towards the die shrink in order to cement the gains made in its A5 chip design. Taiwan Semiconductor Manufacturing Company (TSMC) is the biggest partner in this direction and is attempting to run the next iteration of Apple mobile processors on its state-of-the-art 20 nanometer design-rule process.

  • Rise of the Multi-Core Mesh Munchkins: Adapteva Announces New Epiphany Processor – HotHardware

    Epiphany Processor from Adapteva
    Epiphany Block Diagram

    Many-core processors are apparently the new black for 2011. Intel continues to work on both its single chip cloud computer and Knights Corner, Tilera made headlines earlier this year, and now a new company, Adapteva, has announced its own entry into the field.

    via Rise of the Multi-Core Mesh Munchkins: Adapteva Announces New Epiphany Processor – HotHardware.

    A competitor to Tilera and Intel’s MIC has entered the field as a mobile co-processor. Given the volatile nature of chip architectures in the mobile market, this is going to be a hard sell for some device designers, I think. I say this because each new generation of mobile CPU gets more and more integrated features, as each new die shrink allows more embedded functions. Graphics processors are now being embedded wholesale into every smartphone CPU. Other features like memory controllers and baseband processors will no doubt soon be added to the list as well. If Adapteva wants any traction at all in the mobile market, they will need to develop the Epiphany further into a synthesizable core that can be added to an existing CPU (most likely a design from ARM). Otherwise, trying to stick with being a separate auxiliary chip is going to hamper and severely limit the potential applications of their product.

    Witness the integration of the graphics processing unit. Not long ago it was a way to differentiate a phone, but it had to be designed onto the motherboard along with whatever power requirements it brought. In a very short time after GPUs were added to cell phones, they were integrated into the CPU chip sandwich to help keep manufacturing costs and the power budget in check. If the Epiphany had been introduced around the golden age of discrete chips on cell phone motherboards, it would have made a lot more sense. But now you need to be embedded, integrated, and 100% ARM-compatible, with a fully baked developer toolkit. Otherwise, it’s all uphill from the product introduction forward. If there’s an application for the Epiphany co-processor, I hope they concentrate more on the tools to fully use the device and develop a niche right out of the gate, rather than attempt to get some big-name but small-scale wins on individual devices in the Android market. Those seem like the most likely candidates for shipping products right now.

  • The 20 Most Notable Engineers of All Time | High Tech History

    John Ambrose Fleming
    Image via Wikipedia

    No. 7.  John Ambrose Fleming: Sir John Ambrose Fleming is the inventor of the first vacuum tube. His engineering feat is known as the precursor to electronics — even though the U.S. Supreme Court invalidated his patent.

    via The 20 Most Notable Engineers of All Time | High Tech History.

    Until I read this list, I didn’t know who invented the vacuum tube. I did, however, understand its incredible importance, especially as it applied to the early computer industry. After that the transistor took over. But oh, that early time of designing circuits and working on logic! Without any of those historical antecedents we would not have the computers of today. Switching voltages from high to low was the only way to mimic the registers in an adding machine, spinning and counting off one digit at a time. Wiring those tubes up into circuits and creating logic with them was the next big leap in intuition.

    Without the vacuum tube there would be no electrical engineering, no electronics industry, and no devices like the wireless telegraph, wireless radio, etc. Everything hinged on this invention. So cheers to John Ambrose Fleming and the vacuum tube. He was able to find a useful purpose for what would otherwise have remained a laboratory curiosity, a magic toy for manipulating cathode rays. Somehow Fleming was able to see an application of this technology to a useful end, and the rest, as they say, is history.

  • Birck Nanotechnology Center – Ferroelectric RAM

    Schematic drawing of original designs of DRAM ...
    Image via Wikipedia

    The FeTRAMs are similar to state-of-the-art ferroelectric random access memories, FeRAMs, which are in commercial use but represent a relatively small part of the overall semiconductor market. Both use ferroelectric material to store information in a nonvolatile fashion, but unlike FeRAMS, the new technology allows for nondestructive readout, meaning information can be read without losing it.

    via Discovery Park – Birck Nanotechnology Center – News.

    I’m always pleasantly surprised to read that work is still being done on alternative materials for Random Access Memory (RAM). I was closely following developments in the ferroelectric RAM category by folks like Samsung and HP. Very few of those efforts promised enough return on investment to be developed into products, and some notable efforts by big manufacturers were abandoned altogether.

    If this research effort can be licensed to a big chip manufacturer and not turned into a form of patent-trolling ammunition, I would feel the effort was not wasted. Too often recently, these patented technologies are not used as a means of advancing the art of computer technology; instead they become a portfolio for a litigator seeking rent on the patents.

    Due to the frequency of abandoned projects in the alternative DRAM technology category, I’m hoping the compatibility of this chip’s manufacturing process with existing chip-making technology will be a big step forward. A paradigm-shifting memory technology, whether ferroelectric or magnetic RAM, might just push us to the next big mountaintop of power conservation, performance, and capability, the kind of run the CPU enjoyed from 1969 to roughly 2005, when chip speeds began to plateau.