Categories
blogroll science & technology technology

What went wrong with the Hubble Space Telescope and what managers can learn from it – leadership, collaboration – IT Services – Techworld

Astronauts Steven L. Smith, and John M. Grunsfeld, appear as small figures in this wide scene photographed during extravehicular activity (EVA). On this space walk they are replacing gyroscopes, contained in rate sensor units (RSU), inside the Hubble Space Telescope. A wide expanse of waters, partially covered by clouds, provides the backdrop for the photograph. (Photo credit: Wikipedia)

“There's a bunch of research I've come across in this work, where people say that the social context is a 78-80 per cent determinant of performance; individual abilities are 10 per cent. So why do we make this mistake? Because we spend all of these years in higher education being trained that it's about individual abilities.”

via What went wrong with the Hubble Space Telescope and what managers can learn from it – leadership, collaboration – IT Services – Techworld.

Former NASA astrophysics division director Charlie Pellerin is now a consultant on how to prevent failures on teams charged with carrying out large-scale, seemingly impossible projects. He had to face two biggies while at NASA: the Challenger explosion and, subsequent to that and more directly, the Hubble Space Telescope mirror failure. In both cases he tried to really look at the source of the failures rather than just let the investigative committees do all the work. And what he's decided is that culture is a bigger part of the chain of failure than technical ability.

Which leads me to ask: how often does this happen in other circumstances as well? I've seen the PBS NOVA program on the 747 runway collision at Tenerife back in 1977. At that time the KLM airliner more or less started taking off before the Pan American 747 had taxied off the runway. In spite of all the protocols and controls in place to manage planes on the ground, the captain of the KLM 747 made the decision to take off not once, but TWICE! The first time it happened his co-pilot corrected him, saying they didn't have clearance from the tower. The second time, the co-pilot and flight engineer both sat back and let the whole thing unfold, to their own detriment. No one survived in that KLM 747 after it crashed into the Pan American 747 and bounced down the runway. In the article I link to above there's an anecdote Charlie Pellerin relates about a number of Korean Air crashes that occurred in the 1990s. Similarly, it was the cockpit 'culture' that led to the bad decisions being made, resulting in the loss of the airplane and the passengers on board.

Some people like to call it 'top-down' management, where everyone defers to the person recognized as being in charge. Worse yet, sometimes the person in charge doesn't even realize this. They carry on with their decision-making process, never once suspecting that people are restraining themselves, holding back questions. The danger is that once this pattern is in place, any mistake by the person in charge gets amplified over time. In Charlie Pellerin's judgement, modern airliners are designed to be run by a team who SHARE the responsibilities of conducting the airplane. And while the planes themselves have many safety systems in place to make things run smoothly, the assumption the plane designers always make is that there is a TEAM. But when you have a hierarchy of people in charge and people who defer to them, the TEAM as such doesn't exist, and you have now broken the primary design principle of the aircraft's designers. No TEAM, no plane, and there are many examples that show this, not just in the airline accident investigations.

Polishing the Hubble Mirror at Perkin-Elmer

In the case of the Hubble Telescope mirror, things broke down when a simple calibration step was rushed. The sub-contractor in charge of measuring the point of focus on the mirror followed the procedure as given to him, but skipped a step that threw the whole calibration off. The step he skipped was simply to apply spray paint onto two end caps that would then be placed onto a very delicately measured and finely cut metal rod. The black spray paint was meant to act as a non-reflective surface, exposing only a very small bit of the rod end to a laser that would measure the distance to the focus point. What happened instead: because the whole telescope program was running over budget and constantly delayed, all the sub-contractors were pressured to 'hurry up'. When the guy responsible for this calibration step couldn't find matte black spray paint to put on the end caps, he improvised (like a true engineer). He got black electrical tape, wrapped it around the end of the cap, cut a hole with the tip of an X-Acto knife and began his calibration step.

But that one detail was what put the whole Hubble Space Telescope in jeopardy. In the rush to get this step done, the X-Acto knife nicked a bit of the metal end cap and a small, shiny metal burr was created. Almost too small to see, the burr poked out into the hole cut into the black electrical tape for the laser to shine through. When the engineer ran the calibration, the small burr was reflecting light back to the sensors measuring the distance. The burr sat only about 1mm off the polished surface of the calibration rod, yet that reading was registered as 'in spec', and the full distance to the focus point ended up off by that amount. Considering how accurate a mirror has to be for telescope work, and how long the Hubble mirror spent being ground and polished, an error of 1mm at that stage is like being off by a mile in the everyday world. And this was the source of the 'blur' in the Hubble Telescope when it was first turned on after being deployed by the Space Shuttle. The culture of 'hurry up and get it done, we're behind schedule' jeopardized a billion-dollar space telescope mission that was already over budget and way behind schedule.

All these cautionary tales reiterate the over-arching theme: big failures are not technical failures. These failures are cultural, and everyone has the capacity to do better every chance they get. I encourage anyone and everyone reading this article to read the original interview with Charlie Pellerin, as he's got a lot to say on this subject and some fixes that can be applied to avoid the fire next time. Because statistically speaking, there will always be a next time.

KLM's 747-406 PH-BFW - nose
KLM’s 747-406 PH-BFW – nose (Photo credit: caribb)
Categories
computers mobile science & technology technology

ARM Pitches Tri-gate Transistors for 20nm and Beyond

Image via Wikipedia

. . . 20 nm may represent an inflection point in which it will be necessary to transition from a metal-oxide semiconductor field-effect transistor (MOSFET) to Fin-Shaped Field Effect Transistors (FinFET), or 3D transistors, which Intel refers to as tri-gate designs that are set to debut with the company's 22 nm Ivy Bridge product generation.

via ARM Pitches Tri-gate Transistors for 20nm and Beyond.

Three-dimensional transistors are in the news again. Previously Intel announced they were adopting the new design for their next-generation, next-smaller design rule in the Ivy Bridge generation of Intel CPUs. Now ARM is also doing work to integrate similar technology into their ARM CPU cores. No doubt the need to lower Thermal Design Power (TDP) while maintaining clock speed is driving this move to refine and narrow the design rules for the ARM architecture. Knowing Intel is still the top research and development outfit for silicon semiconductors would give pause to anyone directly competing with them, but ARM is king of the low-power semiconductor, and keeping pace with Intel's design rules is an absolute necessity.

I don't know how quickly ARM is going to be able to get a licensee to jump on board and adopt the new design. Hopefully a large operation like Samsung can take this on and get the design into its development and production lines at a chip fabrication facility as soon as possible. Likewise, other contract manufacturers like Taiwan Semiconductor Manufacturing Company (TSMC) should try to get it into their facilities quickly too. That way the cell-phone and tablet markets can benefit as well, since they use a lot of ARM-licensed CPU cores and similar intellectual property in their shipping products. My interest is not so much in the competition between Intel and ARM for low-power computing, but more in the overall performance of any single ARM design once it's been in production for a while and optimized, the way Apple designs its custom CPUs using ARM-licensed cores. The single most outstanding achievement of Apple's design and production of the iPad is the 10-hour battery life, which to date has not been beaten, even by other manufacturers whose products also license ARM intellectual property. So if the ARM design is good and can be validated and prototyped with useful yields quickly, Apple will no doubt be the first to benefit, and by way of Apple so will the consumer (hopefully).

Schematic view (L) and SEM view (R) of Intel t...
Image via Wikipedia
Categories
media science & technology technology

MIT boffin: Salted disks hold SIX TIMES more data • The Register

Close-up of a hard disk head resting on a disk...
Image via Wikipedia

This method shows, Yang says, that “bits can be patterned more densely together by reducing the number of processing steps”. The HDD industry will be fascinated to understand how BPM drives can be made at a perhaps lower-than-anticipated cost.

via MIT boffin: Salted disks hold SIX TIMES more data • The Register.

Moore's Law applies to semiconductors built on silicon wafers, and to a lesser extent it has had some application to hard disk drive storage as well. When IBM created its GMR (Giant Magneto-Resistive) read/write head technology and was able to develop it into a shipping product, a real storage arms race began. Densities increased, prices dropped, and before you knew it hard drives went from 1Gbyte to 10Gbytes practically overnight. Soon a 30Gbyte drive was the default boot and data drive for every shipping PC, when just a few years before a 700Mbyte drive was the norm. That was a greater-than-10X improvement with the adoption of a new technology.

I remember a lot of other touted technologies that were added on at the same time. PRML (Partial Response Maximum Likelihood) and Perpendicular Magnetic Recording (PMR) both helped keep the ball rolling in terms of storage density. IBM even did some pretty advanced work sandwiching thin ruthenium layers between the magnetic layers to help create even stronger magnetic recording media for the newer, higher-density drives.

However, each of these incremental advances has now run its course, and the advances in storage technology are slowing down again. But there's still one shining hope: Bit-Patterned Media (BPM). In all the speculation about which technology is going to keep the storage density ball rolling, this new announcement is sure to play its part. A competing technique that uses lasers to heat the disk surface before writing data (heat-assisted magnetic recording) is also being researched and discussed, but it is likely to force a lot of storage vendors to agree to transition to that technology simultaneously. BPM, on the other hand, isn't so different and revolutionary that it must be rolled out en masse by every drive vendor at once to ensure everyone stays compatible. Better yet, BPM may be a much lower-cost and more immediate way to increase storage densities without incurring big equipment and manufacturing upgrade costs.

So I’m thinking we’ll be seeing BPM much more quickly and we’ll continue to enjoy the advances in drive density for a little while longer.

Categories
computers flash memory science & technology technology

Birck Nanotechnology Center – Ferroelectric RAM

Schematic drawing of original designs of DRAM ...
Image via Wikipedia

The FeTRAMs are similar to state-of-the-art ferroelectric random access memories, FeRAMs, which are in commercial use but represent a relatively small part of the overall semiconductor market. Both use ferroelectric material to store information in a nonvolatile fashion, but unlike FeRAMS, the new technology allows for nondestructive readout, meaning information can be read without losing it.

via Discovery Park – Birck Nanotechnology Center – News.

I'm always pleasantly surprised to read that work is still being done on alternative materials for Random Access Memory (RAM). I had been closely following developments in ferroelectric RAM by folks like Samsung and HP. Very few of those efforts promised enough return on investment to be developed into products, and some notable efforts by big manufacturers were abandoned altogether.

If this research effort can be licensed to a big chip manufacturer and not turned into a form of patent-trolling ammunition, I would feel the effort was not wasted. Too often these days patented technologies are not used as a means of advancing the art of computer technology; instead they become a portfolio for a litigator seeking rent on the patented technology.

Given how many projects in the alternative DRAM technology category have been abandoned, I'm hoping the compatibility of this chip's manufacturing process with existing chip-making technology will be the big step forward. A paradigm-shifting technology like ferroelectric or magnetic RAM might just push us to the next big mountain top of power conservation, performance and capability, the kind the CPU enjoyed from 1969 to roughly 2005, when chip speeds began to plateau.

Categories
gpu mobile science & technology

ARM vet: The CPU's future is threatened • The Register

8-inch silicon wafer with multiple intel Penti...
Image via Wikipedia

Harkening back to when he joined ARM, Segars said: “2G, back in the early 90s, was a hard problem. It was solved with a general-purpose processor, DSP, and a bit of control logic, but essentially it was a programmable thing. It was hard then – but by today's standards that was a complete walk in the park.”

He wasn’t merely indulging in “Hey you kids, get off my lawn!” old-guy nostalgia. He had a point to make about increasing silicon complexity – and he had figures to back it up: “A 4G modem,” he said, “which is going to deliver about 100X the bandwidth … is going to be about 500 times more complex than a 2G solution.”

via ARM vet: The CPU's future is threatened • The Register.

A very interesting look at the state of the art in microprocessor manufacturing: The Register talks with one of the principals at ARM, the folks who license their processor designs to almost every cell phone manufacturer worldwide. Looking at the trends in manufacturing, Simon Segars is predicting that sustained performance gains will be much harder to come by in the near future. Most advancement, he feels, will come from integrating more kinds of processing and coordinating the I/O between those processors on the same die. Which is kind of what Intel is attempting to do by integrating graphics cores, memory controllers and CPU all on one slice of silicon. But the software integration is the trickiest part, and Intel still sees fit to just add more general-purpose CPU cores to continue making new sales. Processor clocks stay pretty rigidly near the 3GHz boundary and have not shifted significantly since the end of the Pentium 4 era.

Note too the difficulty of scaling up as well as designing the next generation of chips. Referring back to my article from Dec. 21, 2010 on 450mm wafers (commentary on an Electronista article): Intel is the only company rich enough to scale up to the next size of wafer. Every step in the manufacturing process has become so specialized that the motivation to create new devices for manufacture and test just isn't there, because the total number of manufacturers who can scale up to the next largest size of silicon wafer is probably four companies worldwide. That's a measure of how exorbitantly expensive large-scale chip manufacturing has become. It seems more and more that a plateau is being reached in terms of clock speeds and the size of wafers finished in manufacturing. Within these limits, Simon Segars' thesis becomes even stronger.

Categories
cloud computers science & technology technology

David May, parallel processing pioneer • reghardware

INMOS T800 Transputer
Image via Wikipedia

The key idea was to create a component that could be scaled from use as a single embedded chip in dedicated devices like a TV set-top box, all the way up to a vast supercomputer built from a huge array of interconnected Transputers.

Connect them up and you had, what was, for its era, a hugely powerful system, able to render Mandelbrot Set images and even do ray tracing in real time – a complex computing task only now coming into the reach of the latest GPUs, but solved by British boffins 30-odd years ago.

via David May, parallel processing pioneer • reghardware.

I remember the Transputer. I remember seeing ISA-based add-on cards for desktop computers back in the 1980s. They would be advertised in the back of the popular computer technology magazines of the day. And while it seemed really mysterious what you could do with a Transputer, the price premium on those boards made you realize it must have been pretty magical.

Most recently, while attending a workshop on Open Source software, I met a couple of former employees of a famous manufacturer of camera film. In their research labs these guys used to build custom machines using arrays of Transputers to speed up image processing tasks inside the products they were developing. So knowing that there are now even denser architectures, built on chips like Tilera, Intel Atom and ARM, absolutely blows them away. The price/performance ratio of the old hardware doesn't come close.

Software was probably the biggest point of friction, in that the tools needed to integrate the Transputer into an overall design required another level of expertise. That is true too of the General-Purpose Graphics Processing Unit (GPGPU) computing that nVidia championed and now markets with its Tesla product line. And the Chinese have created a hybrid supercomputer mating Tesla boards with commodity CPUs. It's too bad that the economics of designing and producing the Transputer didn't scale over time (the way they have for Intel, by comparison). Clock speeds fell behind too, which allowed general-purpose microprocessors to spend their extra clock cycles performing the same calculations, only faster. That was also the advantage RISC chips had, until they could no longer overcome the performance increases Intel designed in.
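To make concrete why Mandelbrot rendering mapped so well onto an array of Transputers (and maps onto GPU cores today), here's a minimal sketch in plain Python, nothing Transputer-specific: every pixel's escape-time calculation is independent, so the work splits cleanly across however many processors you have. The image size and iteration limit are arbitrary numbers of my own.

    # Embarrassingly parallel Mandelbrot rendering: each pixel is computed
    # independently, so the workload spreads across an array of processors
    # (Transputers then, CPU/GPU cores now). Sizes below are arbitrary.
    from multiprocessing import Pool

    WIDTH, HEIGHT, MAX_ITER = 800, 600, 256

    def escape_time(pixel):
        px, py = pixel
        # Map pixel coordinates onto a window of the complex plane.
        c = complex(-2.5 + 3.5 * px / WIDTH, -1.25 + 2.5 * py / HEIGHT)
        z = 0j
        for i in range(MAX_ITER):
            z = z * z + c
            if abs(z) > 2.0:
                return i        # diverged: the point is outside the set
        return MAX_ITER         # treated as inside the set

    if __name__ == "__main__":
        pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
        with Pool() as pool:    # one worker process per available core
            image = pool.map(escape_time, pixels, chunksize=WIDTH)
        print(len(image), "pixel values computed")

On a Transputer array the carving-up and message passing was done in occam over the serial links rather than with a process pool, but the shape of the problem is the same.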

Categories
computers science & technology support vague interests

History of Sage

A screenshot of Sagemath working.
Image via Wikipedia

The Sage Project Webpage http://www.sagemath.org/

Sage is mathematical software, very much in the same vein as MATLAB, MAGMA, Maple, and Mathematica. Unlike these systems, every component of Sage is GPL-compatible. The interpreted language of Sage is Python, a mainstream programming language. Use Sage for studying a huge range of mathematics, including algebra, calculus, elementary to very advanced number theory, cryptography, numerical computation, commutative algebra, group theory, combinatorics, graph theory, and exact linear algebra.
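To give a flavor of what that looks like in practice, here is a short, hedged sketch of a Sage session; these are generic toy examples of mine, not taken from the Sage documentation:

    # A few one-liners of the kind Sage handles out of the box. Sage's
    # language is Python, with mathematical objects and functions such as
    # factor(), integrate() and EllipticCurve() built in. Save as a .sage
    # file and run it with the sage command.
    print(factor(2**64 - 1))                     # 3 * 5 * 17 * 257 * 641 * 65537 * 6700417
    x = var('x')
    print(integrate(x**2 * exp(-x), x, 0, oo))   # symbolic calculus: Gamma(3) = 2
    A = matrix(QQ, [[2, 1], [1, 2]])
    print(A.eigenvalues())                       # exact linear algebra: eigenvalues 1 and 3
    E = EllipticCurve('37a')                     # a rank-1 curve from Cremona's tables
    print(E.rank())                              # number theory: prints 1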

Explanation of what Sage does by the original author William Stein 

(Long – roughly 50 minutes)

Original developer: http://wstein.org/ and his history of Sage mathematical software development. Wiki listing: http://wiki.sagemath.org/ with a list of participating committers. Discussion lists for developers: mostly done through Google Groups with associated RSS feeds. Mercurial repository (start date Sat Feb 11 01:13:08 2006): Gonzalo Tornaria seems to have loaded the project in at this point. Current list of source code in Trac, with a listing of committers for the most recent release of Sage (4.7).

  • William Stein (wstein) Still very involved, based on frequency of commits
  • Michael Abshoff (mabs) Ohloh has him ranked second only to William Stein in commits and time on the project. He has now left the project, according to the Trac log.
  • Jeroen Demeyer (jdemeyer) commits a lot
  • J.H. Palmieri (palmieri) has done a number of tutorials and documentation; he's on the IRC channel
  • Minh Van Nguyen (nguyenminh2) has done some tutorials, documentation and work on the Categories module. He also appears to be the sysadmin on the Wiki
  • Mike Hansen (mhansen) is on the IRC channel irc.freenode.net#sagemath and is a big contributor
  • Robert Bradshaw (robertwb) has done some very recent commits

Changelog for the most recent release (4.7) of Sage. Moderators of irc.freenode.net#sagemath are Keshav Kini (who maintains the Ohloh info) & schilly@boxen.math.washington.edu. Big milestone release of version 4.7, with tickets listed here based on modules: Click Here. And the Ohloh listing of top contributors to the project. There's an active developer and end-user community. Workshops are tracked here; Sage Days workshops tend to be hackfests for interested parties. But more importantly, developers can read up on this page on how to get started and what the process is as a Sage developer.

Further questions that need to be considered. Look at the source repository and the developer blogs and ask the following questions:

  1. Who approves patches? How many people? (There’s a large number of people responsible for reviewing patches, if I had to guess it could be 12 in total based on the most recent changelog)
  2. Who has commit access? & how many?
  3. Who is involved in the history of the project? (That’s pretty easy to figure out from the Ohloh and Trac websites for Sage)
  4. Who are the principal contributors, and have they changed over time?
  5. Who are the maintainers?
  6. Who is on the front end (user interface) and back end (processing or server side)?
  7. What have been some of the major bugs/problems/issues that have arisen during development? Who is responsible for quality control and bug repair?
  8. How is the project’s participation trending and why? (Seems to have stabilized after a big peak of 41 contributors about 2 years ago; peak activity was 2009 and 2010 based on the Ohloh graph of commits.)

Note that the period the Gource visualization covers starts in 2009, while the earliest entry in the Mercurial repository I could find was from 2005. Sage was already a going concern before the Mercurial repository was put on the web, so the visualization doesn't show the full history of development.
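For the questions above about who commits and how participation has trended, a quick way to get raw numbers is to ask the repository itself rather than Ohloh. A minimal sketch, assuming a local clone of the Sage Mercurial repository sits in a ./sage directory (that path is just my placeholder) and that the hg command is installed:

    # Count commits per author in a local Mercurial clone and print the
    # ten most frequent committers. The ./sage path is a placeholder.
    import subprocess
    from collections import Counter

    log = subprocess.run(
        ["hg", "log", "--template", "{author}\n"],
        cwd="./sage", capture_output=True, text=True, check=True,
    ).stdout

    counts = Counter(line.strip() for line in log.splitlines() if line.strip())
    for author, n in counts.most_common(10):
        print(f"{n:6d}  {author}")

The same template trick with {date|shortdate} in place of {author} should give the commits-over-time trend behind question 8.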

Categories
blogtools cloud media navigation science & technology technology

Goal oriented visualizations? (via Erik Duval’s Weblog)

Charles Minard's 1869 chart showing the losses...
Image via Wikipedia

Visualizations and their efficacy always take me back to Edward Tufte's big hardcover books on infographics (or chart junk, when it's done badly). In terms of this specific category, visualization leading to a goal, I think it's still very much a 'general case'. But examples are always better than theoretical descriptions of an ideal. So while I don't have an example to give (which is what Erik Duval really wants), I can at least point to a person who knows how infographics get misused.

I'm also reminded somewhat of the most recent issue of Wired Magazine, where there's an article on feedback loops. How are goal-oriented visualizations different from, or better than, feedback loops? I'd say that's an interesting question to investigate further. The primary example given in that story is the radar-equipped speed limit sign. It doesn't tell you the posted speed; it merely tells you how fast you are going, and that by itself, apart from ticketing or making the speed limit signs more noticeable, did more to effect a change in behavior than any other option. So maybe a goal-oriented visualization could also benefit from some techniques like feedback loops?

Some of the fine fleur of information visualisation in Europe gathered in Brussels today at the Visualizing Europe meeting. Definitely worth to follow the links of the speakers on the program! Twitter has a good trace of what was discussed. Revisit offers a rather different view on that discussion than your typical twitter timeline. In the Q&A session, Paul Kahn asked the Rather Big Question: how do you choose between different design alterna … Read More

via Erik Duval’s Weblog

Categories
navigation science & technology technology

2WAY Q&A: Layar’s Maarten Lens-FitzGerald on Building a Digital Layer on Top of the World

Image representing Layar as depicted in CrunchBase
Image via CrunchBase

Lens-FitzGerald: I never thought of going into augmented reality, but cyberspace, any form of digital worlds, have always been one of the things I’ve been thinking about since I found out about science fiction. One of the first books I read of the cyber punk genre was Bruce Sterling‘s “Mirror Shades.” Mirror shades, meaning, of course, AR goggles. And that book came out in 1988 and ever since, this was my world.

via 2WAY Q&A: Layar’s Maarten Lens-FitzGerald on Building a Digital Layer on Top of the World.

An interview with the man who co-founded Layar, the most significant Augmented Reality (AR) application on handheld devices. In the time since the first releases on Android smartphones in Europe, Layar has branched out to cover more of the OSes available on handheld devices. Interest in AR, I think, has cooled somewhat as social networking and location have seemed to rule the day. And I would argue even location isn't as fiery hot as it was at the beginning. But Facebook is still here with a vengeance. So whither the market for AR? What's next, you wonder? Well, it seems Qualcomm today has announced its very own AR toolkit to help jump-start the developer market toward more useful, nay killer, AR apps. Stay tuned.

Categories
cloud computers data center science & technology technology

Tilera preps 100-core chips for network gear • The Register

One Blue Gene/L node board
Image via Wikipedia

Upstart multicore chip maker Tilera is using the Interop networking trade show as the coming out party for its long-awaited Tile-Gx series of processors, which top out at 100 cores on a single die.

via Tilera preps 100-core chips for network gear • The Register.

A further update on Tilera's product launches, as the old Interop trade show for network switch and infrastructure vendors is held in Las Vegas. They have tweaked the chip packaging of their CPUs and are now going to market different CPUs to different industries. This family of Tilera chips is called the 8000 series and will be followed by a next generation of 3000 and 5000 series chips. Projections are that by the time the Tilera 3000 series is released, the density of the chips will be sufficient to pack upwards of 20,000 Tilera CPU cores into a single 42-unit-tall, 19-inch-wide server rack, with a future revision possibly doubling that to 40,000 cores. That road map is very aggressive but promising, and it shows that there is lots of scaling possible with the Tilera product over time. Hopefully these plans will lead to some big customers signing up to use Tilera in shipping products in the near future.

What I'm most interested in knowing is how the currently shipping Quanta server that uses the Tilera CPU benchmarks against an Intel Atom-based or ARM-based server on a generic webserver benchmark. While white papers and press releases have made regular appearances on the technology weblogs, very few have attempted to get sample product and run it through its paces. I suspect, though I cannot confirm, that potential customers are given non-disclosure agreements and shipping samples to test in their data centers before making any big purchases. I also suspect that, as is often the case, the applications for these low-power, massively parallel, dense servers are very narrow, not unlike those for a supercomputer. IBM's Blue Gene supercomputers are built on PowerPC-derived cores with some extra optimizations and streamlining to make them run very specific workloads and algorithms faster. In a supercomputing environment you really need to tune your software to get the most out of the huge up-front investment in the 'iron' you got from the manufacturer. There's not a lot of value-add available in that scientific and supercomputing environment; you more or less roll your own solution, or beg, borrow or steal it from a colleague at another institution using the same architecture as you. So the Quanta S2Q server using the Tilera chip is similarly likely to be a one-off or niche product, but a very valuable one to those who purchase it. Tilera will need a software partner to really pump up the volumes of shipping product if they expect a wider market for their chips.
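For what it's worth, the 'generic webserver benchmark' I have in mind is nothing exotic; any load generator would do. Here's a minimal sketch of the shape of such a test (the target URL, worker count and request total are placeholders of mine, not anything tied to the Quanta box):

    # Crude webserver throughput test: hammer one URL with a pool of
    # concurrent workers and report requests per second. The URL, worker
    # count and request total below are placeholders for illustration.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://server-under-test.example/"    # hypothetical target
    WORKERS = 64
    REQUESTS = 5000

    def fetch(_):
        with urlopen(URL, timeout=10) as resp:
            return resp.status                   # status only, body discarded

    start = time.time()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        statuses = list(pool.map(fetch, range(REQUESTS)))
    elapsed = time.time() - start
    ok = sum(1 for s in statuses if s == 200)
    print(f"{ok}/{REQUESTS} OK in {elapsed:.1f}s, {REQUESTS / elapsed:.0f} req/s")

Run against the same static page served by identical web server software on each box, a figure like requests per second per watt would tell you far more than the press releases do.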

But using a Tilera processor in a network switch or a ‘security’ device or some other inspection engine might prove very lucrative. I’m thinking of your typical warrantless wire-tapping application like the NSA‘s attempt to scoop up and analyze all the internet traffic at large carriers around the U.S. Analyzing data traffic in real time prevents folks like NSA from capturing and having to move around large volumes of useless data in order to have it analyzed at a central location. Instead localized computing nodes can do the initial inspection in realtime keying on phrases, words, numbers, etc. which then trigger the capturing process and send the tagged data back to NSA for further analysis. Doing that in parallel with a 100 core CPU would be very advantageous in that a much smaller footprint would be required in the secret closets NSA maintains at those big data carriers operations centers. Smaller racks, less power makes for a much less obvious presence in the data center.