Category: blogroll

This is what I subscribe to myself.

  • Cargo-culting [managers are awesome / managers are cool when they’re part of your team] (tecznotes|Mike Migurski)

    Code for America logo (Photo credit: Wikipedia)

    This is incidentally what’s so fascinating about the government technology position I’m in at Code for America. I believe that we’re in the midst of a shift in power from abusive tech vendor relationships to something driven by a city’s own digital capabilities. The amazing thing about GOV.UK is that a government has decided it has the know-how to hire its own team of designers and developers, and exercised its authority. That it’s a cost-saving measure is beside the point. It’s the change I want to see in the world: for governments large and small to stop copy-pasting RFP line items and cargo-culting tech trends (including the OMFG Ur On Github trend) and start thinking for themselves about their relationship with digital communication.

    via managers are awesome / managers are cool when they’re part of your team (tecznotes).

    My apologies to Mike Migurski, the original article’s author. He mentioned cargo-culting only in passing while developing his larger thesis about different styles of managers. But the term was too good to pass up: it’s so descriptive, and so critical as to question the fundamental beliefs and arguments people make for wanting some new, New thing.

    Cargo-culting. Yeah baby. Now that’s what I’m talking about. I liken this to “fashion” and trends coming and going. For instance, where I work, digital signage is the must-have technology everyone is begging for. Giant displays with capacitive touch capability, like 70″ iPads strapped motionless and monolithic to a wall. That’s progress. Better yet, when they sit unattended and unused they become digital advertising. Yay! We win! It’s a win-win-win situation.

    Sadly the same is true in other areas that indirectly affect where I work. Trends in instructional technology follow cargo-culting patterns like flipping the classroom. Again, people latch onto something and have to have it regardless of the results or the benefits. The outcomes never really enter into the decision to acquire the “things” people want. Flipping a classroom is a non-trivial task: first you have to restructure how you teach the course. That’s a pretty steep requirement on its own, but the follow-on item is to record all your lectures in advance of the class meetings, where you will then work with students to find the gaps in their knowledge. Few ever do the first part, because what they really want is the seemingly easier task they can delegate: order up someone to record all my lectures, THEN I’ll flip my classroom. It’s a recipe for wasted effort and potential disaster.

    Don’t let yourself fall victim to cargo-culting in the workplace. Know the difference between what is new and what is useful. Everyone benefits when you can at least cast a hairy eyeball at the new, new thing and simply ask, why? Don’t settle for an Enron-like “Ask Why”, either. Keep working at the fundamental assumptions, arguments, justifications, and rationalizations for wanting the new, new thing. If it’s valid, worthy, and beneficial, it will stand up to the questioning. Otherwise it will dodge, skirt, shirk, bob and weave, and try to subvert the process of review (accelerated, fast-tracked).

  • DDR4 Heir-Apparent Makes Progress | EE Times

    The first DDR4 memory module was manufactured by Samsung and announced in January 2011. (Photo credit: Wikipedia)

    The current paradigm has become increasingly complex, said Black, and HMC is a significant shift. It uses a vertical conduit called through-silicon via (TSV) that electrically connects a stack of individual chips to combine high-performance logic with DRAM die. Essentially, the memory modules are structured like a cube instead of being placed flat on a motherboard. This allows the technology to deliver 15 times the performance of DDR3 at only 30% of the power consumption.

    via DDR4 Heir-Apparent Makes Progress | EE Times.

    Even though DDR4 memory modules have been available in quantity for only a short time, people are resistant to change. And the need for speed, whether it’s SSDs stymied by SATA 6Gbps throughput or systems married to DDR4 RAM modules, is still pretty constant. But many manufacturers and analysts wonder aloud, “isn’t this speed good enough?” That’s true to an extent: the current OSes and chipset/motherboard manufacturers are perfectly happy cranking out product supporting the current state of the art. But no one wants to be the first to keep pushing the ball of compute speed down the field. At least this industry group is attempting to get a plan in place for the next generation of DDR memory modules. With any luck the spec will continue to evolve and sample products will be sent ’round for everyone to review.
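    The excerpt’s headline numbers for HMC (15 times DDR3’s performance at 30% of the power) are worth unpacking, since taken together they imply a roughly 50x gain in performance per watt. A quick sanity check, using only the two figures quoted above:

    ```python
    # Sanity-checking the quoted HMC claim: 15x the performance of DDR3
    # at 30% of the power consumption.
    perf_ratio = 15.0   # HMC performance relative to DDR3
    power_ratio = 0.30  # HMC power relative to DDR3

    perf_per_watt_gain = perf_ratio / power_ratio
    print(f"Perf/watt vs DDR3: {perf_per_watt_gain:.0f}x")  # 50x
    ```

    That perf-per-watt figure, more than raw bandwidth, is why stacked-die designs are interesting for power-constrained servers.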

    Given the changes and advances in storage and CPUs (PCIe SSDs, 15-core Xeons), eventually a wall will be hit in compute per watt or raw I/O. Desktops will eventually benefit from any speed increases, but it will take time, and we won’t see 10% better with each generation of hardware. Prices will need to come down before mainstream consumer goods manufacturers adopt these technologies. But as previous articles have noted, the “time to idle” measurement that laptops and mobile devices strive for might be reason enough for tablet and laptop manufacturers to push the state of the art and adopt these technologies faster than desktops.

  • UW Researchers Create World’s Thinnest LED | EE Times

    Boron nitride (Photo credit: Wikipedia)

    The researchers harvested single sheets of tungsten selenide (WSe2) using adhesive tape, a technique invented for the production of graphene. They used a support and dielectric layer of boron nitride on a base of silicon dioxide on silicon, to come up with the thinnest possible LED.

    via UW Researchers Create World’s Thinnest LED | EE Times.

    Wow, it seems the current research in graphene has spawned at least one other possible application for its adhesive-tape technique: creating thin layers of homogeneous materials. This time it’s a crystalline semiconductor with possible applications in thin, flexible displays. As the article says, until now organic LED (OLED) has been the material of choice for thin and even flexible displays. It’s also reassuring that MIT was able to publish similar work in the same edition of Nature. Hopefully this will spur other researchers to put money and people into pushing this further.

    With early announcements like this, even in a fully vetted, edited science journal, we won’t see products derived from the new technology very soon. However, hope springs eternal for me, and just as with OLED, if this can be researched further and found superior in cost/performance, it will compete in the marketplace. I will say the fabrication steps the researchers used are pretty novel and show real creativity: a workable thin film produced quickly without inordinately expensive fabrication equipment. I’m thinking specifically of the electron-beam epitaxy systems folks have used for nano-material research. Like a 3D printer for atoms, these devices are a must-have for many electronics engineering and materials researchers, and they are notoriously slow (just like 3D printers) and expensive per finished job (also like 3D printers). The graphene approach to manufacturing research devices started with making strands of graphite filaments by firing a laser at a highly purified block of carbon; after many shots you might eventually get a shard of a graphene sheet. Using adhesive tape to “shear” a very pure layer of graphite into a graphene sheet was the lightning bolt: simple adhesive tape could yield a sufficiently homogeneous, workable layer of graphene to do real work. I feel there’s a similar affinity at work here for the researchers who used the same technique to make tungsten selenide thin films for their thin LEDs.

    Adhesive tape (Photo credit: Wikipedia)

  • AnandTech | Testing SATA Express And Why We Need Faster SSDs

    PCIe and PCI slots compared (Photo credit: Wikipedia)

    Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn’t 100% efficient and based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link, but remember that SATA 6Gbps isn’t 100% either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based but we should start to see some PCIe 3.0 drives next year. We don’t have efficiency numbers for 3.0 yet but I would expect to see nearly twice the bandwidth of 2.0, making +1GB/s a norm.

    via AnandTech | Testing SATA Express And Why We Need Faster SSDs.

    As I’ve watched the SSD market slowly grow and bloom, it seems the rate at which big changes occur has slowed. The SATA controllers on the drives themselves were kicked up a notch as the transition from SATA 3Gbps to SATA 6Gbps gave us consistent ~500MB/sec read/write speeds, and things have stayed there ever since due to the inherent limit of SATA 6Gbps. I had been watching developments in PCIe-based SSDs very closely, but prices were always artificially high because the market for these devices was data centers. Proof positive: Fusion-io catered mostly to two big purchasers of its product, Facebook and Apple. Consequently its prices always sat at the enterprise level, around $15K for a one-slot PCIe device (at any size/density of storage).
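    The quoted figures reduce to simple link arithmetic: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so each lane carries 500 MB/s before protocol overhead, and SATA 6Gbps likewise nets 600 MB/s raw. A quick sketch using AnandTech’s ~78% PCIe efficiency figure (the SATA efficiency here is my own back-calculation from their ~515MB/s observation, not a number they state):

    ```python
    # Back-of-envelope throughput for the links the excerpt compares.
    # PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> 500 MB/s usable per lane.
    def pcie2_throughput(lanes, efficiency=0.78):
        raw_mb_s = lanes * 5e9 * (8 / 10) / 8 / 1e6  # 500 MB/s per lane
        return raw_mb_s * efficiency

    # SATA 6Gbps also uses 8b/10b -> 600 MB/s raw; ~86% efficiency is
    # what it takes to land at the ~515 MB/s seen in practice.
    sata6 = 6e9 * (8 / 10) / 8 / 1e6 * 0.86

    print(f"PCIe 2.0 x2: ~{pcie2_throughput(2):.0f} MB/s")  # ~780 MB/s
    print(f"SATA 6Gbps:  ~{sata6:.0f} MB/s")                # ~516 MB/s
    ```

    The same arithmetic explains the article’s PCIe 3.0 projection: doubling the per-lane rate (and dropping 8b/10b for 128b/130b) roughly doubles the budget, hence the “+1GB/s as a norm” expectation.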

    Apple has come to the rescue in every sense of the word by adopting PCIe SSDs as the base-level SSD in its portable computers. In the summer of 2013 Apple started releasing MacBook Pro laptops with PCIe SSDs, then eventually designed them into the MacBook Air as well. The last step was to adopt them fully in the desktop Mac Pro (which has been slow to hit the market). The PCIe SSD in the Mac Pro delivers the highest performance of any shipping consumer-level computer. As the Mac gains market share among all computers shipped, Mac buyers are gaining more speed from their SSDs as well.

    So what further plans are in the works for the REST of the industry? SATA Express seems to be the way forward for the 90% of the market still buying Windows PCs, and it’s a new standard being put forth by the SATA-IO standards committee. With any luck the enthusiast motherboard manufacturers will adopt it as fast as it clears the committees, and we’ll see an AnandTech or Tom’s Hardware review doing a real benchmark and analysis of how well it matches up against previous-generation hardware.

  • The technical aspects of privacy – O’Reilly Radar

    Image representing Edward Snowden (Image via CrunchBase)

    The first of three public workshops kicked off a conversation with the federal government on data privacy in the US.

    by Andy Oram | @praxagora

    via The technical aspects of privacy – O’Reilly Radar.

    Interesting topic covering a wide range of issues. I’m so happy MIT sees fit to host a set of workshops on this and keep the pressure up. But as Andy Oram writes, the whole discussion at MIT was circumscribed by the notion that privacy as such doesn’t exist (an old axiom from Scott McNealy, ex-CEO of Sun Microsystems).

    No one at the MIT meeting advocated for users managing their own privacy. Andy Oram mentions the Vendor Relationship Management (VRM) movement (thanks to Doc Searls, co-author of The Cluetrain Manifesto) as one mechanism for individuals to pick and choose what info is shared and to what degree. People remain willfully clueless or ignorant of VRM as an option when it comes to privacy. The shades and granularity of VRM are far more nuanced than the binary debate of privacy versus security, and it’s sad this held true for the MIT meet-up as well.

    John Podesta’s call-in to the conference mentioned an existing set of rules for electronic data privacy, dating back to the early 1970s and the fear that mainframe computers “knew too much” about private citizens, known as the Fair Information Practices: http://epic.org/privacy/consumer/code_fair_info.html (thanks to the Electronic Privacy Information Center for hosting this page). These issues have always existed, just in different forms at earlier times. They are not new, they are old. But each time there’s a debate, we start all over as if it had never existed and never been addressed. If the Fair Information Practices rules are law, then all the case history and precedents set by those cases STILL apply to NSA and government surveillance.

    I did learn one new term from reading about the conference at MIT: differential privacy. Apparently it’s very timely, and research is being done in this category. Mostly it applies to datasets and other big data that need to be analyzed without uniquely identifying any individual in the dataset. You want to find out the efficacy of a drug without spilling the beans that someone has a “prior condition”. That’s the net effect of implementing differential privacy: you get the answer out of the dataset, but you never learn all the fields of the people that make up that answer. That sounds like a step in the right direction and should honestly apply to phone and Internet company records as well. Just because you collect the data doesn’t mean you should be able to free-wheel through it and do whatever you want. If you’re mining, you should only get the net result of the query rather than snooping through all the fields of each individual. That to me is the true meaning of differential privacy.
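    A minimal sketch of what differential privacy does mechanically (my own illustrative example, not anything from the article): add Laplace noise, scaled to the query’s sensitivity, to an aggregate count, so the analyst learns roughly how many records match but can never pin down whether any one individual is among them. The records and parameter values below are made up.

    ```python
    # Toy Laplace-mechanism sketch of a differentially private count query.
    import math
    import random

    def laplace_noise(scale):
        # Inverse-CDF sampling of a Laplace(0, scale) variate.
        u = random.random() - 0.5  # uniform in [-0.5, 0.5)
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def private_count(records, predicate, epsilon=0.5):
        true_count = sum(1 for r in records if predicate(r))
        # A count has sensitivity 1: adding or removing one person changes
        # it by at most 1, so Laplace noise with scale 1/epsilon masks any
        # single individual's presence.
        return true_count + laplace_noise(1.0 / epsilon)

    patients = [{"name": "a", "prior_condition": True},
                {"name": "b", "prior_condition": False},
                {"name": "c", "prior_condition": True}]

    # The analyst gets an approximate count of prior conditions,
    # never a list of who has one.
    print(private_count(patients, lambda p: p["prior_condition"]))
    ```

    Repeated queries average out to the true count, which is exactly why real deployments also track a cumulative "privacy budget" per analyst.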

  • Virtual Reality | Oculus Rift – Consumer Reports

    Oculus Intel (Photo credit: .michael.newman.)

    Imagine being able to immerse yourself in another world, without the limitations of a TV or movie screen. Virtual reality has been a dream for years, but judging by current trends, it may not be just a dream for much longer.

    via Virtual Reality | Oculus Rift – Consumer Reports.

    I won’t claim that when a technology gets written up in Consumer Reports it has “jumped the shark”, no. Instead I’d give Consumer Reports kudos for keeping tabs on others writing up and lauding the Oculus Rift VR headset. The device’s specifications continue to improve even before it hits the market. Hopes are still high that the price will be reasonable (really, it needs to cost no more than a bottom-of-the-line iPad if it’s to have any hope of taking off). Whether the price meets everyone’s expectations depends on the sources of the materials going into the headset, and the single most expensive items are the displays.

    OLED (organic LED) has been used in mobile phones to great effect; the displays use less power and have somewhat brighter color than backlit LCD panels. But they cost more, and the bigger the display the higher the cost. The developers of the Oculus Rift have now pushed the cost a little higher by choosing a very high refresh rate and low latency for the OLED screens in the headset. This came after a first wave of user feedback reporting too much lag, and subsequent headaches, from the screen not keeping up with head movements (a classic downfall of VR headsets no matter the display technology). However, Oculus has kept working on the lag in the current-generation headset, and by all accounts it’s nearly ready for public consumption. They may well have fixed the lag issue; most beta testers to date are complimenting the hardware changes. This might be the device that launches a thousand 3D headsets.
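    The lag-and-headache problem comes down to motion-to-photon latency, and the display’s frame interval puts a hard floor under it. A toy latency budget makes it clear why a higher refresh rate matters; the tracking and render times here are hypothetical placeholders, not Oculus’s actual figures:

    ```python
    # Rough motion-to-photon latency budget for a VR headset (illustrative
    # numbers only). VR guidance commonly targets staying under ~20 ms
    # total to avoid the nausea beta testers reported.
    def frame_time_ms(refresh_hz):
        return 1000.0 / refresh_hz

    def motion_to_photon_ms(refresh_hz, tracking_ms=2.0, render_ms=8.0):
        # Worst case: a fresh head pose just misses a frame and waits a
        # full refresh interval before the panel can display it.
        return tracking_ms + render_ms + frame_time_ms(refresh_hz)

    for hz in (60, 75, 90):
        print(f"{hz} Hz: ~{motion_to_photon_ms(hz):.1f} ms worst case")
    ```

    With the same tracking and rendering costs, only the jump in panel refresh rate pulls the worst case down toward the comfort threshold, which is presumably why the developers paid extra for fast OLED.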

    As 3D goes, the market and appeal may be very limited; that has historically been the case. Whether used in academia for data visualization or in the military for simulation, 3D virtual reality was an expensive niche catering to people with lots of money to spend. Because the Oculus Rift is targeted at a lower price range, with fantastic visual performance, who knows what market may follow its actual release. So as everyone is whipped into a frenzy over the final release of the Oculus Rift VR headset, keep an eye out: it’s going to be a hot item in limited supply for a while, I would bet. And yes, I would love to try one out myself, not just for gaming but for any of the as-yet-unseen applications it might have (like the next Windows or Mac OS?).

  • The Memory Revolution | Sven Andersson | EE Times

    A 256Kx4 Dynamic RAM chip on an early PC memory card. (Photo by Ian Wilson) (Photo credit: Wikipedia)

    In almost every kind of electronic equipment we buy today, there is memory in the form of SRAM and/or flash memory. Following Moores law, memories have doubled in size every second year. When Intel introduced the 1103 1Kbit dynamic RAM in 1971, it cost $20. Today, we can buy a 4Gbit SDRAM for the same price.

    via The Memory Revolution | Sven Andersson | EE Times

    Read now: a look back from an Ericsson engineer surveying the use of solid-state, chip-based memory in electronic devices. It is always interesting to know how these things started and evolved over time. Advances in RAM design and manufacture are the quintessential example of Moore’s Law, even more so than the advances in processors over the same period. Yes, CPUs are cool and very much a foundation on which everything else rests (especially dynamic RAM storage). But remember: Intel didn’t start out making microprocessors; it started as a dynamic RAM company just as DRAM was entering the market. That’s the experience from which Gordon Moore knew the rate at which change was possible in silicon-based semiconductor manufacturing.
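    The excerpt’s own data points (a 1 Kbit part for $20 in 1971, a 4 Gbit part for $20 when the piece ran in 2014) let you check the “doubled in size every second year” claim directly:

    ```python
    # Checking the article's Moore's-Law claim from its two price points:
    # 1 Kbit for $20 in 1971 vs 4 Gbit for $20 in 2014.
    import math

    bits_1971 = 1 * 1024       # 1 Kbit
    bits_2014 = 4 * 1024**3    # 4 Gbit
    years = 2014 - 1971

    doublings = math.log2(bits_2014 / bits_1971)  # 22 doublings
    print(f"{doublings:.0f} doublings in {years} years "
          f"-> one every {years / doublings:.2f} years")
    ```

    That works out to a doubling roughly every two years at constant price, which matches the article’s framing almost exactly.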

    Now we’re looking at mobile smartphone processors and Systems on Chip (SoCs) advancing the state of the art. Desktop and server CPUs are making incremental gains, but the smartphone is really trailblazing what’s possible. We went from combining the CPU with memory (so-called 3D memory) to adding graphics accelerators (GPUs) to the mix. Multiple cores and fully 64-bit CPU designs are entering the market (in the form of the latest iPhones). It’s not just a memory revolution, but memory was definitely a driver in the market as we migrated from magnetic core memory (the state of the art in 1951-52, developed at MIT) to the dynamic RAM chip (the state of the art in 1968-69). The drive to develop DRAM brought all the other silicon-based processes along with it, and all the boats were raised. So here’s to the DRAM chip that helped spur the revolution. Without those shoulders, the giants of today would have nothing to stand on.

  • Jon Udell on filter failure

    Jon Udell (Photo credit: Wikipedia)

    It’s time to engineer some filter failure

    Jon’s article points out his experience of the erosion of serendipity, or at least of opposing viewpoints, that social media (somewhat accidentally) enforces. I couldn’t agree more. One of the big promises of the Internet was that it was unimaginably vast and continuing to grow. The other big promise was that it was open in the way people could participate. There were no diktats or prescribed methods per se, just etiquette at best. There were FAQs to guide us and rules of thumb to keep us from embarrassing ourselves. But the Internet was something so vast one could never know or see everything out there, good or bad.

    But as in the Wild West, search engines began fencing in the old prairie, at once letting us get to the good stuff and waste less time doing important stuff. Therein lies the bargain of the “filter”: giving up control to an authority to help you do something with data or information. All the electrons and photons whizzing back and forth on the series of tubes exist all at once, available (more or less) all at once. But now, with social networks, as with AOL before them, we suffer from the side effects of the filter.

    I remember being an AOL member, finally caving in and installing the app from one of the free floppy disks I got in the mail at least once a week. I registered my credit card for the first free 20 hours (can you imagine?). And just like people who ‘try’ Netflix, I never unregistered. I lazily stayed the course and tried to get my money’s worth by spending more time online. At the same time, small mom-and-pop ISPs were renting out parts of the fractional T-1 leased lines they owned, putting up modem pools, and selling access to the “Internet”. Nobody knew why you would want that, with all teh kewl thingz one could do on AOL: shopping, chat rooms, news, stock quotes. It was ‘like’ the Internet, but not open and free and limitless like the Internet. And that’s where the failure begins to occur.

    AOL had to police its population and enforce some codes of conduct. They could kick you off or stop accepting your credit card payments. One could not be kicked off the ‘Internet’ in the same way, especially in those early days. But getting back to Jon’s point about filters that fail and let you see the whole world: discovering an opposing viewpoint, or better, multiple opposing viewpoints, is the promise of the Internet, and we’re seeing less and less of it as we corral ourselves into our favorite brand-name social networking communities. I skipped MySpace, but I did jump on Flickr and eventually Facebook. And in so doing I gave up a little of that wildcat freedom and frontier-like experience of dialing up a modem pool over PPP or SLIP and searching first on Yahoo, then AltaVista, then Google to find the important stuff.

  • Follow-Up – EETimes on SanDisk UltraDIMMs

    Image representing IBM (Image via CrunchBase)

    http://www.eetimes.com/document.asp?doc_id=1320775

    “The eXFlash DIMM is an option for IBM‘s System x3850 and x3950 X6 servers providing up to 12.8 TB of flash capacity. (Although just as this story was being written, IBM announced it was selling its x86 server business to Lenovo for $2.3 billion).”

    Sadly, it seems the party is over before it even got started for the sales and shipping of UltraDIMM-equipped IBM x86 servers. If Lenovo snatches up this product line, I’m sure all the customers will still be perfectly happy, but I worry that the level of innovation and product testing that led to the introduction of the UltraDIMM may slow.

    I’m not criticizing Lenovo for this; they have done a fine job taking over the laptop and desktop brands from IBM. But the motivation to keep creating new, early samples of very risky and untried technologies seems to be more IBM’s interest in maintaining its technological lead in the data center, and I don’t know how Lenovo figures into that equation. How much will Lenovo sell in the way of rackmount servers like the X6 line? And just recently there have been rumblings that IBM wants to sell off its long-standing semiconductor manufacturing business as well.

    It’s almost too much to think IBM would give up R&D in semiconductors. Outside of Bell Labs, IBM’s fundamental work in this field brought firsts like silicon-on-insulator, copper interconnects, and myriad others at ever smaller, finer design rules. While Intel followed its own process R&D agenda, IBM went its own way too, always trying to find advantage in its inventions. Albeit that blistering pace of patent filings means they will likely never see all the benefits of that research and development. At best IBM can only hope to enforce its patents in a Nathan Myhrvold-like way, filing lawsuits against all infringers to protect its intellectual property. That will be a sad day for all of us who marveled at what they demoed, prototyped, and manufactured. So long IBM, hello IBM Global Services.

  • MOOCs! and the Academy, where the hype meets the roadway

    Crowd in Willis Street, Wellington, awaiting the results of the 1931 general election (Photo credit: National Library NZ on The Commons)

    http://campustechnology.com/articles/2014/01/27/inside-the-first-year-data-from-mitx-and-harvardx.aspx – Campus Technology

    “While 50 percent of MOOC registrants dropped off within a week or two of enrolling, attrition rates decreased substantially after that window.”

    So with a 50% attrition rate, everyone has to keep in mind that those overwhelmingly large enrollments are not representative of the typical definition of the word “student”. They are shopping. They are consumers who, once they find something is not to their taste, whisk away to the next most interesting thing. It’s hard to say what impact this has on people “waiting in line” if there’s a cap on total enrollees, though unlimited enrollment seems to be the norm for this style of teaching, as does unlimited length of time: you can register after the course has completed. That, however, throws off the measurement of dropping out, since the registration occurs outside the window when the class was actively conducted. So there are still a lot of questions to be answered, and more experiments to design that factor out the idiosyncrasies of these open online fora.
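    The enrollment pattern the article describes (half of registrants gone within a week or two, then much slower attrition) is easy to model. The rates below are hypothetical placeholders I chose for illustration, not figures from the MITx/HarvardX data:

    ```python
    # Toy two-phase attrition model of MOOC enrollment (hypothetical rates):
    # a steep drop in the first two weeks, then slow weekly attrition.
    def remaining(enrolled, weeks, early_drop=0.50, weekly_attrition=0.03):
        # Half the registrants leave within the first two weeks...
        active = enrolled * (1 - early_drop)
        # ...then attrition slows to a few percent per week.
        for _ in range(max(0, weeks - 2)):
            active *= 1 - weekly_attrition
        return active

    start = 100_000
    for w in (2, 8, 14):
        print(f"week {w:2d}: ~{remaining(start, w):,.0f} still active")
    ```

    The point of the sketch is the shape, not the numbers: once the “shoppers” filter out early, the remaining cohort behaves much more like conventional students, which is why measuring attrition from day-one registrations is misleading.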

    There is an interesting Q&A interview after the article’s opening summary with one of the primary MOOC researchers, Andrew Ho of the Harvard Graduate School of Education. It’s hard to gauge “success” or to get accurate demographic information to help analyze the behavior of MOOC enrollees. The second year of the experiments will hopefully yield better results; conclusions should wait for the second round. But Ho emphasizes that we need data from a wider sample than just Harvard and MIT to confirm or guide further research on the large-scale Massive Open Online Course (MOOC). As the cliché goes, the jury is still out on the value added by offering real college courses in the MOOC format.
