Blog

  • Simon’s Watchmakers and the Future of Courseware

    Mike Caulfield’s essay on Open Educational Resources, and what it would take to have remixable, reusable resources even for the courses of niche majors.

    Hapgood

    Herbert Simon, a Nobel Laureate known for his work in too many areas to count, used to tell a story of two watchmakers, Tempus and Hora. In the story, Tempus and Hora make watches of similar complexity and both watches become popular; but as demand grows, one watchmaker expands and becomes rich, while the other is driven slowly out of business.

    What accounts for the difference? A closer look at their designs reveals the cause. The unsuccessful watchmaker (Tempus) has an assembly of a thousand parts, and for the watch to work these must all be assembled at once; interruptions force the watchmaker to start over again from scratch. To finish a watch, the watchmaker needs a large stretch of uninterrupted time.

    The other watchmaker (Hora) has chosen a different model for her watch: she uses subassemblies. So while there are…


  • Digital Images And The Amiga — Hackaday

    https://youtube.com/watch?v=ArvV1KPXYiA

    There was a time in the late 80s and early 90s where the Amiga was the standard for computer graphics. Remember SeaQuest? That was an Amiga. The intro to Better Call Saul? That’s purposefully crappy, to look like it came out of an Amiga. When it comes to the Amiga and video, the first thing that comes to…

    via Digital Images And The Amiga — Hackaday

    In 1994, when I first matriculated into Visual Studies Workshop in Rochester, NY, the Media Center there had a number of Amiga 1000s and one Amiga 3000, used exclusively in the edit bays and video editing rooms. I remember using a genlock box with an Amiga 1000 to generate titles with drop shadows for all my grad video projects. It was amazing how easy it was to do things that would take another six years or more to do as easily in iMovie on the Mac. Those were the days.

  • Chromecast Vintage TV Is Magic — Hackaday

    https://youtube.com/watch?v=Bkuh3pqbn9o

    When [Dr. Moddnstine] saw a 1978 General Electric TV in the trash, he just had to save it. As it turned out, it still worked! An idea hatched — what if he could turn it into a vintage Chromecast TV? He opened up the TV and started poking around inside. We should note that old…

    via Chromecast Vintage TV Is Magic — Hackaday

    Seems like everyone wants to make their own version of the “Console Living Room” à la Jim Groom, formerly of the University of Mary Washington.

    Call for 1980s Furniture: The Console Living Room Exhibit

  • From Bryan Alexander-Future Trends Forum #9 with Gardner Campbell: full recording, notes, and Storify — Bryan Alexander

    https://youtube.com/watch?v=br4dJDJkNW4

    Last week we had Gardner Campbell on the Future Trends Forum, and the discussion hurtled along. Gardner, participants, and I explored pedagogy, the power of the hyperlink, data, instructors, institutions, eportfolios, language, students, assessment, a great card deck, our personal histories, and a lot more. Twitter activity started well, became excited, then spilled over past the […]

    via Future Trends Forum #9 with Gardner Campbell: full recording, notes, and Storify — Bryan Alexander

  • Several CAPI-Enabled Accelerators for OpenPOWER Servers Revealed — AnandTech

    Over a dozen special-purpose accelerators compatible with next-generation OpenPOWER servers that feature the Coherent Accelerator Processor Interface (CAPI) were revealed at the OpenPOWER Summit last week. These accelerators aim to help encourage the use of OpenPOWER based machines for technical and high-performance computing. Most of the accelerators are based on Xilinx high-performance FPGAs, but some…

    via Several CAPI-Enabled Accelerators for OpenPOWER Servers Revealed — AnandTech

  • This one photo… Piles of Things — CogDogBlog

    Bear with me as I step with trepidation into philosophical murk with this question: If one accumulates a great deal of small quantifiable things, does it necessarily, by accumulation, equate to something larger, more complex? Huh? Get to the tl;dr dude! No way. I am never that organized. I might not even be sure what…

    via Piles of Things — CogDogBlog

    I originally saw this photo above at the National Museum of the American Indian. While Alan Levine uses it metaphorically (on his CogDogBlog), I would rather tell you what it is. It’s bison skulls, collected and piled taller than a barn. Who killed them? Why did they kill so many of them? What purpose could killing so many living things serve? Let’s pin that one on a form of ethnic cleansing of the prairie lands. Lands that had already been given to the Indians, in trade or under threats of violence, to get them off more valuable land. But even that wasn’t enough. Every treaty and agreement was further eroded and re-evaluated. Indian Schools were put into operation. And bounties were placed on killing as many buffalo as one could shoot. So that’s what people did. Can you imagine a “rancher” these days if someone drove through even just the public “grazing” land and proceeded to shoot the cattle? Wouldn’t you imagine law enforcement getting involved, someone investigating the killing of the rancher’s livestock? But not so when it was buffalo on the prairie lands. There was no investigation, no involvement from law enforcement. Driving the bison to extinction was a choice, and all the people in power got on board. It’s so crazy that this ever happened anywhere on this planet. But it did.

  • Pathfinding — Q’s from Eric Meyer

    This is a thing I’ve been trying to figure out in my spare time, mostly noodling about in my head with various ideas when I have some down time, and now I want to know if there’s a formal answer of some sort. It goes like this: in a lot of situations, ranging from airplane…

    via Pathfinding — Thoughts From Eric

    The comments section is fairly informative on this question posed by Eric. There are algorithms for pathfinding, but it sounded like Eric wanted paths for things in motion: what if you cannot stop and change direction point to point? One answer suggested breaking the route into smaller parts to decide which stretches of a path are straight point-to-point and which approximate a Bézier curve, with smoothness in between points. I’d say that’s closer to how typical objects move, even in a two-dimensional plane, because of speed and momentum. People, balls, and bullets will always follow a somewhat Bézier-like curve; you just have to move the “handles” to approximate the magnitudes and directions.
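    To make that concrete, here’s a minimal Python sketch (my own illustration, not anything from Eric’s post or its comments) of waypoints connected by cubic Bézier segments whose handles extend along the direction of travel and grow with speed, so a moving object never has to stop and pivot at a corner:

    ```python
    # Illustrative sketch: smooth a point-to-point waypoint path with cubic
    # Bezier segments whose "handles" follow the direction of travel.

    def bezier_point(p0, p1, p2, p3, t):
        """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
        u = 1.0 - t
        x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
        y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
        return (x, y)

    def smooth_path(waypoints, speed=1.0, samples=10):
        """Turn straight point-to-point segments into one smooth path.

        Each waypoint's handle points along the average of its incoming and
        outgoing directions; handle length grows with speed, which is the
        momentum effect described above.
        """
        # Tangent at each waypoint: direction between its neighbors
        # (endpoints just use their single adjoining segment).
        tangents = []
        for i in range(len(waypoints)):
            prev_p = waypoints[max(i - 1, 0)]
            next_p = waypoints[min(i + 1, len(waypoints) - 1)]
            dx, dy = next_p[0] - prev_p[0], next_p[1] - prev_p[1]
            length = (dx * dx + dy * dy) ** 0.5 or 1.0
            tangents.append((dx / length, dy / length))

        path = [waypoints[0]]
        for i in range(len(waypoints) - 1):
            p0, p3 = waypoints[i], waypoints[i + 1]
            seg_len = ((p3[0] - p0[0]) ** 2 + (p3[1] - p0[1]) ** 2) ** 0.5
            handle = seg_len * 0.3 * speed  # faster object -> wider turns
            p1 = (p0[0] + tangents[i][0] * handle, p0[1] + tangents[i][1] * handle)
            p2 = (p3[0] - tangents[i + 1][0] * handle, p3[1] - tangents[i + 1][1] * handle)
            for s in range(1, samples + 1):
                path.append(bezier_point(p0, p1, p2, p3, s / samples))
        return path

    print(smooth_path([(0, 0), (10, 0), (10, 10)], speed=1.0)[:5])
    ```

    The `0.3 * speed` handle length is an arbitrary knob I picked for the sketch: a faster object gets longer handles, and therefore wider, more momentum-like turns through each waypoint.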

  • Intel Stops The Tick-Tock Clock

    Although Intel has been producing chips based on the tick-tock pace for roughly a decade now, the last several ticks and tocks have not gone quite according to plan. The system began to break down after the Ivy Bridge tick. Ivy Bridge went off the beaten path a bit by bringing out a significantly improved iGPU architecture and a moderate improvement to CPU performance over the Sandy Bridge tock, which Intel referred to as a “tick+.”

    via Intel Stops The Tick-Tock Clock

    Consider this part 2 of a 3-part series looking at desktop computer technologies. Last week I wrote in detail about Samsung’s latest product announcements in their SSD line (specifically M.2 form factor storage). My conclusion then was that a sort of “topping out” is occurring, slowly and spontaneously, across the different key industries that all funnel into computer manufacturing. Last week it was storage; today it’s the CPU.

    The big notice was that all the tech news sites simultaneously took Intel’s latest product announcements and turned them into an interesting “story”. The timing couldn’t have been more fortuitous, as the former head of Intel, Andy Grove, passed away almost simultaneously with the story coming out. The importance of this cannot be overstated, as Intel has controlled the narrative continuously since before their first-generation CPU was being marketed the world over. Intel’s brain trust, at the time they left Fairchild to form Intel, had recognized a trend in manufacturing. Gordon Moore is credited with giving it words, but everyone in Silicon Valley could “sense” or see it themselves. The rate at which designers and manufacturers were improving their products correlated directly with the power requirements and the design-rule size of the transistors on the wafers. Each new generation of design rules made the features smaller. The byproduct was that the same devices (capacitor, transistor, gate) would use less power but could also run at a higher clock rate. Higher clocks mean faster data moving around for a slight increase in price. The price difference was due to re-equipping the manufacturing lines to handle the redesigned wafers. Other process improvements included using larger wafers to hold all the dies needing to be processed. Wafers went from 1″ to 2″, 4″, 6″, 8″, and 12″, and each time a new generation of wafer was adopted, everyone retooled their production lines. Prices increased for the first generation of the new chips, but would eventually fall as R&D was plowed into making the next-generation chip.

    Moore’s Law, as it became known, held that roughly every two years the components on a chip would double in number as they shrank in size and ran faster. Intel believed it and codified it into their schedules from 1971 onward, and there’s even a graph showing how closely they stuck to it, at least until the last year or two. That’s when bumps started to form as the chip design rules approached the minuscule 20-14nm feature sizes. For years everyone knew that, as different mixes of chip materials and processes came along, CMOS would hit a wall (CMOS itself didn’t really become king until the late 1970s). No amount of physics, electrical engineering, or materials engineering expertise could get around the limits of electrons moving through smaller and smaller features on a silicon chip. You can keep shrinking the features, but the electrons stay the same size and behave less predictably than they would at larger design rules. Losses of energy through the gates or the oxide layers required more tricks (FinFET designs for gates, silicon-on-insulator for the oxide). At one time copper interconnects were the only way to keep things going as features got smaller. Lots of R&D was spent trying to find more reliable light sources for exposing the photolithography masks used to etch features into the silicon wafers. There was talk of extreme UV and X-ray light sources, phase-shift masks, a lot of science, but none of it could keep the train rolling forever.
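    To put rough numbers on that scaling story, here’s a back-of-the-envelope sketch (my own arithmetic using the idealized classic scaling rules, not any Intel figures) of what a typical ~0.7x linear shrink predicts for density, clock, and power per transistor:

    ```python
    # Idealized Dennard/Moore scaling, purely illustrative: shrink every
    # linear feature by factor k and see what classic scaling predicts.

    def ideal_shrink(old_nm, new_nm):
        k = new_nm / old_nm                 # linear scale factor, e.g. ~0.7
        return {
            "density_gain": 1 / k**2,       # transistors per area scale as 1/k^2
            "clock_gain": 1 / k,            # gate delay ~ k, so clock ~ 1/k
            "power_per_transistor": k**2,   # C*V^2*f ~ k * k^2 * (1/k) = k^2
        }

    # A classic ~0.7x node step roughly doubles density -- the Moore's Law cadence.
    for old, new in [(90, 65), (65, 45), (22, 14)]:
        r = ideal_shrink(old, new)
        print(f"{old}nm -> {new}nm: density x{r['density_gain']:.1f}, "
              f"clock x{r['clock_gain']:.2f}, power/transistor x{r['power_per_transistor']:.2f}")
    ```

    It’s exactly those ideal exponents that stopped holding at small feature sizes, which is why the tricks above (FinFET, silicon-on-insulator, copper interconnects) became necessary.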

    So now even Intel’s vaunted schedule for CPU improvements is leveling out too, not unlike the wall Samsung is hitting with its NAND chips in SSDs. Things are slowing down and hitting an even keel. Improvements in chips will be spread over ever-widening time periods across a larger family of products that are more similar to the previous generation. You may not see a 10% improvement in any aspect of a new chip on its release: no 10% power reduction, speed improvement, or feature-size shrink. We’ll be lucky to get 2-3% changes in any one of those critical design aspects we always associated with Moore’s Law. The best we can hope for is gradual price reductions and possibly greater reliability over time, maybe. But 5 GHz clocks, 80-core CPUs, and 1-nanometer feature sizes are NOT going to happen. In all the articles written last week, the only room for improvement stated categorically was to dump CMOS (complementary metal-oxide-semiconductor) altogether. Instead, moving to a more expensive, exotic material (indium gallium arsenide) is the suggested logical way forward. These compound semiconductors have long been used in military work, high-performance gear, and radiation-hardened chips. Until now, there was no justification to commercialize these technologies for the consumer market. But how is Intel going to pay its bills unless it can charge more for each new generation of CPU? It will be interesting, but I suspect, as I wrote last week, that we’re seeing a long, slow flattening and plateau of the desktop computer as we know it. Storage and CPU have hit their peaks. I dare say we’re close to that in the one remaining desktop technology too: the GPU (stay tuned for that next week).

  • Samsung Shows Off SM961 and PM961 SSDs: OEM Drives Get a Boost

    The Samsung SM961 will be Samsung’s new top-of-the-range M.2 SSD line for OEMs, which will be offered in 128 GB, 256 GB, 512 GB and 1 TB configurations (by contrast, the SM951 family did not include a 1 TB option). The drive will be based on Samsung’s MLC V-NAND as well as the company’s Polaris controller. Samsung is specing the SM961 at up to 3200 MB/s for sequential reads and up to 1800 MB/s for sequential writes, but does not specify which models will boast with such numbers. The new SSDs can perform up to 450K random read IOPS as well as up to 400K random write IOPS, which looks more like performance of server-grade SSDs.

    via Samsung Shows Off SM961 and PM961 SSDs: OEM Drives Get a Boost.

    Originally, 6-7 years ago, when SSDs were offered in very small sizes, say 32 GB or 40 GB, it was painfully obvious they were a niche product. If you could get Windows XP to run in 32 GB and save all your data to a USB thumb drive, you might be able to downgrade a laptop and gain some speed. Ditching a spinning magnetic hard drive all but guaranteed a boost (even 10% might be worth the trouble of swapping to a smaller SSD). Data rates, too, improved each time a new product was announced. Independent hardware designers like Crucial and OCZ were putting together various suppliers’ chipsets and NAND controllers and getting reviews on all the hardware comparison sites. Everyone was waiting for the next-generation SandForce NAND controller to see how much better it would be at random reads and writes. Progress was slow but steady; prices were fairly steady too, with no big drops as new-generation hardware hit the shelves of online storefronts.

    Eventually bigger names got in the game: longtime stalwart SanDisk, and then Intel and Samsung too. Top performers went after the high, high end with flash drives running on PCI Express and getting double to triple the speed of a SATA-based SSD. Eventually SATA became SATA 2, and PCI Express became PCIe 2.0. All of this fed into higher performance at still very steady prices, especially on the high end. You almost always had to pay around $1,200 to get a halfway speedy PCIe SSD. But the true benefit was the amount of performance you got over the AHCI/SATA interface built into the motherboard. I wrote, linked, and commented a lot on the way forward, urging anyone reading to keep looking ahead to the performance end of the spectrum.

    By the time Samsung and Intel really started ramping up product, the middle ground and bottom feeders of the SSD market were starting to compete on price alone. SATA-based SSDs hit a hard limit of roughly 250 MB/sec, later up to ~500 MB/sec, reads and writes; and more likely it stayed below 500 MB/sec, the practical upper limit of the SATA interface (designed, in no small part, around spinning magnetized platters). Suffice it to say Samsung and Intel could bide their time, making new NAND chips and memory controllers and plopping them into new products for the high and middle SSD markets. Eventually they could license the same technologies to the bottom end of the market after the price premiums were collected on sales of the newly announced products. It became clear Intel and Samsung owned SSDs, and eventually they would dominate PCIe SSDs and the nascent M.2 form factor for small, light laptops too.

    All of that is what preceded today’s press release by Samsung. This product announcement (quoted at the top of this article) follows fast on the heels of yesterday’s (Tuesday, March 22) announcement of another SSD. That one was, in essence, an SSD on a chip. By itself that announcement was reason enough to share with anyone I knew who follows technology news. But today’s announcement (the more interesting one) further emphasizes that Samsung is on a tear, single-handedly pushing the technology to its physical limits. What does that mean?

    The original SSDs we knew from 6-7 years ago ran in the range of 150 MB/sec random read/write. That eventually topped out and has stayed at ~500 MB/sec; a SATA-based SSD will never exceed that limit. PCIe 3.0 devices with four data lanes have a top limit too, and the M.2 form factor can use that very interface as its bridge into the motherboard. The fastest consumer-level PCIe SSDs I remember seeing were always in the 1,000-1,200 MB/sec range, and again they were expensive, always around $1,200 USD. But now, as of today, Samsung has designed and produced a device that does 3,200 MB/sec and comes close to saturating the design spec for PCIe 3.0 at x4. The IOPS figures, the benchmark measure of any storage technology, are also massive with the new devices. Again, 6-7 years ago there were shootout/benchmark competitions to see who could build the first 1-million-IOPS storage array. This was for enterprise architects who had Fibre Channel switches, SAS spinning disks running in parallel behind all kinds of RAID storage controllers, and file servers doing the tests.

    As a first, 1 million IOPS (input/output operations per second) was a big deal, and millions of dollars were spent trying to hit that mark, or to show it could be hit with some mix of a vendor’s products. Today, with Samsung’s announcement of the SM961 M.2 drive, you’ve got, in the size of a stick of gum, a device that will theoretically do nearly half a million IOPS. Two devices paired up in a RAID 0 configuration should do close to 1 million IOPS in the space of “two sticks of gum”. That is a giant leap over the 6-7 years in which NAND memory controllers, chip packaging, and memory cells have been redesigned. Theoretically speaking, hard drives are no longer the performance choke point for a desktop or even a laptop computer; we have storage that can move data in and out of memory at 3,200 MB/sec. What’s the next hurdle? What possible hard limit in computer technology is the next one that needs to be addressed? Is it power? Does all this need to run off a thin, light lithium-ion battery? I don’t know, but I can imagine there’s some design team working on it right now. I just feel we’re coming to the end of a long evolution in the fundamental building blocks of desktop PCs. It’s all small refinements and polishing now that the heavy lifting is done.
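    For the curious, here’s the quick arithmetic behind those claims (my own back-of-the-envelope numbers, using the standard PCIe 3.0 signaling rates and Samsung’s quoted figures from the excerpt above):

    ```python
    # Back-of-the-envelope check on the bandwidth and IOPS claims above.

    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> usable bytes/sec per lane.
    GT_PER_SEC = 8e9
    lane_bytes = GT_PER_SEC * (128 / 130) / 8      # ~0.985 GB/s per lane
    x4_limit_mb = 4 * lane_bytes / 1e6             # ~3,940 MB/s for an x4 link

    sm961_read_mb = 3200                           # Samsung's quoted sequential read
    print(f"PCIe 3.0 x4 ceiling: {x4_limit_mb:,.0f} MB/s; "
          f"SM961 at {sm961_read_mb} MB/s uses {sm961_read_mb / x4_limit_mb:.0%} of it")

    # IOPS: two drives striped in RAID 0 roughly add their random throughput
    # (ideal case, ignoring controller overhead).
    read_iops, write_iops = 450_000, 400_000
    print(f"RAID 0 pair, ideal: {2 * read_iops:,} read IOPS, {2 * write_iops:,} write IOPS")

    # A single drive's 450K random reads at 4 KB each is already serious bandwidth.
    print(f"450K x 4KB = {450_000 * 4096 / 1e9:.1f} GB/s")
    ```

    Even one drive’s 450K random reads at 4 KB apiece works out to roughly 1.8 GB/s, which is why the interface, rather than the NAND itself, is becoming the ceiling.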

  • The Deconstructed Dissertation

    From Dr. Laura Gogia @VCU, who’s been dissertating for a while and is now thinking, “what else can I do with all this research on Connected Learning?”

    It’s been 17 days since I’ve successfully defended my dissertation.  Since then, I’ve made my edits, published the dissertation under a CC-BY-SA license on four platforms (ProQues…

    Source: The Deconstructed Dissertation