-
Very interesting article on fabbing electronics
Recently we started a series on the components used to assemble a circuit board. The first issue was on dispensing solder paste. Moving down the assembly line, with the paste already on the board, the next step is getting the components onto the PCB. We’re just going to address SMT components in this issue, because…
-
Another round of Future Trends with Bryan Alexander: Open Ed and Creative Commons
Where does open education stand in 2016? On April 14th Cable Green, many participants, and I explored this question on the 11th Future Trends Forum. The chat box went wild with observations, lightning-fast links, and questions. Twitter discussion went well, so I Storified it here. You can find our video and audio recording at YouTube, […]
via Open education’s long revolution: Cable Green on Future Trends Forum #11 — Bryan Alexander
-
Simon’s Watchmakers and the Future of Courseware
Mike Caulfield’s essay on Open Educational Resources, and what it would take to have remixable, reusable resources even for courses in niche majors.
Herbert Simon, a Nobel Laureate known for his work in too many areas to count, used to tell a story of two watchmakers, Tempus and Hora. In the story Tempus and Hora make watches of similar complexity, both watches become popular, but as one watch becomes popular the watchmaker expands and becomes rich, and as the other becomes popular the maker is driven slowly out of business.
What accounts for the difference? A closer look at their designs reveals the cause. The unsuccessful watchmaker (Tempus) has an assembly of a thousand parts, and for the watch to work these must all be assembled at once; interruptions force the watchmaker to start over again from scratch. For a watch to be finished, the watchmaker needs a large stretch of uninterrupted time.
The other watchmaker (Hora) has chosen a different model for her watch: she uses subassemblies. So while there are…
-
Digital Images And The Amiga — Hackaday
There was a time in the late 80s and early 90s where the Amiga was the standard for computer graphics. Remember SeaQuest? That was an Amiga. The intro to Better Call Saul? That’s purposefully crappy, to look like it came out of an Amiga. When it comes to the Amiga and video, the first thing that comes to…
via Digital Images And The Amiga — Hackaday
In 1994, when I first matriculated into Visual Studies Workshop in Rochester NY, the Media Center there had a number of Amiga 1000s and one Amiga 3000 used exclusively in the edit bays and video editing rooms. I remember using a genlock box with an Amiga 1000 to generate titles with drop shadows for all my grad video projects. It was amazing how easy it was to do things that would take another six years or more to do as easily in iMovie on the Mac. Those were the days.
-
Chromecast Vintage TV Is Magic — Hackaday
When [Dr. Moddnstine] saw a 1978 General Electric TV in the trash, he just had to save it. As it turned out, it still worked! An idea hatched — what if he could turn it into a vintage Chromecast TV? He opened up the TV and started poking around inside. We should note that old…
via Chromecast Vintage TV Is Magic — Hackaday
Seems like everyone wants to make their own version of the “Console Living Room,” à la Jim Groom, formerly of the University of Mary Washington.
-
From Bryan Alexander: Future Trends Forum #9 with Gardner Campbell: full recording, notes, and Storify — Bryan Alexander
Last week we had Gardner Campbell on the Future Trends Forum, and the discussion hurtled along. Gardner, participants, and I explored pedagogy, the power of the hyperlink, data, instructors, institutions, eportfolios, language, students, assessment, a great card deck, our personal histories, and a lot more. Twitter activity started well, became excited, then spilled over past the […]
via Future Trends Forum #9 with Gardner Campbell: full recording, notes, and Storify — Bryan Alexander
-
Several CAPI-Enabled Accelerators for OpenPOWER Servers Revealed — AnandTech
Over a dozen special-purpose accelerators compatible with next-generation OpenPOWER servers that feature the Coherent Accelerator Processor Interface (CAPI) were revealed at the OpenPOWER Summit last week. These accelerators aim to help encourage the use of OpenPOWER based machines for technical and high-performance computing. Most of the accelerators are based on Xilinx high-performance FPGAs, but some…
via Several CAPI-Enabled Accelerators for OpenPOWER Servers Revealed — AnandTech
-
This one photo,.. Piles of Things — CogDogBlog
Bear with me as I step with trepidation into philosophical murk with this question: If one accumulates a great deal of small quantifiable things, does it necessarily, by accumulation, equate to something larger, more complex? Huh? Get to the tl;dr dude! No way. I am never that organized. I might not even be sure what…
via Piles of Things — CogDogBlog
I originally saw the photo above at the National Museum of the American Indian. While Alan Levine is using it metaphorically (on his CogDogBlog), I would rather tell you what it is. It’s bison skulls, collected and piled taller than a barn. Who killed them? Why did they kill so many of them? What purpose could killing so many living things serve? Let’s pin that one on a form of ethnic cleansing of the prairie lands. Lands that had already been given to the Indians in trade, or under threat of violence to get them off more valuable land. But even that wasn’t enough. Every treaty and agreement was further eroded and re-evaluated. Indian Schools were put into operation. And bounties were placed on killing as many buffalo as one could shoot. So that’s what people did. Can you imagine a “rancher” these days if someone drove through even just the public “grazing” land and proceeded to shoot the cattle? Wouldn’t you imagine law enforcement getting involved, someone investigating the killing of the rancher’s livestock? But not so when it was buffalo on the prairie lands. There was no investigation, no involvement from law enforcement. Driving the bison to extinction was a choice, and all the people in power got on board. It’s so crazy that this ever happened anywhere on this planet. But it did.
-
Pathfinding — Q’s from Eric Meyer
This is a thing I’ve been trying to figure out in my spare time, mostly noodling about in my head with various ideas when I have some down time, and now I want to know if there’s a formal answer of some sort. It goes like this: in a lot of situations, ranging from airplane…
via Pathfinding — Thoughts From Eric
The comments section is fairly informative on this question posed by Eric. There are algorithms for pathfinding, but it sounded like Eric wanted paths for things in motion: what if you cannot stop and change direction point to point? One answer suggested breaking the route into smaller parts to work out which stretches are straight point-to-point runs and which approximate a Bézier curve, giving smoothness in between the points. I’d say that’s closer to how typical objects move, even in a two-dimensional plane, because of speed and momentum. People, balls, and bullets will always follow a somewhat Bézier-like curve; you just have to move the “handles” to approximate the magnitudes and directions.
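Here is a rough Python sketch of that idea, my own illustration rather than anything from Eric’s post or the commenters; the function names and the sample route are made up for the example. Each interior waypoint acts as a Bézier “handle,” and the path curves between the midpoints of the legs on either side of it, so the moving object never has to stop dead and pivot.

```python
def quadratic_bezier(p0, p1, p2, steps=20):
    """Sample a quadratic Bezier curve from p0 to p2 with control point p1."""
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        points.append((x, y))
    return points

def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def smooth_path(waypoints, steps=20):
    """Round off each interior waypoint: curve between the midpoints of the
    segments on either side of it, using the waypoint itself as the handle."""
    if len(waypoints) < 3:
        return list(waypoints)
    path = [waypoints[0], midpoint(waypoints[0], waypoints[1])]
    for i in range(1, len(waypoints) - 1):
        start = midpoint(waypoints[i - 1], waypoints[i])
        end = midpoint(waypoints[i], waypoints[i + 1])
        # skip the first sample so consecutive segments don't duplicate points
        path.extend(quadratic_bezier(start, waypoints[i], end, steps)[1:])
    path.append(waypoints[-1])
    return path

# Example: an L-shaped point-to-point route becomes a rounded turn.
route = [(0, 0), (10, 0), (10, 10)]
print(smooth_path(route, steps=4))
```

The handle placement here is the simplest possible choice; a game or simulation would scale the handles by speed so a faster-moving object sweeps a wider turn.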
-
Intel Stops The Tick-Tock Clock
Although Intel has been producing chips based on the tick-tock pace for roughly a decade now, the last several ticks and tocks have not gone quite according to plan. The system began to break down after the Ivy Bridge tick. Ivy Bridge went off the beaten path a bit by bringing out a significantly improved iGPU architecture and a moderate improvement to CPU performance over the Sandy Bridge tock, which Intel referred to as a “tick+.”
via Intel Stops The Tick-Tock Clock
Consider this part 2 of a 3-part series looking at desktop computer technologies. Last week I wrote in detail about the latest Samsung product announcements in their SSD product line (specifically M.2 form factor storage). My conclusion then was that there’s a sort of “topping out” occurring slowly, spontaneously, across the different key industries that all funnel into computer manufacturing. Last week it was storage; today it’s the CPU.
What got my attention was that all the tech news sites simultaneously took Intel’s latest product announcements and turned them into an interesting “story.” Timing couldn’t have been more fortuitous, as former Intel head Andy Grove passed away almost simultaneously with the story coming out. The importance of this cannot be overstated, as Intel has controlled the narrative continuously since even before its first-generation CPU was being marketed the world over. Intel’s brain trust, at the time they left Fairchild to form Intel, had recognized a trend in manufacturing. Gordon Moore is credited with giving it words, but everyone in Silicon Valley could “sense” or see it themselves. The rate at which designers and manufacturers were improving their product correlated directly with the power requirements and the design-rule size of the transistors on the wafers. Each new generation of design rule made the features smaller. The byproduct was that the same devices (capacitor, transistor, gate) would use less power but could also run at a higher clock rate. Higher clocks mean data moving around faster, for a slight increase in price. The price difference was due to re-equipping manufacturing lines to handle the redesigned wafers. Other process improvements included using larger wafers to hold all the die needing to be processed. Wafers went from 1″ to 2″, 4″, 6″, 8″, and 12″, and each time a new generation of wafer was adopted, everyone retooled their production lines. Prices were higher for the first generation of new chips, but would eventually fall as R&D was plowed into making the next-generation chip.
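That “smaller features, less power, higher clocks” relationship is the classic constant-field (Dennard) scaling rule. Here is a back-of-the-envelope sketch, my own illustration rather than anything from the article, assuming a nominal linear shrink of about 1.4× per process generation (i.e. area halving):

```python
# Ideal constant-field ("Dennard") scaling per generation with shrink factor k:
#   gate delay   ~ 1/k    -> clocks can rise ~k
#   power/device ~ 1/k^2  -> power density stays roughly constant
#   devices/area ~ k^2    -> more transistors in the same silicon

def scale_node(clock, power_per_device, density, k=1.4):
    """Return the ideal next-generation figures after one shrink by factor k."""
    return clock * k, power_per_device / k**2, density * k**2

clock, power, density = 1.0, 1.0, 1.0   # normalized starting point
for gen in range(1, 6):
    clock, power, density = scale_node(clock, power, density)
    print(f"gen {gen}: clock x{clock:.2f}, power/device x{power:.3f}, "
          f"density x{density:.2f}, power density x{power * density:.2f}")
```

The point of the printout is the last column: in the ideal case power density never rises, which is exactly the part of the bargain that broke down at the smallest feature sizes.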
Moore’s Law, as it became known, held that roughly every two years the components on a chip would shrink, double in number, and run faster. Intel believed it and codified it into their schedules from 1971 onward, and there’s even a graph showing how close they came to sticking with it, at least until the last year or two. That’s when bumps started to form and the design rules got down to the minuscule 20–14 nm feature sizes. For years everyone knew that, as the mixes of chip materials and processes changed, CMOS (which didn’t really become king until the mid-to-late 1970s) would hit a wall. No amount of physics, electrical engineering, or materials engineering expertise could get around the limits of electrons moving through smaller and smaller features on a silicon chip. You can keep shrinking the features, but the electrons stay the same size and behave less predictably than they do at larger feature sizes. Losses of energy through the gates or the oxide layers required more tricks (FinFET designs for gates, silicon-on-insulator for the oxide). At one point switching to copper interconnects was the only way to keep things going as features got smaller. Lots of R&D was spent trying to find more reliable light sources for exposing the photolithography masks used to etch features into the silicon wafers. There was talk of extreme UV and X-ray light sources, phase-shift masks; a lot of science, but none of it could keep the train rolling forever.
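As a rough illustration of the schedule Intel codified (my own numbers, assuming the commonly cited ~2,300 transistors in the 1971 Intel 4004 and a doubling roughly every two years):

```python
# Projected transistor counts under a strict "double every two years" schedule,
# starting from the Intel 4004's roughly 2,300 transistors in 1971.

START_YEAR, START_COUNT = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year):
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_COUNT * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2016):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

By the mid-2010s the projection runs into the billions, which is why every missed tick or tock now makes headlines.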
So now even Intel’s vaunted schedule for CPU improvements is leveling out, not unlike the wall Samsung is hitting with its NAND chips in SSDs. Things are slowing down and settling onto an even keel. Improvements in chips will be spread out over ever-widening time periods, across a larger family of products that are more similar to the previous generation. You may not see a 10% improvement in any aspect of a new chip on its release: no 10% power reduction, speed improvement, or feature-size shrink. We’ll be lucky to get 2-3% changes in any one of those critical design aspects we always associated with Moore’s Law. The best we can hope for is gradual price reductions and possibly greater reliability over time, maybe. But 5 GHz clocks, 80-core CPUs, and 1-nanometer feature sizes are NOT going to happen. In all the articles written last week, the only room for improvement stated categorically was to dump CMOS (complementary metal oxide semiconductor) altogether. Going instead to a more expensive, exotic material (indium gallium arsenide) is the suggested logical way forward. These compound semiconductors have been used in military work since the 1950s for high-performance gear and radiation-hardened chips. Until now there was no justification to commercialize these technologies for the consumer market. But how is Intel going to pay its bills unless it can charge more for each new generation of CPU? It will be interesting, but I suspect, as I wrote last week, that we’re seeing a long slow flattening and plateau of the desktop computer as we know it. Storage and CPU have hit their peaks. I dare say we’re close to that too in the one remaining technology in the desktop computer: the GPU (stay tuned for that next week).