Category: technology

General technology, not anything in particular

  • Pathfinding — Q’s from Eric Meyer

    This is a thing I’ve been trying to figure out in my spare time, mostly noodling about in my head with various ideas when I have some down time, and now I want to know if there’s a formal answer of some sort. It goes like this: in a lot of situations, ranging from airplane…

    via Pathfinding — Thoughts From Eric

    The comments section on this question of Eric’s is fairly informative. There are plenty of pathfinding algorithms, but it sounded like Eric wanted paths for things already in motion; he emphasized the case where you cannot stop and change direction at each point. One answer suggested breaking the route into smaller segments, treating some as straight point-to-point runs and approximating the rest with Bézier curves so the transitions between points stay smooth. I’d say that is closer to how typical objects actually move, even in a 2-dimensional plane, because of speed and momentum. People, balls, and bullets all follow somewhat Bézier-like curves; you just have to place the “handles” to approximate the magnitudes and directions involved. A rough sketch of that idea follows below.
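
    To make the “handles” idea concrete, here is a minimal Python sketch of my own (not anything from Eric’s post or the comments) that rounds off each interior waypoint with a quadratic Bézier blend, so a moving object never has to stop and pivot. The waypoints and the 0.25 handle fraction are invented for the example.

      # A minimal sketch: round off the corner at each interior waypoint with a
      # quadratic Bezier curve so a moving object never has to stop and turn on
      # a point. The example waypoints and the 0.25 "handle" fraction are
      # assumptions made up for illustration.

      def quad_bezier(p0, p1, p2, t):
          """Point on a quadratic Bezier at parameter t in [0, 1]."""
          x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
          y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
          return (x, y)

      def smooth_path(waypoints, handle=0.25, samples=8):
          """Replace each interior corner with a Bezier blend.

          `handle` is how far along each leg the curve starts and ends, i.e.
          where the control "handles" sit relative to the corner point.
          """
          path = [waypoints[0]]
          for a, b, c in zip(waypoints, waypoints[1:], waypoints[2:]):
              # Entry/exit points sit `handle` of the way back/forward along each leg.
              enter = (b[0] + (a[0] - b[0]) * handle, b[1] + (a[1] - b[1]) * handle)
              exit_ = (b[0] + (c[0] - b[0]) * handle, b[1] + (c[1] - b[1]) * handle)
              path.append(enter)
              for i in range(1, samples):
                  path.append(quad_bezier(enter, b, exit_, i / samples))
              path.append(exit_)
          path.append(waypoints[-1])
          return path

      if __name__ == "__main__":
          route = [(0, 0), (4, 0), (4, 3), (8, 3)]  # hypothetical waypoints
          for pt in smooth_path(route):
              print(f"({pt[0]:.2f}, {pt[1]:.2f})")

    Pushing the handle fraction toward 0.5 gives wider, more momentum-like arcs; pulling it toward 0 gives tighter, more point-to-point turns.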

  • Intel Stops The Tick-Tock Clock

    Although Intel has been producing chips based on the tick-tock pace for roughly a decade now, the last several ticks and tocks have not gone quite according to plan. The system began to break down after the Ivy Bridge tick. Ivy Bridge went off the beaten path a bit by bringing out a significantly improved iGPU architecture and a moderate improvement to CPU performance over the Sandy Bridge tock, which Intel referred to as a “tick+.”

    via Intel Stops The Tick-Tock Clock

    Consider this part 2 of a 3-part series looking at desktop computer technologies. Last week I wrote in detail about the latest Samsung product announcements in their SSD line (specifically M.2 form factor storage). My conclusion then was that there’s a sort of “topping out” occurring slowly and spontaneously across the different key industries that all funnel into computer manufacturing. Last week it was storage; today it’s the CPU.

    The big notice was that all the tech news sites simultaneously took Intel’s latest product announcements and turned them into an interesting “story”. The timing couldn’t have been more fortuitous, as the former head of Intel, Andy Grove, passed away almost simultaneously with the story coming out. The importance of this cannot be overstated, as Intel has controlled the narrative continuously since before their first-generation CPU was being marketed the world over. Intel’s brain trust realized a trend in manufacturing back when they left Fairchild to form Intel. Gordon Moore is credited with putting it into words, but everyone in Silicon Valley could “sense” or see it themselves: the rate at which designers and manufacturers were improving their products correlated directly with the power requirements and the feature size of the transistors on the wafers.

    Each new generation of design rules made the features smaller. The byproduct was that the same devices (capacitor, transistor, gate) would use less power but could also run at a higher clock rate. Higher clocks meant data moving around faster for a slight increase in price. The price difference was due to re-equipping the manufacturing lines to handle the redesigned wafers. Other process improvements included using larger wafers to hold all the die needing to be processed. Wafers went from 1″ to 2″, 4″, 6″, 8″, and 12″, and each time a new generation of wafer was adopted, everyone retooled their production lines. Prices rose for the first generation of new chips, but eventually fell as R&D was plowed into making the next-generation chip.

    Moore’s Law, as it became known, held that roughly every two years the number of components on a chip would double, with each generation shrinking in size and running faster. Intel believed it and codified it into their schedules from 1971 onward, and there’s even a graph showing how closely they stuck to it, at least until the last year or two. That’s when bumps started to form, as the design rules closed in on minuscule 20-14 nm feature sizes. For years everyone knew that, whatever mix of chip materials and processes came along, CMOS (which didn’t really become king until the mid-to-late 1970s) would eventually hit a wall. No amount of physics, electrical engineering, or materials engineering expertise could get around the limits of electrons moving through smaller and smaller features on a silicon chip. You can shrink the features down indefinitely, but the electrons stay the same size and behave less predictably than they would at larger feature sizes. Energy losses through the gates and the oxide layers required more tricks (FinFET designs for the gates, silicon-on-insulator for the oxide). At one point switching to copper interconnects was the only way to keep things going as features got smaller. Lots of R&D went into finding more reliable light sources for exposing the photolithography masks used to etch features into the silicon wafers. There was talk of extreme UV and X-ray light sources and phase-shift masks, a lot of serious science, but none of it could keep the train rolling forever. A back-of-the-envelope version of that compounding curve is sketched below.
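
    Just to show the scale of what that two-year doubling implies, here is a tiny Python sketch with my own numbers (not Intel’s roadmap), compounding transistor counts forward from the roughly 2,300-transistor Intel 4004 of 1971; the doubling period and the 2015 cutoff are assumptions for illustration.

      # Back-of-the-envelope Moore's Law: assume the transistor count doubles
      # roughly every two years, starting from the ~2,300-transistor Intel 4004
      # of 1971. The doubling period and the 2015 cutoff are assumptions.

      start_year, start_count = 1971, 2_300
      doubling_period_years = 2

      for year in range(start_year, 2016, 10):
          doublings = (year - start_year) / doubling_period_years
          count = start_count * 2 ** doublings
          print(f"{year}: ~{count:,.0f} transistors per chip")

    Run it and 1971’s 2,300 transistors compound to roughly a couple of billion by 2011, which is about where real high-end CPUs landed; it is the flattening of that curve that the rest of this post is about.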

    So now even Intel’s vaunted schedule for CPU improvements is leveling out, not unlike the wall Samsung is hitting with its NAND chips in SSDs. Things are slowing down and hitting an even keel. Improvements in chips will be spread out over ever-widening time periods and across a larger family of products that look more and more like the previous generation. You may not see a 10% improvement in any aspect of a new chip on its release: no 10% power reduction, speed improvement, or feature size shrinkage. We’ll be lucky to get 2-3% changes in any one of those critical design aspects we always associated with Moore’s Law. The best we can hope for is gradual price reductions and possibly greater reliability over time, maybe. But 5 GHz clocks, 80-core CPUs, and 1-nanometer feature sizes are NOT going to happen. In all the articles written last week, the only room for improvement stated categorically was to dump CMOS (complementary metal-oxide semiconductor) altogether. Moving instead to a more expensive, exotic material (indium gallium arsenide) is the suggested logical way forward. These compound semiconductors have long been used for military work in high-performance and radiation-hardened chips. Until now, there was no justification to commercialize these technologies for the consumer market. But how is Intel going to pay its bills unless it can charge more for each new generation of CPU? It will be interesting, but I suspect, as I wrote last week, that we’re seeing a long, slow flattening and plateau of the desktop computer as we know it. Storage and CPU have hit their peaks. I dare say we’re close to that too in the one last remaining technology in the desktop computer: the GPU (stay tuned for that next week).

  • Samsung Shows Off SM961 and PM961 SSDs: OEM Drives Get a Boost

    The Samsung SM961 will be Samsung’s new top-of-the-range M.2 SSD line for OEMs, which will be offered in 128 GB, 256 GB, 512 GB and 1 TB configurations (by contrast, the SM951 family did not include a 1 TB option). The drive will be based on Samsung’s MLC V-NAND as well as the company’s Polaris controller. Samsung is specing the SM961 at up to 3200 MB/s for sequential reads and up to 1800 MB/s for sequential writes, but does not specify which models will boast with such numbers. The new SSDs can perform up to 450K random read IOPS as well as up to 400K random write IOPS, which looks more like performance of server-grade SSDs.

    via Samsung Shows Off SM961 and PM961 SSDs: OEM Drives Get a Boost.

    Originally, 6-7 years ago, when SSDs were offered in very small sizes, say 32 GB or 40 GB, it was painfully obvious they were a niche product. If you could get Windows XP to run in 32 GB and save all your data to a USB thumb drive, you might be able to downgrade a laptop and gain some speed. Ditching a spinning magnetic hard drive all but guaranteed a boost (even 10% might be worth the trouble of swapping to a smaller SSD). Data rates improved each time a new product was announced. Independent hardware designers like Crucial and OCZ were putting together various suppliers’ NAND chips and controllers and getting reviews on all the hardware comparison sites. Everyone was waiting for the next-generation SandForce NAND controller to see how much better it would be at random reads and writes. Progress was slow but steady; prices were fairly steady too, with no big drops as new-generation hardware hit the shelves of online storefronts.

    Eventually bigger names got in the game, like longtime stalwart SanDisk, and then Intel and Samsung too. The top performers went after the high, high end with flash drives running on PCI Express and getting double to triple the speed of a SATA-based SSD. Eventually SATA became SATA II (and later SATA III), and PCI Express became PCIe 2.0. All of these entered into the mix of higher performance and still very steady prices, especially on the high end. You almost always had to pay around $1,200 to get a halfway speedy PCIe SSD, but the true benefit was the amount of performance you got over the AHCI/SATA interface built into the motherboard. I wrote, linked, and commented a lot on the way forward, urging anyone reading this to keep looking ahead to the performance end of the spectrum.

    By the time Samsung and Intel really started ramping up product, the middle ground and the bottom feeders of the SSD market were starting to compete on price alone. SATA-based SSDs hit a hard ceiling, starting around ~250 MB/sec and topping out near 500 MB/sec for reads and writes. In practice it stayed below roughly 550 MB/sec, the physical upper limit of the SATA interface itself, an interface designed in no small part around spinning magnetic platters (the arithmetic behind that ceiling is sketched below). Suffice it to say Samsung and Intel could bide their time making new NAND chips and memory controllers and dropping them into new products for the high and middle SSD markets. Eventually they could license the same technologies to the bottom end of the market after the price premiums were collected on sales of the newly announced products. It became clear Intel and Samsung owned SATA SSDs, and eventually they would dominate PCIe SSDs and the nascent M.2 form factor for small, light laptops as well.
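
    For anyone curious where that ~550 MB/sec figure comes from, here is a quick Python sanity check of my own (not from the source article): SATA III signals at 6 Gb/s but spends 10 line bits for every 8 data bits (8b/10b encoding), and the remaining protocol overhead I fold in is an assumed figure.

      # Why SATA SSDs plateau around 550 MB/s: SATA III runs at 6 Gb/s on the
      # wire, 8b/10b encoding leaves 80% of that as payload, and protocol and
      # framing overhead (the 8% here is my assumption) trims it a bit more.

      line_rate_bps = 6e9                                 # SATA III signaling rate
      payload_mb_s = line_rate_bps * (8 / 10) / 8 / 1e6   # MB/s after 8b/10b
      practical_mb_s = payload_mb_s * 0.92                # assumed protocol overhead

      print(f"Theoretical SATA III payload: {payload_mb_s:.0f} MB/s")
      print(f"Realistic ceiling:            ~{practical_mb_s:.0f} MB/s")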

    All of that is what preceded today’s press release from Samsung. This product announcement (quoted at the top of this article) follows fast on the heels of yesterday’s (Tuesday, March 22) announcement of another SSD. That one was, in essence, an SSD on a single chip, and by itself it was reason enough to share with anyone I know who follows technology news. But today’s announcement (the more interesting one) further emphasizes that Samsung is on a tear and is single-handedly pushing the technology to its physical limits. What does that mean?

    The original SSDs we knew from 6-7 years ago ran in the range of 150 MB/sec for reads and writes. That eventually topped out and has stayed at ~500 MB/sec; a SATA-based SSD will never exceed that limit. PCIe 3.0 devices with 4 data lanes have a top limit too, and the M.2 form factor can use that very interface as its bridge into the motherboard. The fastest consumer-level PCIe SSDs I remember seeing were always in the 1,000 MB/sec range, as high as 1,200 MB/sec, and again they were expensive, always around $1,200 USD. But now, as of today, Samsung has designed and produced a device that does 3,200 MB/sec and comes close to saturating the design spec for PCIe 3.0 at 4x (a rough comparison against that ceiling follows below). The IOPS figures, the benchmark measure of any storage technology, are also massive on the new devices. Again, 6-7 years ago there were shootout/benchmark competitions to see who could build the first 1-million-IOPS storage array. That was for enterprise architects with Fibre Channel switches, SAS spinning disks running in parallel behind all kinds of RAID storage controllers, and file servers doing the tests.
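
    Here is that rough comparison, again my own arithmetic rather than Samsung’s: PCIe 3.0 moves 8 GT/s per lane with 128b/130b encoding, and I ignore packet and protocol overhead, so the real usable ceiling sits somewhat lower than the number printed here.

      # How close 3,200 MB/s gets to the PCIe 3.0 x4 ceiling. PCIe 3.0 signals
      # at 8 GT/s per lane with 128b/130b encoding; packet/protocol overhead is
      # ignored, so the usable ceiling is actually a bit lower than this.

      lanes = 4
      transfers_per_sec = 8e9                  # 8 GT/s per lane (PCIe 3.0)
      encoding = 128 / 130                     # 128b/130b line encoding
      ceiling_mb_s = lanes * transfers_per_sec * encoding / 8 / 1e6

      drive_seq_read_mb_s = 3200               # quoted SM961 sequential read
      print(f"PCIe 3.0 x4 payload ceiling: ~{ceiling_mb_s:.0f} MB/s")
      print(f"SM961 sequential read:        {drive_seq_read_mb_s} MB/s "
            f"({drive_seq_read_mb_s / ceiling_mb_s:.0%} of the ceiling)")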

    As a first, 1 million IOPS (input/output operations per second) was a big deal, and millions of dollars were spent trying to hit that mark, or to show it could be hit with some mix of a vendor’s products. Today, with Samsung’s announcement of the SM961 M.2 drive, you’ve got a device the size of a stick of gum that will theoretically do half of 1 million IOPS. Two devices paired up in a RAID 0 config should do ~1 million IOPS in the space of “two sticks of gum” (a rough conversion of those numbers is below). That is a giant leap for the 6-7 years over which NAND-based memory controllers, chip packaging, and memory cells have been designed and redesigned. All one can say is that now, theoretically speaking, hard drives are no longer the performance choke point for a desktop or even a laptop computer. We have storage that can move data in and out of memory at over 3,000 MB/sec. What’s the next hurdle? What possible hard limit in computer technology is the next one that needs to be addressed? Is it power? Does all this need to run off of a thin, light lithium-ion battery? I don’t know, but I can imagine there’s some design team working on it right now. I just feel we’re coming to the end of a long evolution in the fundamental building blocks of desktop PCs. It’s all small refinements and polishing now that the heavy lifting is done.
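
    To put those IOPS claims into throughput terms (my own conversion, with an assumed 4 KiB transfer size since vendors typically quote 4K random numbers, and ideal RAID 0 scaling that ignores controller overhead):

      # Convert the quoted random-read IOPS into MB/s, assuming 4 KiB per
      # operation (my assumption), and show what a hypothetical two-drive
      # RAID 0 pair adds up to under ideal scaling.

      iops_per_drive = 450_000        # quoted SM961 random read IOPS
      io_size_bytes = 4 * 1024        # assumed 4 KiB per operation
      drives = 2                      # hypothetical RAID 0 pair

      throughput_mb_s = iops_per_drive * io_size_bytes / 1e6
      print(f"One drive:   {iops_per_drive:,} IOPS ~ {throughput_mb_s:,.0f} MB/s at 4 KiB")
      print(f"RAID 0 pair: ~{iops_per_drive * drives:,} IOPS (ideal scaling)")

    Even at a small 4 KiB I/O size, one drive’s random reads already approach two gigabytes per second, which is why the “stick of gum” comparison stings for those old million-IOPS racks.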

  • The Deconstructed Dissertation

    From Dr. Laura Gogia @VCU, who has been dissertating for a while and is now thinking, “What else can I do with all this research on Connected Learning?”

    It’s been 17 days since I’ve successfully defended my dissertation.  Since then, I’ve made my edits, published the dissertation under a CC-BY-SA license on four platforms (ProQues…

    Source: The Deconstructed Dissertation

  • Snapchat 101

    I think it’s the so-called “ephemeral” nature of ANY content that young people find attractive. But wasn’t there a breach this past year where some of that “ephemeral” content got leaked? I mean, how secure is this stuff if it’s meant to be guarded or protected based on how long it’s available? This feels very much like a Yik Yak-style fashionable app that’s going to vanish unexpectedly.

  • End-user programmers are at least half of all programmers

    I’ve known web designers who merely lay out pages in CSS and mark up “copy” with HTML tags. In their view they are “coding”, even though they are just formatting and marking up. The use of that term among the creative types doing web and mobile apps is a long, slippery slope. A scope creep, to be sure.

    Computing Ed Research – Guzdial's Take

    I was intrigued to see this post during CS Ed Week from ChangeTheEquation.org. They’re revisiting the Scaffidi, Shaw, and Myers question from 2005 (mentioned in this blog post).

    You may be surprised to learn that nearly DOUBLE the number of workers use computing than originally thought.  Our new research infographic shows that 7.7 million people use complex computing in their jobs — that’s 3.9 million more than the U.S. Bureau of Labor and Statistics (BLS) reports. We examined a major international dataset that looks past job titles to see what skills people actually use on the job. It turns out that the need for complex computer skills extends far beyond what the BLS currently classifies as computer occupations. Even more reason why computer science education is more critical than ever!

    Source: The Hidden Half | Change the Equation

    ChangeTheEquation.org is coming up with a much lower estimate…

  • Reverse Engineering The iPhone’s Ancestor

    Source: Reverse Engineering The iPhone’s Ancestor

    An interesting brief account of Advanced RISC Machines (ARM) and its current role as the CPU in a lot of devices. Do read the comments section; it’s better than the original story, as the historical accounts there are more detailed.

  • Why Facebook Won, and Other Hard Truths

    Some great points by Mike Caulfield. UXD (User Experience Design) is what really sets “social media” apart from its precursors. I’ll buy that.

    Hapgood

    A lot of people have been tweeting and emailing me and DM-ing me the recent Guardian piece by Iran’s “blogfather”.

    You should read it yourself, but in short it is the story of a man sent to jail for blogging in Iran at the height of blogging’s influence and coming out of jail many years later to find that Facebook, Twitter, and Instagram have “killed the web.”

    In a related conversation yesterday we were talking about the New York Times article featuring a cast of merry Luddites talking about escaping the endless grind of Facebook and the stream.

    And yes, The Stream Won. I get it. But when you ask revolutionaries *why* it won the answer you get, more or less, is that Evil Facebook and Twitter and Instagram hid the truth from the larger population and fooled them into thinking they wanted Evil Facebook and Instagram and Twitter…

  • The Functions of Education-Technology Criticism

    Audrey Watters (Source: The Functions of Education-Technology Criticism)

    A really good piece of writing about the oftentimes over-promotion of educational technology over real, actual learning. Much appreciated on my part, and worth a read if you find yourself in the unenviable position of questioning what it is we actually DO here in higher ed.

  • A new lease on life

    Wow, now that’s a happy ending if I ever read one. I remember Jon’s earlier posts about his pain. Glad to know he found a resolution and it’s working for him. It gives me hope too! Sometimes there is a medically viable solution that’s better than living with the pain. And it sounds like Jon will also get every penny’s worth of value out of his hip replacements.

    Jon Udell

    Two weeks ago I underwent surgery to replace both of my hip joints. I’d been having trouble since the summer of 2012, when running became painful and I found that I couldn’t mount my bicycle by swinging my leg over the seat. These were signs of what would be diagnosed, in April 2015, as moderate-to-severe osteoarthritis. It could have been diagnosed two years earlier, when I presented symptoms I identified as a groin pull and pain in my quads. But I was still able to be very active then and, after a round of physical therapy, regained much of the range of motion I’d lost the year before. Deep down I knew something was really wrong but I convinced my doctor and physical therapist that it was all muscular, that I’d be able to work through it, and that there was no need for an x-ray.

    In reality the disease…
