Blog

  • AnandTech | The Pixel Density Race and its Technical Merits

    Description of a pixel (Photo credit: Wikipedia)

    If there is any single number that people point to for resolution, it is the 1 arcminute value that Apple uses to indicate a “Retina Display”.

    via AnandTech | The Pixel Density Race and its Technical Merits.

    Earlier in my current job, I had to recommend the resolution people needed to get a good picture from a scanner or a digital camera. As we know, the resolution arms race knows no bounds: first in scanners, then in digital cameras. The same is now true of displays. How fine is fine enough? Is it noticeable, is it beneficial? The technical limits that enforce lower resolution are usually tied to cost. A consumer-level product has to fit into a narrow price range, and the perceived benefit of “higher quality” or sharpness is rarely enough to get someone to spend more. But as phones can be upgraded for free, and printers and scanners are now commodity items, you just keep slowly migrating up to the next model for little to no entry cost. And everything is just ‘better’: all higher rez, and therefore by association higher quality, sharper, etc.

    I used to quote, or try to pin down, a rule of thumb I found once regarding the acuity of the human eye. Some of this was just gained by noticing things when I started out using Photoshop and trying to print to imagesetters and laser printers. At some point in the past, someone decided 300 dpi is what a laser printer needed in order to reproduce text on letter-size paper. As for displays, I bumped into a quote from an IBM study on visual acuity indicating the human eye can discern display pixels up to around 225 ppi. I tried many times to find the actual publication where that appears so I could cite it, but no luck; I only found it as a footnote on a web page from another manufacturer. Now, in this article, we get more stats on human vision, much more extensive than that vague footnote from all those years ago.
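
    For what it’s worth, the arithmetic behind that 1 arcminute figure is easy to sketch. Here is a minimal Python version; the viewing distances are my own illustrative assumptions, not figures from the IBM study or the AnandTech article.

    ```python
    import math

    def ppi_threshold(viewing_distance_in, acuity_arcmin=1.0):
        """PPI above which a single pixel subtends less than the given visual angle.

        viewing_distance_in: eye-to-screen distance in inches.
        acuity_arcmin: angular resolution assumed for the eye, in arcminutes
                       (1 arcminute is the figure behind Apple's "Retina" claim).
        """
        angle_rad = math.radians(acuity_arcmin / 60.0)                      # arcminutes -> radians
        pixel_pitch_in = 2 * viewing_distance_in * math.tan(angle_rad / 2)  # just-resolvable pixel size
        return 1.0 / pixel_pitch_in

    # Illustrative viewing distances (assumptions): phone, laptop, desktop
    for label, dist_in in [("phone, 12 in", 12), ("laptop, 20 in", 20), ("desktop, 28 in", 28)]:
        print(f"{label}: ~{ppi_threshold(dist_in):.0f} ppi")
    ```

    At roughly a foot that works out to about 286 ppi, which is why a ~300 ppi phone display clears the bar while a desktop monitor at arm’s length needs far less.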

    What can one conclude from all the data in this article? Just the same thing: resolution arms races are still being waged by manufacturers. This time, however, it’s in mobile phones, not printers, not scanners, not digital cameras. Those battles were fought, and now there’s damned little product differentiation. Mobile phones will fall into that pattern, and people will be less and less Apple fanbois or Samsung fanbois. We’ll all just upgrade to a newer version of whatever phone is cheap and expect to always have the increased-spec hardware, higher resolution, better quality, all that jazz. It is one more case where everything old is new again. My suspicion is we’ll see this happen again when a true VR goggle hits the market, with real competitors attempting to gain advantage through technical superiority or more research and development. Bring on the VR Wars, I say.

  • Jon Udell on filter failure

    Jon Udell (Photo credit: Wikipedia)

    It’s time to engineer some filter failure

    Jon’s article points out his experience of the erosion of serendipity, or at least of opposing viewpoints, that social media (somewhat accidentally) enforces. I couldn’t agree more. One of the big promises of the Internet was that it was unimaginably vast and continuing to grow. The other big promise was that it was open in the way people could participate. There were no diktats or prescribed methods per se, just etiquette at best. There were FAQs to guide us and rules of thumb to prevent us from embarrassing ourselves. But the Internet was something so vast one could never know or see everything that was out there, good or bad.

    But as in the Wild West, search engines began fencing in the old prairie, at once allowing us to get to the good stuff and to waste less time finding the important stuff. Therein lies the bargain of the “filter”: giving up control to an authority to help you do something with data or information. All the electrons and photons whizzing back and forth on the series of tubes, existing all at once, available (more or less) all at once. But now, with social networks, like AOL before them, we suffer from the side effects of the filter.

    I remember being an AOL member, finally caving in and installing the app from one of the free floppy disks I would get in the mail at least once a week. I registered my credit card for the first free 20 hours (can you imagine?). And just like people who ‘try’ Netflix, I never unregistered. I lazily stayed the course and tried getting my money’s worth by spending more time online. At the same time, ISPs, small mom-and-pop shops renting out parts of a fractional T-1 leased line they owned, were putting up modem pools and selling access to the “Internet”. Nobody knew why you would want to do that with all teh kewl thingz one could do on AOL: shopping, chat rooms, news, stock quotes. It was ‘like’ the Internet, but not open and free and limitless like the Internet. And that’s where the failure begins to occur.

    AOL had to police its population and enforce some codes of conduct. They could kick you off and stop accepting your credit card payments. One could not be kicked off the ‘Internet’ in the same way, especially in those early days. But getting back to Jon’s point about filters that fail and allow you to see the whole world, to discover an opposing viewpoint or, better, multiple opposing viewpoints: that is the promise of the Internet, and we’re seeing less and less of it as we corral ourselves into our favorite brand-name social networking community. I skipped MySpace, but I did jump on Flickr, and eventually Facebook. And in so doing I gave up a little of that wildcat freedom and frontier-like experience of dial-up over a PPP or SLIP connection to a modem pool, doing a search first on Yahoo, then AltaVista, and then Google to find the important stuff.

  • My First Original Arduino Project: What I Learned About Learning

    Very nice write-up on a first time Arduino project. With a good demo video at the very end. Highly recommended.

  • nVidia G-Sync video scaler on the horizon

    NVIDIA (Image via CrunchBase)

    http://www.eetimes.com/author.asp?section_id=36&doc_id=1320783

    nVidia is making a new bit of electronics hardware to be added to LCD displays made by third-party manufacturers. The idea is to send syncing data to the display to let it know when a frame has been rendered by the 3D hardware on the video card. Having this bit of extra electronics will smooth out the high-rez, high-frame-rate games played by elite desktop gamers.
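
    To make the timing idea concrete, here is a small, purely illustrative Python simulation, not NVIDIA’s actual protocol: it compares a fixed 60 Hz refresh, where a finished frame waits for the next scheduled refresh, with an adaptive refresh that fires as soon as the frame is ready. The frame render times are invented, and the model ignores buffering and swap stalls.

    ```python
    import math

    REFRESH_HZ = 60.0
    REFRESH_INTERVAL_MS = 1000.0 / REFRESH_HZ   # ~16.67 ms between fixed refreshes

    # Hypothetical frame render times in ms (invented, purely for illustration)
    frame_times_ms = [14.0, 18.0, 22.0, 15.0, 30.0, 16.0]

    def fixed_refresh(render_ms):
        """Each frame is shown at the first vsync boundary after it finishes rendering."""
        t, shown = 0.0, []
        for r in render_ms:
            t += r                                                        # frame done at time t
            shown.append(math.ceil(t / REFRESH_INTERVAL_MS) * REFRESH_INTERVAL_MS)
        return shown

    def adaptive_refresh(render_ms):
        """The display refreshes the moment a frame is ready (the G-Sync idea)."""
        t, shown = 0.0, []
        for r in render_ms:
            t += r
            shown.append(t)                                               # shown immediately
        return shown

    for i, (f, a) in enumerate(zip(fixed_refresh(frame_times_ms),
                                   adaptive_refresh(frame_times_ms))):
        print(f"frame {i}: fixed vsync at {f:6.2f} ms, adaptive at {a:6.2f} ms, "
              f"waited {f - a:5.2f} ms")
    ```

    The waiting time in the fixed case is exactly the stutter this kind of syncing hardware is meant to remove.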

    It would be cool to see this adopted in the game console market as well, meaning TV manufacturers could use the same idea and make your PS4 and Xbox One play smoother too. It’s a chicken-and-egg situation, though: unless someone like Steam or another manufacturer pushes this out to a wider audience, it will get stuck as a niche product for the higher end of the high-end PC desktop gamers. But it is definitely a step in the right direction and helps push us further away from the old VGA standard of years past. Video cards AND displays should both be smart; there’s no reason, no excuse, not to have them both be somewhat more aware of their surroundings and coordinate things. And if AMD decides it too needs this capability, how soon after that will both AMD and nVidia have to come to the table and get a standard going? I hope that happens sooner rather than later, and that too would drive this technology to a wider audience.

  • There’s something rotten in the state of online video streaming, and the data is starting to emerge

    Will follow up with a commentary at some point in the coming weeks. We’re now seeing the rotten fruits of the lack of Network Neutrality.

    Stacey Higginbotham, Gigaom

    If you’ve been having trouble with your Netflix streams lately, or maybe like David Rafael, director of engineering for a network security company in Texas, you’re struggling with what appears to be throttled bandwidth on Amazon Web Services, you’re not alone.

    It’s an issue I’ve been reporting on for weeks to try to discover the reasons behind what appears to be an extreme drop on broadband throughput for select U.S. internet service providers during prime time. It’s an issue that is complicated and shrouded in secrecy, but as consumer complaints show, it’s becoming increasingly important to the way video is delivered over the internet.

    The problem is peering, or how the networks owned and operated by your ISP connect with networks owned and operated by content providers such as Amazon or Netflix as well as transit providers and content delivery networks. Peering disputes have been occurring for…

    View original post (2,207 more words)

  • Follow-Up – EETimes on SanDisk UltraDIMMs

    IBM (Image via CrunchBase)

    http://www.eetimes.com/document.asp?doc_id=1320775

    “The eXFlash DIMM is an option for IBM’s System x3850 and x3950 X6 servers providing up to 12.8 TB of flash capacity. (Although just as this story was being written, IBM announced it was selling its x86 server business to Lenovo for $2.3 billion).”

    Sadly, it seems the party is over before it even got started for sales and shipping of UltraDIMM-equipped IBM x86 servers. If Lenovo snatches up this product line, I’m sure all the customers will still be perfectly happy, but I worry that the level of innovation and product testing that led to the introduction of the UltraDIMM may slow.

    I’m not criticizing Lenovo for this; they have done a fine job taking over the laptop and desktop brands from IBM. But the motivation to keep creating new, early samples of very risky and untried technologies seems to flow more from IBM’s interest in maintaining its technological lead in the data center, and I don’t know how Lenovo figures into that equation. How much will Lenovo sell in the way of rackmount servers like the X6 line? And just recently there have been rumblings that IBM wants to sell off its long-running semiconductor manufacturing business as well.

    It’s almost too much to think IBM would give up R&D in semiconductors. Outside of Bell Labs, IBM’s fundamental work in this field brought silicon-on-insulator, copper interconnects, and myriad other firsts to ever smaller, finer design rules. While Intel followed its own process R&D agenda, IBM went its own way too, always trying to find advantage in its inventions, though that blistering pace of patent filings means they will likely never see all the benefits of that research and development. At best, IBM can only hope to enforce its patents in a Nathan Myhrvold-like way, filing lawsuits against all infringers and protecting its intellectual property. That will be a sad day for all of us who marveled at what they demoed, prototyped, and manufactured. So long IBM, hello IBM Global Services.

  • M00Cs! and the Academy where the hype meets the roadway

    Crowd in Willis Street, Wellington, awaiting the results of the 1931 general election (Photo credit: National Library NZ on The Commons)

    http://campustechnology.com/articles/2014/01/27/inside-the-first-year-data-from-mitx-and-harvardx.aspx – Campus Technology

    “While 50 percent of MOOC registrants dropped off within a week or two of enrolling, attrition rates decreased substantially after that window.”

    So, with a 50% attrition rate, everyone has to keep in mind that those overwhelmingly large enrollment numbers are not representative of the typical definition of the word “student”. They are shopping. They are consumers who, once they find something is not to their taste, whisk away to the next most interesting thing. It is hard to say what impact this has on people “waiting in line” if there is a cap on total enrollees, though unlimited enrollment seems to be the norm for this style of teaching, as does being unlimited in length of time: you can enroll or register after the course has completed. That, however, throws off the measurements of dropping out, since the registration occurs outside the time the class is actively being conducted. So there are still a lot of questions that need to be answered, and more experiments designed to factor out the idiosyncrasies of these open online forums.
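
    To make that measurement wrinkle concrete, here is a tiny Python sketch with invented numbers (not the MITx/HarvardX data) showing how registrations that arrive after a course has ended can inflate a naive attrition figure.

    ```python
    # Hypothetical registrant records: (registered_after_course_ended, active_after_week_two).
    # All counts are invented purely to illustrate the measurement issue.
    registrants = (
        [(False, True)] * 400 +    # enrolled during the course, still active after two weeks
        [(False, False)] * 400 +   # enrolled during the course, dropped within two weeks
        [(True, False)] * 200      # registered only after the course ended
    )

    def attrition_rate(records, exclude_late=False):
        """Share of registrants counted as having dropped out."""
        pool = [r for r in records if not (exclude_late and r[0])]
        dropped = sum(1 for late, active in pool if not active)
        return dropped / len(pool)

    print(f"naive attrition:                {attrition_rate(registrants):.0%}")
    print(f"excluding post-course sign-ups: {attrition_rate(registrants, exclude_late=True):.0%}")
    ```

    Which denominator is the “right” one is exactly the sort of question the second year of data will have to settle.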

    There is an interesting Q&A interview after the opening summary in this article with one of the primary researchers on MOOCs, Andrew Ho, from the Harvard Graduate School of Education. It’s hard to gauge “success” or to get accurate demographic information to help analyze the behavior of some MOOC enrollees. The second year of the experiments will hopefully yield better results, and something like conclusions may be drawn after the second round. But Ho emphasizes we need more data from a wider sampling than just Harvard and MIT to confirm or help guide further research on the large-scale Massive Open Online Course (MOOC). As the cliché goes, the jury is still out on the value-add of offering real college courses in the MOOC format.

  • IBM Goes Modular And Flashy With X6 Systems – Timothy Prickett Morgan

    The memory channel storage modules were developed by SanDisk in conjunction with Diablo Technologies, and are called UltraDIMM by SanDisk. The modules put flash memory created by SanDisk (which has a flash partnership with Toshiba) that has a SATA interface on a memory stick. Diablo has created a chipset that converts the SATA protocol to the DDR3 main memory protocol, and SanDisk has created a driver for a server BIOS that makes it look like a normal disk storage device to the system and its operating system. (Enterprise Tech – Timothy Prickett Morgan)
    Diablo Technologies (Image via CrunchBase)

    Big news, big news coming to a server near you. A new form factor of flash memory product has been secretly developed and is being sampled by folks out East in the high-frequency stock trading market (the top of the food chain in IT needs for low-latency transactions). Timothy Prickett Morgan (formerly of The Register) has included details from IBM’s big announcement of its Intel-based X6 series servers.

    This new form factor is the result of a memory controller made by Diablo Technologies; SanDisk has engineered the actual final product that ties the flash memory into the Diablo-designed controller. However, the product is not available on the open market and has been going through sampling and testing with possible high-end users and customers who need such a large, high-speed product in a DDR DRAM memory module. Sizes and speeds are pretty large all around: the base modules come in 200 GB and 400 GB capacities and fit a typical DDR3 DIMM slot. IBM and SanDisk have done a number of special tweaks on the software and firmware to pull the most I/O with the lowest latency out of these modules when installed on an X6 server.

    The first-gen X6 will have roughly 12 DIMM slots available, with a mix of DRAM and UltraDIMMs populating those slots. Things get REALLY interesting when the second-gen X6 hits the market: IBM will be doubling the number of DIMM slots to 24 and upping the core count available on the 4U top-of-the-line X6 server. When that product arrives, UltraDIMMs will be able to populate the majority of the DIMM slots and, I think, really start to tear it up when it comes to I/O and analytics processing. SanDisk is the exclusive supplier, manufacturer, and engineering outfit for this product for IBM, with no indication yet of when or if they would ever sell it to another OEM server manufacturer.
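
    As a sanity check on those figures, here is a trivial bit of Python arithmetic using the capacities quoted above; the slot mixes in the second loop are purely hypothetical illustrations, not IBM configurations.

    ```python
    # Back-of-the-envelope arithmetic from the figures quoted above. Module sizes
    # (200 GB / 400 GB) and the 12.8 TB ceiling come from the article; the slot
    # mixes below are invented purely for illustration.

    QUOTED_MAX_TB = 12.8

    for module_gb in (200, 400):
        modules = QUOTED_MAX_TB * 1000 / module_gb
        print(f"{module_gb} GB modules needed to reach {QUOTED_MAX_TB} TB: {modules:.0f}")

    # Hypothetical per-node mixes: keep some slots for DRAM, fill the rest with flash.
    for total_slots, flash_slots in [(12, 8), (24, 16)]:
        print(f"{total_slots} slots with {flash_slots} x 400 GB UltraDIMMs "
              f"-> {flash_slots * 400 / 1000:.1f} TB of flash")
    ```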

    Given the promise this technology holds, and that an outfit like Diablo Technologies is vaguely reminiscent of an upstart like SandForce, which similarly upset the flash memory market about six years ago, we’re likely to see a new trend. SATA SSDs are still slowly creeping into the consumer market, and PCIe flash memory products are being adopted at the top end of the consumer market (Apple’s laptops and newest desktops). Now we’ve got yet another flash memory product that could potentially sweep the market: the UltraDIMM. It will, however, take some time and some competing technology to help push this along (SandForce was the only game in town early on, and early adopters helped subsidize the late adopters with higher prices). Given how pared back and stripped down DIMM slots generally are in the consumer market, it may be a while before we see any manufacturer attempt to push the UltraDIMM as a consumer product. The same goes for the module sizes as they are shipped today. Example: the 27″ iMac. Apple has gone from easily upgraded (back in the silver tower, G4 CPU days) to nearly not upgradeable (MacBook Air), and the amount of space needed in their cases to allow for addition or customization through an UltraDIMM add-on would be severely constrained. It might be something that could be added as a premium option for the newest Mac Pro towers, and even then that’s very hopeful and wishful thinking on my part. But who knows how quickly this new form factor and memory controller design will infiltrate the computer market? It is seemingly a better mousetrap, in the sense of the boost one sees in performance on a more similar, more commoditized Intel infrastructure. Wait and see what happens.

  • Leaked Intel roadmap promises… er, gear that could die after 7 months [Chris Mellor for theregister.com]

    Intel (Image via CrunchBase)

    Chris Mellor – The Register http://www.theregister.co.uk/2013/12/09/intel_ssd_roadmappery/

    Chris does a quick write-up of a leaked SSD roadmap from Intel. It seems we’re now at a performance plateau on the consumer/business end of the scale for SATA-based SSD drives; I haven’t seen an uptick in read/write performance in a long time. Back in the heady days of OCZ, Crucial, and SanDisk releasing new drives with new memory controllers on a roughly six-month schedule, speeds slowly marched up the scale until we were seeing 200 MB/sec reads and 150 MB/sec writes (equaling some of the fastest magnetic hard drives at the time). Then, yowza, we blew right past that figure to 250 MB/sec, 275 MB/sec, and higher, with Intel vs. Samsung as the top speed champions and SandForce helping newcomers enter the market at acceptable performance levels (250/200). Prices were not really edging downward, but speeds kept going up, up, up.

    Now we’re in the PCIe era, with everyone building their own custom design for a particular platform, make, and model. Apple is using its own PCIe SSD designs for its laptops and soon for the Mac Pro desktop workstations. One or two other manufacturers are adapting M.2-sized memory devices as PCIe add-in cards for various ultra-lightweight designs. But there’s no wave of aftermarket, third-party SSDs equivalent to what we saw when SATA drives were king. So we’re left with a very respectable, still somewhat under-utilized SATA SSD market, with read speeds around 500 MB/sec and writes somewhat below that. Until PCIe starts to converge, consolidate, and come up with a common form factor (card size, pin-out, edge connector), we’ll see a long, slow commoditization of SATA SSD drives, with the lucky few spinning their own PCIe products. Hopefully there will be an upset and someone will form a group to support a PCIe SSD mezzanine or expansion slot EVERYWHERE. When that time comes, we’ll get the second wave of SSD performance I think we are all looking for.
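
    That ~500 MB/sec plateau is mostly the SATA III link itself rather than the drives behind it. Here is a quick back-of-the-envelope calculation in Python (my own, not from Chris’s article; the protocol overhead figure is an assumption) showing why SATA SSDs bunch up just above 550 MB/sec no matter how fast the NAND gets.

    ```python
    # Rough ceiling of a SATA III link, which is why SATA SSD speeds cluster
    # just under ~550 MB/sec regardless of how fast the NAND behind them gets.
    # The protocol overhead percentage below is an assumption, not a measured figure.

    LINK_RATE_GBPS = 6.0           # SATA III signaling rate, gigabits per second
    ENCODING_EFFICIENCY = 8 / 10   # 8b/10b line coding: 10 bits on the wire per data byte
    PROTOCOL_OVERHEAD = 0.08       # assumed ~8% lost to framing/FIS overhead

    payload_mb_per_s = LINK_RATE_GBPS * ENCODING_EFFICIENCY * 1000 / 8   # -> 600 MB/s
    practical_mb_per_s = payload_mb_per_s * (1 - PROTOCOL_OVERHEAD)      # -> ~552 MB/s

    print(f"raw payload ceiling: {payload_mb_per_s:.0f} MB/s")
    print(f"practical ceiling:   ~{practical_mb_per_s:.0f} MB/s")
    ```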

  • OCZ sells out to Toshiba (it’s been good to know yuh’)

    OCZ Technology (Photo credit: Wikipedia)

    http://www.theregister.co.uk/2013/12/03/toshiba_buys_ocz/

    It seems like it was only two years ago that OCZ bought out memory controller and intellectual property (IP) holder Indilinx for its own branded SSD products. At the time, everyone was buying SandForce memory controllers to keep up with the Joneses; speed-wise and performance-wise, SandForce was king. But with so many competitors using the same memory controller, there was no way to make a profit with a commodity technology. The thinking generally was that performance isn’t always the prime directive for SSDs; going forward, price would be much more important, and anyone owning their own intellectual property wouldn’t have to pay license fees to companies like SandForce to stay in the business. So OCZ, then on a wildly profitable tear, bought out Indilinx, a designer of NAND flash memory controllers. The die was cast and OCZ was in the driver’s seat, creating the consumer market for high-performance, lower-cost SSD drives. Market value went up and up, and whispers were reported of a possible buyout of OCZ by larger hard drive manufacturers, with a price of $1 billion mentioned in connection with those rumors.

    Two years later, much has changed. There has been some shift in the market from 2.5″ SATA drives to smaller and more custom designs; Apple jumped from SATA to PCIe with its MacBook Air just this past fall of 2013, and the M.2 form factor is really well liked in the tablet and lightweight laptop sector. So who knew OCZ was losing its glamor to such a degree that they would sell? And not just at a level 10x below the highest price floated two years ago; closer to 30x less than what they might have asked for back then: roughly $35 million, along with a large number of negotiated guarantees to keep the support/warranty system in place and not tarnish the OCZ brand (for now). This story is told over and over again to entrepreneurs and magnate wannabes: sell, sell, sell. No harm in that. But just make sure you’re selling too early rather than too late.