Category: technology

General technology, not anything in particular

  • Extending SSD’s Lifespan

    Glad to help out. I think SSDs are the first big thing in a while helping speed up desktop computers. After Intel came out with the i-series CPUs and hard drives hit 4TB, things have been changing very slowly and incrementally. So the SSD, at least, is giving people some extra speed boost. But with new tech come new problems. Like lifespan…

    markobroz, cheapchipsmemory

    Thanks to my fellow blogger Carpetbomberz, I now have something to write. Thanks, mate!

    Well, today I will go around talking about SSDs.

    As a PC owner, I know for a fact that you've experienced quite a number of problems in this department. In my case, there are instances when my PC can't read the SSD; it is prone to freezing and becomes unresponsive, which can lead to its eventual downfall. Most PCs use an SSD as the main storage drive, which is why it's a big problem if it is rendered useless, not to mention the files and data stored on it that will be lost forever. The SSD is a good find, but there are downsides to all this. The cost of the device is definitely a problem too, which is why you have to use it to your full advantage.

    Thankfully, there are ways to extend its…

    View original post 101 more words
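
    The lifespan worry above can be put in rough numbers. Here's a minimal back-of-the-envelope sketch; the 150 TBW endurance rating and the 20 GB/day write load are assumed figures for illustration, not the specs of any particular drive:

```python
# Rough SSD lifespan estimate from a rated write endurance.
# All figures below are assumptions for illustration only.
TBW = 150                  # assumed rated endurance, terabytes written
daily_writes_gb = 20       # assumed average host writes per day

total_gb = TBW * 1000      # decimal TB -> GB
days = total_gb / daily_writes_gb
years = days / 365

print(f"~{years:.1f} years at {daily_writes_gb} GB/day")
```

    Under those assumptions the drive wears out long after it's obsolete; it's heavy sustained writes (databases, scratch disks) that actually eat into endurance.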

  • Group Forms to Drive NVDIMM Adoption | EE Times

    Flash memory (Photo credit: Wikipedia)

    As NAND flash is supplemented over the next few years by new technologies with improved durability and the same performance as system memory, “we’ll be able to start thinking about building systems where memory and storage are combined into one entity,” he said. “This is the megachange to computer architecture that SNIA is looking at now and preparing the industry for when these new technologies happen.”

    via Group Forms to Drive NVDIMM Adoption | EE Times.

    More good news on the UltraDIMM, non-volatile DIMM front: a group is forming to begin setting standards for a new form factor. To date, SanDisk is the only company known to have architected and manufactured a shipping non-volatile DIMM memory product, and then only under contract to IBM for the X6 Intel-based server line. SanDisk is not shipping this, or under contract to make it for anyone else, by all reports, but that's not keeping its competitors from getting new products into heavy sampling and QA testing. We might begin seeing a rush of different products, with varying interconnects and form factors, all of which claim to plug into a typical RAM DIMM slot on an Intel-based motherboard. But as the article on the IBM UltraDIMM indicates, this isn't a simple 1:1 swap of DIMMs for UltraDIMMs. You need heavy lifting and revisions at the firmware/BIOS level to take advantage of the UltraDIMMs populating the DIMM slots on your motherboard. This is not easy, nor is it cheap, and as far as OS support goes, you may need to see if your OS of choice will also help speed the plow by doing caching, loading, and storing of memory differently once it has become "aware" of the UltraDIMMs on the motherboard.

    Without the OS and firmware support, you would be wasting your valuable money and time trying to get a real boost from using UltraDIMMs off the shelf in your own randomly chosen Intel-based servers. IBM's X6 line is just hitting the market and has been sampled by some heavy-hitting real-time financial trading data centers to double-check the claims made about speed and performance. IBM has used this period to really make sure the product makes a difference worth whatever premium they plan on charging for the UltraDIMM on customized orders for the X6. But knowing that further down the line a group is at least attempting to organize and set standards means this can become a competitive market for a new memory form factor, and EVERYONE may eventually be able to buy something like an UltraDIMM if they need it for their data center server farm. It's too early to tell where this will lead, but re-using the JEDEC DIMM connection interface is a good start. If Intel wanted to help accelerate this, their onboard memory controllers could also become less DRAM-specific and more generalized as memory controllers for anything plugged into the DIMM slots on the motherboard. That might prove the final step in really opening the market to a wave of UltraDIMM designers and manufacturers. Keep an eye on Intel and see where their chipset architecture, and more specifically their memory controller road maps, lead for future support of NVDIMM or similar technologies.
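
    The "memory and storage combined into one entity" idea the article describes can be approximated today with memory-mapped files. The sketch below uses plain Python mmap (not any actual NVDIMM driver; the file name is made up) just to illustrate the programming model, where a store to memory is a store to persistent media:

```python
import mmap
import os

# Sketch of the byte-addressable storage model NVDIMMs aim for:
# map a file into the address space and treat it like memory.
# (A real NVDIMM skips the block layer entirely; this plain mmap
# is only an illustration of the programming model.)
path = "nv_region.bin"
with open(path, "wb") as f:
    f.truncate(4096)               # a 4 KiB "persistent region"

with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    region[0:5] = b"hello"         # a plain in-memory write...
    region.flush()                 # ...made durable explicitly
    region.close()

with open(path, "rb") as f:
    data = f.read(5)               # the write survived the mapping

os.remove(path)
print(data)
```

    The firmware and OS work the article mentions is essentially about making that model fast and safe when the backing media sits directly on the memory bus.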

     

    Enhanced by Zemanta
  • AnandTech | The Pixel Density Race and its Technical Merits

    Description of a pixel (Photo credit: Wikipedia)

    If there is any single number that people point to for resolution, it is the 1 arcminute value that Apple uses to indicate a “Retina Display”.

    via AnandTech | The Pixel Density Race and its Technical Merits.

    Earlier in my job, I had to try to recommend the resolution people needed to get a good picture from a scanner or a digital camera. As we know, the resolution arms race knows no bounds: first in scanners, then in digital cameras. The same is true now for displays. How fine is fine enough? Is it noticeable? Is it beneficial? The technical limits that enforce lower resolution are usually tied to costs. For a consumer-level product, cost has to fit into a narrow range, and the perceived benefits of "higher quality" or sharpness are rarely enough to get someone to spend more. But as phones can be upgraded for free, and printers and scanners are now commodity items, you just keep slowly migrating up to the next model at little to no entry cost. And everything is just 'better': all higher-rez, and therefore by association higher quality, sharper, etc.

    I used to quote, or try to pin down, a rule of thumb I found once regarding the acuity of the human eye. Some of this was just gained by noticing things when I started out using Photoshop and trying to print to imagesetters and laser printers. At some point in the past, someone decided 300 dpi is what a laser printer needed in order to reproduce text on letter-size paper. As for displays, I bumped into a quote from an IBM study on visual acuity that indicated the human eye can discern display pixels in the 225 ppi range. I tried many times to find the actual publication where that appears so I could cite it, but no luck; I only found it as a footnote on a webpage from another manufacturer. Now in this article we get more stats on human vision, much more extensive than that vague footnote from all those years ago.
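
    Those rules of thumb fall out of simple trigonometry. A minimal sketch of the 1-arcminute calculation the "Retina" figure is built on (the viewing distances here are my own assumed typical values, not anything from the article):

```python
import math

def retina_ppi(viewing_distance_in, arcmin=1.0):
    """PPI at which one pixel subtends the given angle (in arcminutes)
    at the given viewing distance (in inches), i.e. the density beyond
    which a 1-arcminute eye can't resolve individual pixels."""
    theta = math.radians(arcmin / 60)            # arcminutes -> radians
    pixel_inches = viewing_distance_in * math.tan(theta)
    return 1 / pixel_inches

# Assumed typical viewing distances, for illustration:
print(round(retina_ppi(10)))    # phone held at 10 in   -> ~344 ppi
print(round(retina_ppi(12)))    # phone held at 12 in   -> ~286 ppi
print(round(retina_ppi(24)))    # desktop display at 24 in -> ~143 ppi
```

    Note how sensitive the threshold is to viewing distance; that's why a "Retina" desktop needs far fewer ppi than a phone, and why a single magic number like 225 or 300 only holds for one assumed distance.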

    What can one conclude from all the data in this article? Just the same thing: resolution arms races are still being waged by manufacturers. This time, however, it's in mobile phones, not printers, not scanners, not digital cameras. Those battles were fought, and now there's damned little product differentiation. Mobile phones will fall into that pattern, and people will be less and less Apple fanbois or Samsung fanbois. We'll all just upgrade to a newer version of whatever phone is cheap and expect to always have the increased-spec hardware, higher resolution, better quality, all that jazz. It's one more case where everything old is new again. My suspicion is we'll see this happen again when a true VR goggle hits the market, with real competitors attempting to gain advantage through technical superiority or more research and development. Bring on the VR Wars, I say.

  • Jon Udell on filter failure

    Jon Udell (Photo credit: Wikipedia)

    It’s time to engineer some filter failure

    Jon’s article points out his experience of the erosion of serendipity, or at least of opposing viewpoints, that social media (somewhat accidentally) enforces. I couldn’t agree more. One of the big promises of the Internet was that it was unimaginably vast and continuing to grow. The other big promise was that it was open in the way people could participate. There were no diktats or prescribed methods per se, just etiquette at best. There were FAQs to guide us, and rules of thumb to prevent us from embarrassing ourselves. But the Internet was something so vast one could never know or see everything that was out there, good or bad.

    But like the Wild West, search engines began fencing in the old prairie, at once allowing us to get to the good stuff and to waste less time finding the important stuff. But therein lies the bargain of the “filter”: giving up control to an authority to help you do something with data or information. All the electrons/photons whizzing back and forth on the series of tubes, existing all at once, available (more or less) all at once. But now, with social networks, like AOL before them, we suffer from the side effects of the filter.

    I remember being an AOL member, finally caving in and installing the app from one of the free floppy disks I would get in the mail at least once a week. I registered my credit card for the first free 20 hours (can you imagine?). And just like people who ‘try’ Netflix, I never unregistered. I lazily stayed the course and tried to get my money’s worth by spending more time online. At the same time, ISPs, small mom-and-pop type shops, were renting off parts of a fractional T-1 leased line they owned, putting up modem pools, and starting to sell access to the “Internet”. Nobody knew why you would want to do that with all teh kewl thingz one could do on AOL: shopping, chat rooms, news, stock quotes. It was ‘like’ the Internet, but not open and free and limitless like the Internet. And that’s where the failure begins to occur.

    AOL had to police its population and enforce some codes of conduct. They could kick you off, stop accepting your credit card payments. One could not be kicked off the ‘Internet’ in the same way, especially in those early days. But getting back to Jon’s point about filters that fail and allow you to see the whole world, discover an opposing viewpoint, or better, multiple opposing viewpoints: that is the promise of the Internet, and we’re seeing less and less of it as we corral ourselves into our favorite brand-name social networking communities. I skipped MySpace, but I did jump on Flickr, and eventually Facebook. And in so doing I gave up a little of that wildcat freedom and frontier-like experience of dial-up over a PPP or SLIP connection to a modem pool, doing a search first on Yahoo, then AltaVista, and then Google to find the important stuff.

  • My First Original Arduino Project: What I Learned About Learning


    Very nice write-up on a first time Arduino project. With a good demo video at the very end. Highly recommended.

  • nVidia Gsync video scaler on the horizon

    Image representing NVidia (via CrunchBase)

    http://www.eetimes.com/author.asp?section_id=36&doc_id=1320783

    nVidia is making a new bit of electronics hardware to be added to LCD displays made by third-party manufacturers. The idea is to send syncing data to the display to let it know when a frame has been rendered by the 3D hardware on the video card. This bit of extra electronics will smooth out the high-rez/high-frame-rate games played by elite desktop gamers.

    It would be cool to see this adopted for the game console market as well, meaning TV manufacturers could use this same idea and make your PS4 and Xbox One play smoother too. It’s a chicken-and-egg situation, though: unless someone like Steam or another manufacturer tries to push this out to a wider audience, it will get stuck as a niche product for the highest end of the high-end PC desktop gaming market. But it is definitely a step in the right direction and helps push us further away from the old VGA standard of some years ago. Video cards AND displays should both be smart these days; there’s no reason, no excuse, not to have them both be somewhat more aware of their surroundings and coordinate things. And if AMD decides it too needs this capability, how soon after that will both AMD and nVidia have to come to the table and get a standard going? I hope that would happen sooner rather than later, and that too would possibly drive this technology to a wider audience.
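
    The benefit is easy to see numerically. With fixed vsync, a frame that misses the refresh boundary waits for the next one; with frame-completion syncing of the G-Sync sort, the display refreshes when the frame is ready. A toy sketch, with assumed frame render times:

```python
import math

# Compare when frames reach the screen under fixed 60 Hz vsync
# versus an adaptive-sync display that refreshes when the frame is ready.
# The per-frame render times (ms) are assumed figures for illustration.
render_times = [14.0, 18.0, 15.0, 20.0, 16.0]
REFRESH = 1000 / 60                 # ~16.67 ms between fixed refreshes

t = 0.0
fixed, adaptive = [], []
for r in render_times:
    t += r                          # moment this frame finishes rendering
    # fixed vsync: the frame waits for the next refresh boundary
    fixed.append(math.ceil(t / REFRESH) * REFRESH)
    # adaptive sync: the display refreshes the instant the frame is done
    adaptive.append(t)

lag = [f - a for f, a in zip(fixed, adaptive)]
print([round(x, 1) for x in lag])   # ms of display lag adaptive sync removes
```

    Note how the fourth frame just misses a refresh boundary and sits nearly a full 16.7 ms before being shown; that intermittent stutter is exactly what syncing the display to frame completion eliminates.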

  • There’s something rotten in the state of online video streaming, and the data is starting to emerge


    Will follow up with commentary at some point in the coming weeks. We’re now seeing the rotten fruits of the lack of Network Neutrality.

    Stacey Higginbotham, Gigaom

    If you’ve been having trouble with your Netflix streams lately, or maybe like David Rafael, director of engineering for a network security company in Texas, you’re struggling with what appears to be throttled bandwidth on Amazon Web Services, you’re not alone.

    It’s an issue I’ve been reporting on for weeks to try to discover the reasons behind what appears to be an extreme drop on broadband throughput for select U.S. internet service providers during prime time. It’s an issue that is complicated and shrouded in secrecy, but as consumer complaints show, it’s becoming increasingly important to the way video is delivered over the internet.

    The problem is peering, or how the networks owned and operated by your ISP connect with networks owned and operated by content providers such as Amazon or Netflix, as well as transit providers and content delivery networks. Peering disputes have been occurring for…

    View original post 2,207 more words

  • Follow-Up – EETimes on SanDisk UltraDIMMs

    Image representing IBM (via CrunchBase)

    http://www.eetimes.com/document.asp?doc_id=1320775

    “The eXFlash DIMM is an option for IBM‘s System x3850 and x3950 X6 servers providing up to 12.8 TB of flash capacity. (Although just as this story was being written, IBM announced it was selling its x86 server business to Lenovo for $2.3 billion).”

    Sadly, it seems the party is over before it even got started for sales and shipping of UltraDIMM-equipped IBM x86 servers. If Lenovo snatches up this product line, I’m sure all the customers will still be perfectly happy, but I worry that the level of innovation and product testing that led to the introduction of the UltraDIMM may be slowed.

    I’m not criticizing Lenovo for this; they have done a fine job taking over the laptop and desktop brands from IBM. But the motivation to keep creating new, early samples of very risky and untried technologies seems to be more IBM’s interest in maintaining its technological lead in the data center. I don’t know how Lenovo figures into that equation. How much will Lenovo sell in the way of rackmount servers like the X6 line? And just recently there have been rumblings that IBM wants to sell off its long-running semiconductor manufacturing business as well.

    It’s almost too much to think IBM would give up R&D in semiconductors. Outside of Bell Labs, IBM’s fundamental work in this field brought things like silicon-on-insulator, copper interconnects, and myriad other firsts to ever smaller, finer design rules. While Intel followed its own process R&D agenda, IBM went its own way too, always trying to find advantage in its inventions, albeit at a blistering pace of patent filings that means it will likely never see all the benefits of that research and development. At best, IBM can only hope to enforce its patents in a Nathan Myhrvold-like way, filing lawsuits against all infringers, protecting its intellectual property. That’s going to be a sad day for all of us who marveled at what they demoed, prototyped, and manufactured. So long IBM, hello IBM Global Services.

  • MOOCs! and the Academy where the hype meets the roadway

    Crowd in Willis Street, Wellington, awaiting the results of the 1931 general election (Photo credit: National Library NZ on The Commons)

    http://campustechnology.com/articles/2014/01/27/inside-the-first-year-data-from-mitx-and-harvardx.aspx – Campus Technology

    “While 50 percent of MOOC registrants dropped off within a week or two of enrolling, attrition rates decreased substantially after that window.”

    So with a 50% attrition rate, everyone has to keep in mind that those overwhelmingly large enrollments are not representative of the typical definition of the word “student”. They are shopping. They are consumers who, once they find something is not to their taste, whisk away to the next most interesting thing. It’s hard to say what impact this has on people “waiting in line” if there’s a cap on total enrollees. Typically, though, unlimited enrollment seems to be the norm for this style of teaching, as is an unlimited ‘length of time’: you can enroll/register after the course has completed. That, however, throws off the measurements of dropping out, as the registration occurs outside the time when the class is actively being conducted. So there are still a lot of questions that need to be answered, and more experiments to be designed to factor out the idiosyncrasies of these open online fora.
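
    To see how much that early attrition skews the headline numbers, a toy calculation; the enrollment figure, the post-window weekly retention rate, and the course length are all assumed values for illustration, with only the 50% first-window drop taken from the article:

```python
# Toy attrition model: half the registrants drop in the first window,
# then a much gentler weekly decay, as the article describes.
enrolled = 100_000                   # assumed headline enrollment
active = enrolled * 0.50             # the reported early drop-off
weekly_retention = 0.95              # assumed gentler rate afterwards

for week in range(10):               # assumed ~10 remaining course weeks
    active *= weekly_retention

print(f"{active:.0f} still active of {enrolled} registered "
      f"({active / enrolled:.0%})")
```

    Under those assumptions, a course that "enrolled 100,000 students" finishes with roughly 30,000 still active, which is why comparing MOOC headcounts to traditional enrollment numbers is so misleading.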

    There is an interesting Q&A interview after the opening summary in this article, talking with one of the primary researchers on MOOCs, Andrew Ho, from the Harvard Graduate School of Education. It’s hard to gauge “success” or to get accurate demographic information to help analyze the behavior of some MOOC enrollees. The second year of the experiments will hopefully yield better results; something like conclusions should be possible after the second round. But Ho emphasizes we need more data from a wider sampling than just Harvard and MIT to confirm or help guide further research into the large-scale Massive Open Online Course (MOOC). As the cliché goes, the jury is still out on the value-add of offering real college courses in the MOOC format.

  • IBM Goes Modular And Flashy With X6 Systems – Timothy Prickett Morgan

    The memory channel storage modules were developed by SanDisk in conjunction with Diablo Technologies, and are called UltraDIMM by SanDisk. The modules put flash memory created by SanDisk (which has a flash partnership with Toshiba) that has a SATA interface on a memory stick. Diablo has created a chipset that converts the SATA protocol to the DDR3 main memory protocol, and SanDisk has created a driver for a server BIOS that makes it look like a normal disk storage device to the system and its operating system. (Enterprise Tech – Timothy Prickett Morgan)
    Image representing Diablo Technologies (via CrunchBase)

    Big news, big news coming to a server near you. A new form-factor flash memory product has been quietly developed and is being sampled by folks out East in the high-frequency stock trading market (the top of the food chain in IT needs for low-latency transactions). Timothy Prickett Morgan (formerly of The Register) has included details from IBM‘s big announcement of its Intel-based X6 series servers. This new form factor is the result of a memory controller made by Diablo Technologies; SanDisk has engineered the actual final product that ties the memory into the Diablo-designed controller. However, this product is not available on the open market and has been going through sampling and testing with potential high-end users and customers who have a need for such a large, high-speed product in a DDR DRAM memory module. Sizes and speeds are pretty large all around: the base modules come in 200 GB or 400 GB capacities and fit a typical DDR3 DIMM slot. IBM and SanDisk have done a number of special tweaks to the software/firmware to pull the most I/O with the lowest latency out of these modules when they are installed in an X6 server. The first-gen X6 will have roughly 12 DIMM slots available, with a mix of DRAM and UltraDIMMs populating those slots. However, things get REALLY interesting when the second-gen X6 hits the market: IBM will be doubling the number of DIMM slots to 24 and upping the core count available on the 4U top-of-the-line X6 server. When that product hits the market, the UltraDIMM will be able to populate the majority of the DIMM slots and really start to tear it up, I think, when it comes to I/O and analytics processing. SanDisk is the exclusive supplier, manufacturer, and engineering outfit for this product for IBM, with no indication yet of when, or if, they would ever sell it to another OEM server manufacturer.

    Given the promise this technology has, and that an outfit like Diablo Technologies is vaguely reminiscent of an upstart like SandForce, which equally upset the flash memory market about six years ago, we’re likely to see a new trend. SATA SSDs are still slowly creeping into the consumer market, and PCIe flash memory products are being adopted at the top end of the consumer market (Apple’s laptops and newest desktops). Now we’ve got yet another flash memory product that could potentially sweep the market: the UltraDIMM. It will, however, take some time and some competing technology to help push this along (SandForce was the only game in town early on, and early adopters helped subsidize the late adopters with higher prices). Given how pared back and stripped down DIMM slots generally are in the consumer market, it may be a while before we see any manufacturer attempt to push the UltraDIMM as a consumer product. The same goes for the module sizes as they ship today. Example: the iMac 27″. Apple has gone from easily upgraded (back in the Silver Tower, G4 CPU days) to nearly not upgradeable (MacBook Air), and the amount of space needed in their cases to allow for addition or customization through an UltraDIMM add-on would be severely constrained. It might be something that could be added as a premium option for the newest Mac Pro towers, and even then that’s very hopeful and wishful thinking on my part. But who knows how quickly this new form factor and memory controller design will infiltrate the computer market? It is seemingly a better mousetrap, in the sense of the boost one sees in performance on a more similar, more commoditized Intel infrastructure. Wait and see what happens.
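
    What makes memory-channel flash interesting is where it sits in the latency hierarchy. The figures below are rough order-of-magnitude ballpark numbers of my own for each class of device, not vendor specs or measurements of the UltraDIMM specifically:

```python
# Rough storage latency tiers (order-of-magnitude ballpark figures,
# not vendor specs) showing where memory-channel flash would fit.
tiers = {
    "DRAM DIMM":            0.0001,   # ~100 ns
    "memory-channel flash": 0.005,    # ~5 us   (UltraDIMM-class, assumed)
    "PCIe flash":           0.02,     # ~20 us
    "SATA SSD":             0.1,      # ~100 us
    "hard disk":            10.0,     # ~10 ms
}  # access latency in milliseconds

base = tiers["SATA SSD"]
for name, ms in tiers.items():
    print(f"{name:22s} {ms * 1000:9.1f} us  ({base / ms:8.0f}x vs SATA SSD)")
```

    Even as rough numbers, the gap explains the appeal: sitting on the memory bus skips the whole SATA/PCIe stack, which is exactly what latency-obsessed trading shops are paying for.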
