Category: technology

General technology, not anything in particular

  • The technical aspects of privacy – O’Reilly Radar

    Image: Edward Snowden (via CrunchBase)

    The first of three public workshops kicked off a conversation with the federal government on data privacy in the US.

    by Andy Oram | @praxagora

    via The technical aspects of privacy – O’Reilly Radar.

    Interesting topic covering a wide range of issues. I’m so happy MIT sees fit to host a set of workshops on this and keep the pressure up. But as Andy Oram writes, the whole discussion at MIT was circumscribed by the notion that privacy as such doesn’t exist (an old axiom from Scott McNealy, ex-CEO of Sun Microsystems).

    No one at that MIT meeting tried to advocate for users managing their own privacy. Andy Oram mentions the Vendor Relationship Management (VRM) movement (thanks to Doc Searls and the Cluetrain Manifesto) as one mechanism for individuals to pick and choose what information gets shared out, and to what degree. People remain willfully clueless or ignorant of VRM as an option when it comes to privacy. The shades and granularity of VRM are far more nuanced than the binary privacy-versus-security debate, and it’s sad that this held true for the MIT meet-up as well.

    John Podesta’s call-in to the conference mentioned an existing set of rules for electronic data privacy dating back to the early 1970s, when the fear was that mainframe computers “knew too much” about private citizens. Those rules are known as the Fair Information Practices:  http://epic.org/privacy/consumer/code_fair_info.html (thanks to the Electronic Privacy Information Center for hosting this page). These issues have always existed, just in different forms at earlier times. They are not new, they are old. But each time there’s a debate, we start all over as if the problem had never existed and had never been addressed. If the Fair Information Practices rules are law, then all the case history and precedents set by those cases STILL apply to NSA and government surveillance.

    I did learn one new term from reading about the conference at MIT: differential privacy. Apparently it’s very timely and active research is being done in this area. Mostly it applies to datasets and other big data that need to be analyzed without uniquely identifying any individual in the dataset. You want to find out the efficacy of a drug without spilling the beans that someone has a “prior condition.” That’s the net effect of implementing differential privacy: you get the answer to your query out of the dataset, but you never once see all the fields of the people that make up that answer. That sounds like a step in the right direction and should honestly apply to phone and Internet company records as well. Just because you collect the data doesn’t mean you should be able to free-wheel through it and do whatever you want. If you’re mining, you should only get the net result of the query rather than snoop through all the fields for each individual. That to me is the true meaning of differential privacy.
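    To make that concrete, here is a minimal sketch of the most common differential privacy building block, the Laplace mechanism applied to a count query. The records, field names and epsilon value below are all hypothetical, made up just to illustrate the point above: the analyst gets a (noisy) aggregate, never the individual rows.

      import random

      # Hypothetical patient records; the analyst never sees these rows directly.
      records = [
          {"id": 1, "took_drug": True,  "recovered": True},
          {"id": 2, "took_drug": True,  "recovered": False},
          {"id": 3, "took_drug": False, "recovered": True},
          {"id": 4, "took_drug": True,  "recovered": True},
      ]

      def private_count(rows, predicate, epsilon=0.5):
          """Noisy count: the true count plus Laplace(1/epsilon) noise.

          A counting query has sensitivity 1 (one person joining or leaving the
          dataset changes the count by at most 1), so the noise scale is 1/epsilon.
          """
          true_count = sum(1 for r in rows if predicate(r))
          scale = 1.0 / epsilon
          # Laplace noise as the difference of two exponential draws.
          noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
          return true_count + noise

      # The analyst learns roughly how many patients took the drug and recovered,
      # but never which ones.
      print(private_count(records, lambda r: r["took_drug"] and r["recovered"]))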

  • SanDisk Crams 128GB on microSD Card: A World First

    A 512 MB Kingston microSD card next to a Patriot SD adapter (left) and miniSD adapter (middle). (Photo credit: Wikipedia)

    This week during Mobile World Congress 2014, SanDisk introduced the world’s highest capacity microSDXC memory card, weighing a hefty 128 GB. That’s a huge leap in storage compared to the 128 MB microSD card launched 10 years ago.

    via SanDisk Crams 128GB on microSD Card: A World First.

    Amazing to think how small the form factor has gotten and how large the storage capacity has grown with microSD format memory cards. I remember the introduction of SDXC cards and the jump from 32GB to 64GB full-size SD cards. It didn’t take long after that before the SDXC format shrunk down to microSD size. Given the size, and the options to expand the memory on certain devices (Apple is notably absent from this group), a card this big is going to allow a much longer timeline for storing pictures, music and video on our handheld devices. Prior to this, you would have needed a much larger M.2 or mSATA storage card to achieve this level of capacity, and a tablet or a netbook to plug those larger cards into.
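    As a rough back-of-envelope sketch of what that capacity means in everyday terms (the per-file sizes below are assumptions, not measurements):

      CARD_GB = 128
      ASSUMED_SIZES_MB = {
          "photos (~3 MB JPEG each)": 3,
          "songs (~5 MB MP3 each)": 5,
          "hours of HD video (~2 GB/hour)": 2048,
      }
      for label, size_mb in ASSUMED_SIZES_MB.items():
          count = (CARD_GB * 1024) // size_mb
          print(f"{label}: roughly {count:,}")
      # roughly 43,690 photos, 26,214 songs, or 64 hours of HD video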

    Now you can have 128GB at your disposal just by dropping $200 at Amazon. Once you’ve installed it in your Samsung Galaxy, you’ve got what amounts to the storage of a much more expensive phone (especially if that more expensive phone is an iPhone). I also think a microSDXC card would lend itself to moving a large amount of data in a device like one of these hollowed-out nickels: http://www.amazon.com/2gb-MicroSD-Bundle-Mint-Nickel/dp/B0036VLT28

    My interest in this would be taking a cell phone overseas and back through U.S. Customs and Immigration, where it’s been shown in the past that they will hold onto devices for further screening. If I knew I could keep 128GB of storage hidden in a metal coin that passed through the baggage X-ray without issue, I would feel a greater sense of security. A card this size holds practically as much as the hard drives in my home computer and work laptops. It’s really a fundamental change in the portability of a large quantity of personal data outside the series of tubes called the Interwebs. Knowing that stash could be kept away from prying eyes, or the casual security of hosting providers, would certainly give me more peace of mind.

  • Review: Corning’s 33-foot Optical Thunderbolt cable allows you to move your Thunderbolt devices (or Mac) far away from your desk

    I’m so happy this is finally making it to the market. The promise of Thunderbolt in the early days was that it would be faster than any other connector on the market. Now, at long last, the optical flavor of Thunderbolt is slowly, painfully making its way out of development and into manufacturing. Here is a review of an optical Thunderbolt cable from Corning.

  • AMD Launches First ARM-based Server CPU | EE Times

    Image: AMD (via CrunchBase)

    In addition, AMD is planning to contribute to the Open Compute Project with a new micro-server design that utilizes the Opteron A-series, along with other architecture specifications for motherboards that Facebook helped develop called “Group Hug,” an agnostic server board design that can support traditional x86 processors as well as ARM chips.

    via AMD Launches First ARM-based Server CPU | EE Times.

    Kudos to Facebook for continuing to support the Open Compute Project, which they spearheaded some years back to spread expertise and knowledge of large-scale data centers. The new push is to allow a pick-and-choose, best-of-breed design in which the CPU is not a fixed quantity but can be chosen or changed like a hard drive or RAM module, with the motherboard firmware remaining more or less consistent regardless of the CPU chosen. This would allow mass customization based solely on the best CPU for a given job (HTTP, DNS, compute, storage, etc.). Spare capacity could be allowed to erode a little, so that a general-purpose CPU is scheduled more aggressively while some of its former, less efficient services migrate to more specialized mobile-class CPUs on another cluster, each CPU running the protocols and services it inherently does best. This flies further in the face of always choosing general-purpose CPUs and letting the software do most of the heavy lifting once the programming is completed.
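    As a toy sketch of that “best CPU for the job” idea (the node types, services and preferences below are entirely made up), a scheduler could treat CPU architecture as just another attribute to match against a workload, like RAM or disk:

      # Hypothetical mapping of services to preferred node types in a mixed cluster.
      SUITABILITY = {
          "http":    "arm-microserver",   # many light, concurrent requests
          "dns":     "arm-microserver",
          "compute": "x86-highclock",     # heavy single-threaded work
          "storage": "x86-manycore",      # throughput and I/O lanes
      }

      def place(service, nodes):
          """Pick a free node whose type matches the service's preference,
          falling back to any free node if none matches."""
          preferred = SUITABILITY.get(service)
          free = [n for n in nodes if n["free"]]
          for node in free:
              if node["type"] == preferred:
                  return node
          return free[0] if free else None

      nodes = [
          {"name": "a1", "type": "arm-microserver", "free": True},
          {"name": "x1", "type": "x86-highclock",   "free": True},
      ]
      print(place("http", nodes)["name"])   # -> a1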

  • SLC vs MLC – does it matter any more?

    SLC vs MLC – does it matter any more?

    A very good survey of the flash memory choices for enterprise storage. SLC was at one time the only flash technology reliable enough for enterprise storage. Now MLC is catching up, allowing larger drives to be purchased at the same or a slightly lower price than the SLC versions from the same manufacturers. It’s a sign that MLC is maturing and becoming “good enough” for most uses.

    Eric Slack, StorageSwiss.com – The Home of Storage Switzerland

    When an IT professional starts looking into solid state drives (SSDs) they quickly learn that flash is very different from magnetic disk drives. Flash employs a much more complex write and erase process than traditional hard disk drive (HDD) technology does, a process that impacts performance, reliability and the device’s lifespan (flash eventually wears out). To address potential concerns, vendors have historically sold different types of flash, with the more expensive SLC (Single-Level Cell) being used in more critical environments. But with advances in controller technologies, is the more economical MLC (Multi-Level Cell) good enough for the enterprise?

    SLC or MLC

    As indicated above there are actually different types of NAND flash used in storage products. SLC NAND supports two logical states enabling each cell to store one bit of data. MLC NAND, and its “enterprise” cousin eMLC, have a capacity of up to four bits per cell. This increased…

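    A rough way to see why packing more bits into a cell is harder on the flash: each extra bit doubles the number of voltage states the cell has to resolve, which shrinks the margins and cuts endurance. The program/erase cycle figures below are ballpark numbers often quoted around this time, not vendor specs, and “MLC” is used here in its narrow two-bit sense:

      CELL_TYPES = {
          # name: (bits per cell, rough P/E cycle endurance -- assumed, not vendor data)
          "SLC":  (1, 100_000),
          "eMLC": (2, 30_000),
          "MLC":  (2, 10_000),
          "TLC":  (3, 3_000),
      }
      for name, (bits, endurance) in CELL_TYPES.items():
          states = 2 ** bits   # voltage levels the cell must distinguish
          print(f"{name}: {bits} bit(s)/cell -> {states} states, ~{endurance:,} P/E cycles")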

  • 10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times

    OpenCL logo (Photo credit: Wikipedia)

    OpenCL is a breakthrough precisely because it enables developers to accelerate the real-time execution of their algorithms quickly and easily — particularly those that lend themselves to the considerable parallel processing capabilities of FPGAs (which yield superior compute densities and far better performance/Watt than CPU- and GPU-based solutions)

    via 10 Reasons OpenCL Will Change Your Design Strategy & Market Position | EE Times.

    There’s still a lot of untapped energy available in the OpenCL programming tools. Apple is still the single largest manufacturer that has adopted OpenCL across a large number of its products (OS and app software). And I know from reading about supercomputing on GPUs that some large-scale hybrid CPU/GPU machines have been ranked worldwide (the Chinese Tianhe systems being the first and biggest example). This article from EE Times encourages anyone with a background in C programming to give it a shot and see which algorithms could stand to be accelerated using the resources on the motherboard alone. But being EE Times, they are also touting the benefits of putting FPGAs in the mix as well.

    To date the low-hanging fruit for desktop PC makers and their peripheral designers and manufacturers has been to reuse the GPU as a massively parallel co-processor where it makes sense. But as the EE Times writer emphasizes, FPGAs can be equal citizens too and might provide even more flexible acceleration. Interest in the FPGA as a co-processor, from the desktop up to enterprise data center motherboards, was brought to the fore by AMD back in 2006 with the Torrenza initiative and its open CPU socket. The hope back then was that giving a secondary specialty processor (at the time, an FPGA) a seat on the motherboard might open up a market no one had addressed to that point. So depending on your needs and what extra processors you have available on your motherboard, OpenCL might be generic enough going forward to get a boost from ALL the available co-processors in the box.

    Whether or not we see benefits on the consumer-level desktop is very dependent on OS-level support for OpenCL. To date the biggest adopter of OpenCL has been Apple, which needed an OS-level acceleration API for video-intensive apps like editing suites; eventually Adobe recompiled some of its Creative Suite apps to take advantage of OpenCL on MacOS. On the PC side, Microsoft has always had DirectX as its API for accelerating any number of multimedia apps (for playback and editing) and is less motivated to incorporate OpenCL at the OS level. But that’s not to say a third-party developer who saw a benefit to OpenCL over DirectX couldn’t create their own plumbing and libraries, ship a runtime package that used OpenCL to support their apps, or license it to anyone who wanted it as part of a larger installer (say for a game or a multimedia authoring suite).

    For the data center this makes far more sense than for the desktop, as DirectX isn’t seen as a scientific computing API or a means of using a GPU as a numeric accelerator for scientific calculations. In that context, OpenCL might be a nice, open, easy-to-adopt library for people working on compute farms with massive numbers of both general-purpose CPUs and GPUs handing off parts of a calculation to one another over the PCI bus or across CPU sockets on a motherboard. Everyone’s needs are going to vary, widely in some cases, but OpenCL could make that variation easier to address by providing a common library that lets you touch all the available co-processors when a computation needs to be sped up. So keep an eye on OpenCL as a competitor to the GPGPU-style APIs and libraries put out by nVidia, AMD or Intel. OpenCL might help people bridge the differences between those manufacturers too.
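    For a sense of what “giving it a shot” looks like, here is a minimal vector-add sketch using the pyopencl bindings rather than raw C. It assumes an OpenCL runtime plus the pyopencl and numpy packages are installed; the kernel itself is ordinary OpenCL C embedded as a string, and the same code runs on whatever device the runtime exposes (GPU, CPU or FPGA).

      import numpy as np
      import pyopencl as cl

      a = np.random.rand(1024).astype(np.float32)
      b = np.random.rand(1024).astype(np.float32)

      ctx = cl.create_some_context()          # picks any available OpenCL device
      queue = cl.CommandQueue(ctx)
      mf = cl.mem_flags

      a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
      b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
      out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

      program = cl.Program(ctx, """
      __kernel void add(__global const float *a,
                        __global const float *b,
                        __global float *out) {
          int gid = get_global_id(0);
          out[gid] = a[gid] + b[gid];
      }
      """).build()

      program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

      result = np.empty_like(a)
      cl.enqueue_copy(queue, result, out_buf)
      assert np.allclose(result, a + b)       # same math, run on the co-processor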

  • MLC vs. SLC – Podcast

    Definitely worth checking this out if you’re a solutions architect spec’ing new hardware for a project going into a data center. I’ll be listening, that’s for sure.

    Charlie Hodges, StorageSwiss.com – The Home of Storage Switzerland

    Does the difference between MLC vs SLC matter anymore? Storage Switzerland Senior Analyst Eric Slack and I talk about his latest article on MLC and SLC and how manufacturers are working to make MLC more acceptable in the enterprise.

    To read Eric’s report, click on the picture below.

    Link to Eric Slack’s new report on MLC vs SLC.


  • Seagate’s LaCie touts a 25TB (not a typo) box o’ disks for your DESK • The Register

    Image of a dismantled Seagate ST-225 hard disk: 5¼″ MFM drive with a stepper actuator. Capacity: 21.4 MB, Speed: 3600 rpm, Average Seek Time: 65 ms, Heads: 4. (Photo credit: Wikipedia)

    Seagate subsidiary LaCie has launched a set of external storage boxes using a 5TB Seagate hard drive – even though disk maker Seagate hasn’t officially launched a 5TB part.

    via Seagate’s LaCie touts a 25TB (not a typo) box o’ disks for your DESK • The Register.

    There isn’t a whole lot of activity in new designs and advances for spinning magnetic hard drives these days. The capacity wars have plateaued around 4TB or so. The next big threshold to cross is either shingled recording or HAMR (which uses a laser to heat the surface just prior to a write being committed to the disk). Because of the technical advances required, and a shrinking field of manufacturers (there aren’t as many as there were a while ago), the speed at which higher-density drives hit the market has slowed. We saw 1TB and 2TB show up quickly, one after the other, but the 3TB and 4TB drives followed much more slowly, and usually at the premium end of the market. Now Seagate has stitched together a 5TB drive and LaCie is rushing it into a number of its own desktop and prosumer-level products.

    The assumption for now is that Seagate has adopted shingled recording (which overlaps written tracks in a shingle-like pattern to increase density). We’ll see how well that design decision performs over the coming months as the early adopters and fanbois who need every last terabyte they can get for their game ROMs, warez and film/music collections put these drives through their paces.
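    To sketch why shingling is a trade-off and not a free lunch (the band size below is made up for illustration): because tracks overlap, updating one track in a band means rewriting every track shingled on top of it, which is where the write penalty comes from.

      TRACKS_PER_BAND = 8   # hypothetical band size

      def tracks_rewritten(track_in_band):
          """Tracks that must be rewritten to update one track in an SMR band:
          the target track plus every overlapping track after it."""
          return TRACKS_PER_BAND - track_in_band

      for t in range(TRACKS_PER_BAND):
          print(f"update track {t}: rewrite {tracks_rewritten(t)} track(s)")
      # On a conventional (non-shingled) drive the answer would always be 1.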

  • The Memory Revolution | Sven Andersson | EE Times

    A 256K×4 Dynamic RAM chip on an early PC memory card. (Photo by Ian Wilson) (Photo credit: Wikipedia)

    In almost every kind of electronic equipment we buy today, there is memory in the form of SRAM and/or flash memory. Following Moore’s law, memories have doubled in size every second year. When Intel introduced the 1103 1Kbit dynamic RAM in 1971, it cost $20. Today, we can buy a 4Gbit SDRAM for the same price.

    via The Memory Revolution | Sven Andersson | EE Times

    Worth a read: a look back from an Ericsson engineer surveying the use of solid-state, chip-based memory in electronic devices. It is always interesting to know how these things started and evolved over time. Advances in RAM design and manufacture are the quintessential example of Moore’s Law, even more so than the advances in processors over the same period. Yes, CPUs are cool and very much a foundation upon which everything else rests, but remember that Intel didn’t start out making microprocessors; it started out as a dynamic RAM chip company at a time when DRAM was just entering the market. That manufacturing experience is the foundation from which Gordon Moore could see the rate of change possible with silicon-based semiconductor manufacturing.
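    The excerpt’s numbers line up surprisingly well with the two-year doubling rule. A quick check, assuming the same $20 price point at both ends:

      import math

      growth = (4 * 2**30) / (1 * 2**10)      # 4 Gbit / 1 Kbit = 2**22
      doublings = math.log2(growth)           # 22 doublings
      years = doublings * 2                   # ~44 years at one doubling per two years
      print(doublings, years, 1971 + years)   # 22.0 44.0 2015.0 -- right about now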

    Now we’re looking at mobile smartphone processors and Systems on Chip (SoCs) advancing the state of the art. Desktop and server CPUs are making incremental gains, but the smartphone is really trailblazing in showing what’s possible. We went from stacking memory with the CPU (so-called 3D memory) to putting graphics accelerators (GPUs) in the mix, and now multi-core and fully 64-bit CPU designs are entering the market (in the form of the latest iPhones). It’s not just a memory revolution, but memory was definitely the driver when we migrated from magnetic core memory (state of the art in 1951–52, developed at MIT) to the dynamic RAM chip (state of the art in 1968–69). The drive to develop DRAM brought every other silicon-based process along with it, and all the boats were raised. So here’s to the DRAM chip that helped spur the revolution. Without those shoulders, the giants of today wouldn’t have anywhere to stand.

  • Google announces Project Tango, augmented reality for all

    Google announces Project Tango, augmented reality for all

    I hope they work directly with the Google Glass group and turn this into a “suite” of friendly, interoperable pieces and components. That would be a big plus. VR or AR doesn’t matter to me, I just want the augmentation to be real, and useful.

    VR-Zone – Tech News for Geeks

    Google officially unveils project Tango, an initiative that seeks to utilize smartphones to build upon Google’s already dominating mapping empire.

    Project Tango, a project that involves ‘specialists’ from the world over, will put the ability to create augmented reality data into the hands of co…

    Read more: http://vr-zone.com/articles/google-announces-project-tango-augmented-reality/72419.html
