Blog

  • Showcase Your Skills & Analyze Which Skills Are Trending With LinkedIn’s New Tool

    Image: LinkedIn (via CrunchBase)

    Professional network LinkedIn has just introduced the beta launch of a new feature LinkedIn Skills, a way for you to search for particular skills and expertise, and of course, showcase your own and in LinkedIn’s words, “a whole new way to understand the landscape of skills & expertise, who has them, and how it’s changing over time.”

    via Showcase Your Skills & Analyze Which Skills Are Trending With LinkedIn’s New Tool.

    It may not seem that important at first, especially if people don’t keep their LinkedIn profiles up to date. However, for the large number of ‘new’ users actively seeking positions in the job market, I’m hoping the data will prove more useful. Skill listings might be worth following over time to see how demand for those skills shifts in the marketplace. That is the promise, at least. My concern, though, is that just as grades have inflated over time at most U.S. universities, skills too will be overstated, exaggerated and ultimately untrustworthy as people try to compete with one another on LinkedIn.

  • OpenID: The Web’s Most Successful Failure | Wired.com

    First 37Signals announced it would drop support for OpenID. Then Microsoft’s Dare Obasanjo called OpenID a failure (along with XML and AtomPub). Former Facebooker Yishan Wong’s scathing (and sometimes wrong) rant calling OpenID a failure is one of the more popular answers on Quora.

    But if OpenID is a failure, it’s one of the web’s most successful failures.

    via OpenID: The Web’s Most Successful Failure | Webmonkey | Wired.com.

    I was always of the mind that single sign-on is a good thing, not a bad one. Any service, whether for work or outside of it, that can re-use an identifier and authentication should make things easier to manage and possibly more secure in the long run. There are proponents for and against anything that looks or acts like single sign-on. Detractors argue that if one of the services gets hacked, the attackers somehow gain access to your password and identity and can break into your accounts on every other system out there. In reality, with a typical single sign-on service you never send a password to the site you’re logging into (unless it is the source of record, like the website that hosts your OpenID). Instead you send something more like a scrambled message that only you could have originated and that the website you’re logging into can verify. That message is backed by your OpenID provider, the source of record for your identity online. So nobody else is storing your password, and nobody can break into all your other accounts when they hijack your favorite web service.
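
    To see why no shared password ever needs to travel, here is a rough sketch in Python of the general signed-assertion idea. This is not the actual OpenID wire protocol, and the names are invented; it only shows the shape of it: the relying website checks a keyed signature from your identity provider instead of ever handling a password.

        # Rough sketch of the signed-assertion idea behind single sign-on.
        # This is NOT the real OpenID wire protocol, just its general shape.
        import hmac, hashlib

        # A secret shared between the identity provider and the relying website.
        # (In real OpenID this association is negotiated or checked out-of-band.)
        association_secret = b"shared-association-key"

        def provider_sign(assertion: str) -> str:
            # The identity provider signs "user X is logged in"; no password is included.
            return hmac.new(association_secret, assertion.encode(), hashlib.sha256).hexdigest()

        def relying_site_verify(assertion: str, signature: str) -> bool:
            # The site you log into checks the signature; it never sees a password.
            expected = hmac.new(association_secret, assertion.encode(), hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, signature)

        assertion = "claimed_id=https://example.com/eric&nonce=20110131T120000Z"
        signature = provider_sign(assertion)
        print(relying_site_verify(assertion, signature))  # True: identity proven, no password exchanged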

    Where I work, I was a strong advocate for centralized identity management like OpenID. Some people thought its only use was as a single sign-on service. But real centralized identity management also encompasses the authorizations you hold once you have declared and authenticated your identity. And it is that authorization which makes a single sign-on service truly useful.

    I may be given a ‘role’ within someone’s website or page on a social networking site that either adds or removes levels of access to the person who has declared me a ‘friend’. If they want to redefine my level of privilege, all they have to do is change the privileges for that role, not for me personally, and all my levels of access change accordingly. Why? Because a role is kind of like a rank or group membership. Just as everyone in the army who is an officer can enjoy benefits like attending the officers’ club because they hold the role ‘officer’, I can see more of a person’s profile or personal details because I have been declared a friend. Nowhere in this is it necessary to define specific restrictions or levels of privilege for me individually; it is all based on my membership in a group. And if someone wants to eliminate that group or change the permissions for all its members, they do it once, and only once, to the definition of that role, and the change cascades out to every member from that point on. So OpenID can be authentication (which is where most people stop), and it can additionally be authorization (what am I allowed and not allowed to do once I prove who I am). It’s a very powerful and poorly understood capability.
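
    To make the role idea concrete, here is a minimal sketch in Python (illustrative only; the class names, permissions and user name are made up, not any real site’s API). Permissions hang off the role definition, so changing the role once changes what every member of that role can do.

        # Minimal role-based authorization sketch (illustrative; names are invented).
        class Role:
            def __init__(self, name, permissions):
                self.name = name
                self.permissions = set(permissions)

        class User:
            def __init__(self, name, roles=()):
                self.name = name
                self.roles = list(roles)

            def can(self, permission):
                # A user may do something if any role they hold grants it.
                return any(permission in role.permissions for role in self.roles)

        friend = Role("friend", {"view_profile", "view_photos"})
        me = User("eric", roles=[friend])

        print(me.can("view_photos"))               # True: granted via the 'friend' role
        friend.permissions.discard("view_photos")  # change the role definition once...
        print(me.can("view_photos"))               # ...and every member's access changes: False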

    The widest application I’ve seen so far of something like OpenID is the Facebook ‘sign-on’ service that lets you comment on articles at news websites and weblogs. Disqus is a third-party provider that acts as a hub for any site that wants to re-use someone’s Facebook or OpenID credentials to prove they are a real person and not a rogue spambot. That chain of identity is maintained by Disqus providing the plumbing back to whichever of the many services someone might subscribe to or participate in. I already have an OpenID, but I also have a Facebook account, and Disqus will let me use either one. Given how much information might be passed along by Facebook through a third party (something it is notorious for allowing applications to do), I chose my OpenID, which more or less says I am X user at X website and I am the owner of that website as well. A chain of authentication just good enough to let me comment on an article is what OpenID provides. Not too much information, just enough, travels back and forth. And because of that precision, doing away with unneeded private detail and with having to create an account on the website hosting the article, I can come and go as I please.

    That is the lightweight joy of OpenID.

  • Dave Winer’s EC2 for poets | Wired.com

    Image: Dave Winer (via Wikipedia)

    Winer wants to demystify the server. “Engineers sometimes mystify what they do, as a form of job security,” writes Winer, “I prefer to make light of it… it was easy for me, why shouldn’t it be easy for everyone?”

    via A DIY Data Manifesto | Webmonkey | Wired.com.

    Dave Winer believes Amazon’s Elastic Compute Cloud (EC2) is the path toward a more self-reliant, self-actualizing future for anyone who keeps any of their data on the Internet. So he proposes a project entitled EC2 for Poets. Having been a user of Dave’s blogging software in the past, Radio Userland, I’m very curious what the new project looks like.

    Back in the old days I paid $40 to Frontier for the privilege of reading and publishing my opinions on articles I subscribed to through the Radio Userland client. It was a great RSS reader at the time, and I loved being able to clip and snip bits of articles and embed my comments around them. I subsequently moved on to Bloglines and then Google Reader, in that order. Now I use WordPress to keep my comments and article snippets organized and published on the Web.

  • iPod classic stock dwindling at Apple

    Image: iPod classic (by Freimut, via Flickr)

    Apple could potentially upgrade the Classic to use a recent 220GB Toshiba drive, sized at the 1.8 inches the player would need.

    via iPod classic stock dwindling at Apple, other retailers | iPodNN.

    Interesting indeed: it appears Apple is letting supplies of the iPod Classic run low. There’s no immediate word as to why, but there could be a number of reasons, as this article speculates. Most technology news websites understand the divide between the iPhone/iPod touch operating system and the old legacy iPod devices (an embedded OS that only runs the device itself). Apple would like to consolidate its consumer product development efforts by slowly winnowing out non-iOS-based iPods. However, given the hardware requirements of iOS, Apple will be hard pressed to jam such a full-featured piece of software into the iPod nano and iPod shuffle. So whither the old click-wheel iPod empire?

  • Toshiba rolls out 220GB drive: Could the iPod Classic see a refresh?

    Image: first generation iPod (via Wikipedia)

    Toshiba Storage Device Division has introduced its MKxx39GS series 1.8-inch spinning platter drives with SATA connectors.

    via Toshiba rolls out 220GB, extra-compact 1.8-inch hard drive | Electronista.

    Seeing this announcement reminded me a little of the old IBM Microdrive, a tiny spinning disk packed into a CompactFlash-sized form factor. Those drives started out at 340MB, an astoundingly dense storage format that digital photographers gravitated to very quickly. The Microdrive was eventually improved to around 1GByte per drive in the same small form factor. In time the market for that kind of storage dried up, as smaller and smaller cameras became available with larger amounts of internal storage and slots for removable media like Sony’s Memory Stick or the SD Card format. The Microdrive was also hampered by a very high cost per MByte versus other available storage by the end of its useful lifespan.

    But no one knows what new innovative products might hit the market. Laptop manufacturers continued to improve their expansion bus, known over the years as PCMCIA, PC Card and eventually CardBus. The idea was that you could plug in any kind of device you wanted: a dial-up modem, a wired Ethernet adapter or a wireless network card. CardBus was 32-bit clean and designed to be as close to the desktop PCI expansion bus as possible. Folks like Toshiba were making small hard drives that fit the tiny dimensions of that slot, with all the drive electronics contained within the card itself. Storage sizes improved as the hard drive industry increased the density of its larger 2.5″ and 3.5″ desktop products.

    I remember the first 5GByte CardBus hard drive and marveling at how far the folks at Toshiba and Samsung had outdistanced IBM. It was followed soon after by a 10GByte drive. And just as we were wondering how cool this was, Apple created its own take on a product popularized by Rio: a new kind of handheld music player that primarily played back .mp3 audio files. It could hold 5GBytes of music (compared to the 128MBytes and 512MBytes of the top-of-the-line Rio products at the time). It had a slick, very easy to navigate interface with a spinning wheel you could click down on with your thumb. Yes, it was the first-generation iPod, and it demanded a large quantity of those little-bitty hard drives Toshiba and Samsung were bringing to market.

    Each year storage density would increase and a new generation of drives would arrive, and each year a new iPod would hit the market taking advantage of them. The numbers seemed to double very quickly: 20GB, 30GB (the first ‘video’-capable iPod), 40GB, 60GB, 120GB, and finally today’s iPod Classic at a whopping 160GBytes of storage. And then came the great freeze: the slowdown and transition to flash-memory-based iPods, which are mechanically solid state. No moving parts, no chance of mechanical failure or the data loss that comes with it, and speeds unmatched by any hard drive of any size currently on the market. The flash transition also meant lower power requirements, longer battery life, and for the first time a real option of marrying a cell phone with your iPod (I do know there was an abortive attempt to do this on a smaller scale with Motorola phones at Cingular). The first two options were 4GB and 8GB iPhones using solid-state flash memory. So whither the iPod Classic?

    The iPod Classic is still on the market for those wishing to pay a bit less than the price of an iPod touch. You get a much larger amount of total storage (for both video and audio), but capacity has stayed put at 160GBytes for a very long time now. Manufacturers like Toshiba hadn’t come out with any new parts, seeing the end in sight for the small 1.8″ hard drive, and Samsung dropped its 1.8″ hard drives altogether, seeing where Apple was going with its product plans. So I’m both surprised and a little happy to see Toshiba soldier onward with a new product. I think Apple should really do a product refresh on the iPod Classic. It could even add iOS as a means of up-scaling and up-marketing the device to people who cannot afford the iPod touch, leaving the price right where it is today.

  • IBM Teams Up With ARM for 14-nm Processing

    Image: iPad, iPhone, MacBook Pro (Big, Little & Little-est!)

    Monday IBM announced a partnership with UK chip developer ARM to develop 14-nm chip processing technology. The news confirms the continuation of an alliance between both parties that launched back in 2008 with an overall goal to refine SoC density, routability, manufacturability, power consumption and performance.

    via IBM Teams Up With ARM for 14-nm Processing.

    Interesting that IBM is striking out so far beyond the current state-of-the-art process node for silicon chips. 22nm or thereabouts is what most producers of flash memory are targeting for their next-generation products. Smaller feature sizes mean more chips per wafer, and higher density means storage capacities go up for both flash drives and SSDs without any increase in physical size (who wants a brick-sized external SSD, right?). It is also interesting that ARM is IBM’s partner for its most aggressive target yet in chip design rules. System-on-Chip (SoC) designers like ARM are now the state-of-the-art producers of power- and heat-optimized computing. Look at Apple’s custom A4 processor for the iPad and iPhone: that chip has lower power requirements than any other chip on the market and is currently leading the pack for battery life in the iPad (10 hours!). So maybe it does make sense to choose ARM right now, as it can benefit the most, and the fastest, from any shrink in the size of the wire traces used to create a microprocessor or a whole integrated system on a chip. Strength built on strength is a winning combination, and it shows that IBM and ARM have an affinity for the lower-power future of cell phone and tablet computing.

    But consider this also in light of the last article I wrote about Tilera’s product plans for cloud computing in a box. ARM chips could easily be the basis for much lower-power, much higher-density computing clouds. Imagine a Googleplex-style datacenter running ARM CPUs on cookie trays instead of commodity Intel parts. That’s a lot of CPUs and a lot less power draw, both big pluses for a Google design team working on a new data center. True, legacy software concerns might overrule a switch to lower-power parts. But if the savings on electricity would offset the opportunity cost of switching to a new CPU (and having to re-compile software for the new chip), then Google would be crazy not to seize on this.
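
    As a rough illustration of that trade-off, here is a back-of-envelope sketch; every number in it is a hypothetical placeholder rather than a measured figure for any real data center or chip.

        # Back-of-envelope power cost comparison. Every number below is a
        # hypothetical placeholder, not a measurement of any real deployment.
        def annual_power_cost(node_count, watts_per_node, dollars_per_kwh=0.10):
            hours_per_year = 24 * 365
            kilowatt_hours = node_count * watts_per_node * hours_per_year / 1000.0
            return kilowatt_hours * dollars_per_kwh

        x86_cost = annual_power_cost(node_count=10000, watts_per_node=250)
        arm_cost = annual_power_cost(node_count=10000, watts_per_node=60)
        savings = x86_cost - arm_cost

        print(f"x86 farm: ${x86_cost:,.0f}/yr  ARM farm: ${arm_cost:,.0f}/yr  savings: ${savings:,.0f}/yr")
        # If a one-time port/recompile costs less than a few years of that savings,
        # the lower-power parts start to look compelling.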

  • Chip upstart Tilera in the news

    Image: Diagram of a partial mesh network

    As of early 2010, Tilera had over 50 design wins for the use of its SoCs in future networking and security appliances, which was followed up by two server wins with Quanta and SGI. The company has had a dozen more design wins since then and now claims to have over 150 customers who have bought prototypes for testing or chips to put into products.

    via Chip upstart Tilera lines up $45m in funding • The Register.

    There hasn’t been a lot of news about Tilera recently, but the company is still selling products and raising funds through private investment. Its product road map shows great promise as well. I want to see more of its shipping product tested by the online technology press; I don’t care whether Infoworld, Network World, Tom’s Hardware or AnandTech does it. Whether it’s security appliances or actual multi-core servers, it would be cool to see Tilera compared, even if it were an apples-and-oranges kind of test. On paper, the mesh network connecting Tilera’s many cores is designed to set it apart from any other product currently on the market. Similarly, the ease of reaching the cores through that mesh is meant to make running a single system image much easier, as it is distributed across all the cores almost invisibly. In short, Tilera and its closest competitor SeaMicro are cloud computing in a single, solitary box.

    Cloud computing, for those who don’t know, is an attempt to create a utility like the water or electrical system in the town where you live. The utility maintains excess capacity, and what it doesn’t use it sells off to connected utility systems, so you always have enough power to cover your immediate needs with a little in reserve for emergencies. On days when people don’t use as much electricity, you cut production back a little or sell the excess to someone who needs it. Now imagine that the electricity is computing cycles doing additions, subtractions or longer-form mathematical analysis, all in parallel and scaling out to extra cores as needed depending on the workload. Amazon already sells a service like this, and Microsoft does too. You sign up to use their ‘compute cloud’, load your applications and data, and start crunching away while the meter runs. You get billed based on how much of the computing resource you used.
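
    To make the metered-billing idea concrete, here is a toy calculation; the rates are invented for illustration and do not reflect any provider’s actual price list.

        # Toy usage-based billing calculation. The rates are invented for
        # illustration and do not reflect any provider's actual pricing.
        RATES = {
            "instance_hours":   0.10,  # $ per instance-hour (hypothetical)
            "storage_gb_month": 0.15,  # $ per GB stored per month (hypothetical)
            "transfer_gb_out":  0.12,  # $ per GB transferred out (hypothetical)
        }

        usage = {"instance_hours": 720, "storage_gb_month": 50, "transfer_gb_out": 30}

        bill = sum(RATES[item] * quantity for item, quantity in usage.items())
        print(f"This month's bill: ${bill:.2f}")  # you pay only for what the meter recorded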

    Nowadays, unfortunately, data centers are full of single-purpose servers doing one thing and sitting idle most of the time. This has been enough of a concern that a whole industry has cropped up around splitting those machines into thinner slices with software like VMware. Those little slivers of a real computer then soak up the idle time of the once single-purpose machine and occupy far more of its resources. But you still have that full-sized hog of an old desktop tower sitting in a 19-inch rack, generating heat and sucking up too much power. Now it’s time to scale down the computer itself, and that’s where Tilera comes in with its multi-core, low-power, mesh-networked CPUs. And investment partners are rolling in as a result of the promise of this new approach!

    Numerous potential customers, venture capital outfits, and even fabrication partners jumped in to provide a round of funding the company wasn’t really even soliciting; Tilera had people falling all over themselves writing checks to get a piece of the pie before things take off. It’s a good sign in these stagnant times for startups. Hopefully this buys more time for the company’s roadmap of future CPUs, scaling up to the 200-core chip that would be the peak achievement in this quest for high-performance, low-power computing.

  • The Sandy Bridge Review: Intel Core i7-2600K – AnandTech

    Quick Sync is just awesome. It’s simply the best way to get videos onto your smartphone or tablet. Not only do you get most if not all of the quality of a software based transcode, you get performance that’s better than what high-end discrete GPUs are able to offer. If you do a lot of video transcoding onto portable devices, Sandy Bridge will be worth the upgrade for Quick Sync alone.

    For everyone else, Sandy Bridge is easily a no brainer. Unless you already have a high-end Core i7, this is what you’ll want to upgrade to.

    via The Sandy Bridge Review: Intel Core i7-2600K, i5-2500K and Core i3-2100 Tested – AnandTech :: Your Source for Hardware Analysis and News.

    Previously on this blog I have recounted stories from Tom’s Hardware and AnandTech.com about the wicked cool idea of tapping the vast resources inside your GPU while you’re not playing video games. GPU makers like nVidia and AMD both wanted to market their products to people who not only gamed but occasionally ripped video from DVDs and played it back on iPods or other mobile devices. The time sunk into those conversions was made somewhat less painful by the ability to run the process on a dual-core Wintel computer, browsing web pages while re-encoding video in the background. But to get better speeds one almost always needs to monopolize all the cores on the machine, and free software like HandBrake will happily use those extra cores, slowing your machine but effectively speeding up the transcode. There was hope that GPUs could accelerate transcoding beyond what was achievable with a multi-core Intel CPU. Another example is Apple’s widespread adoption of OpenCL as a pipeline for sending the GPU any video frames or video processing that needs to be done in iTunes, QuickTime or the iLife applications. And where I work, we get asked to do a lot of transcoding of video to different formats for customers; usually someone wants a rip of a DVD that they can put on a flash drive and take into a classroom.

    However, now it appears a revolution in speed is in the works, with Intel giving you faster transcodes essentially for free. I’m talking about Intel’s new Quick Sync technology, which uses the integrated graphics core as a video transcode accelerator. The transcode speeds are amazingly fast and, given that speed, trivial for anyone to use, including the casual user. In the past everyone seemed to complain about how slow their computer was, especially for ripping DVDs or transcoding the rips into smaller, more portable formats. Now it takes a few minutes to get an hour of video into the right format. No more blue Monday. Follow the link to the story and analysis from AnandTech.com, where they ran head-to-head comparisons of all the available techniques for re-encoding a Blu-ray release into a smaller .mp4 file encoded as H.264. They compared Intel’s four-core CPUs (which took the longest and produced pretty good quality) against GPU-accelerated transcodes and against the new Quick Sync technology coming soon on Sandy Bridge generation Intel Core i7 CPUs. It is wicked cool how fast these transcodes are, and it makes the process trivial compared to how long it takes to actually watch the video you spent all that time converting.
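
    For anyone who wants to try the two paths themselves, here is a rough command-line sketch driven from Python using ffmpeg rather than the tools AnandTech tested; it assumes an ffmpeg build with libx264 and Intel Quick Sync (h264_qsv) support, and the file names are placeholders.

        # Illustrative sketch: CPU (libx264) versus Quick Sync (h264_qsv) transcodes
        # driven through ffmpeg. Assumes an ffmpeg binary on PATH built with libx264
        # and Intel Quick Sync support; the file names are placeholders.
        import subprocess

        SOURCE = "ripped_movie.mkv"  # placeholder input file

        # Software encode: happily eats every CPU core it can get.
        subprocess.run([
            "ffmpeg", "-i", SOURCE,
            "-c:v", "libx264", "-preset", "medium", "-crf", "20",
            "-c:a", "aac", "-b:a", "160k",
            "cpu_encode.mp4",
        ], check=True)

        # Hardware-assisted encode on the integrated graphics core (Quick Sync).
        subprocess.run([
            "ffmpeg", "-i", SOURCE,
            "-c:v", "h264_qsv", "-preset", "fast", "-b:v", "4000k",
            "-c:a", "aac", "-b:a", "160k",
            "qsv_encode.mp4",
        ], check=True)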

    Links to older GPU accelerated video articles:

    https://carpetbomberz.com/2008/06/25/gpu-accelerated-h264-encoding/
    https://carpetbomberz.com/2009/06/12/anandtech-avivo/
    https://carpetbomberz.com/2009/06/23/vreveal-gpu/
    https://carpetbomberz.com/2010/10/18/microsoft-gpu-video-encoding-patent/

  • I’m Posting every week in 2011!

    I’ve decided I want to blog more. Rather than just thinking about doing it, I’m starting right now. I will be posting on this blog once a week for all of 2011.

    I know it won’t be easy, but it might be fun, inspiring, awesome and wonderful. Therefore I’m promising to make use of The DailyPost, and the community of other bloggers with similar goals, to help me along the way, including asking for help when I need it and encouraging others when I can.

    If you already read my blog, I hope you’ll encourage me with comments and likes, and good will along the way.

    Signed,

    Wing Commander L.E. Pooper

  • Next-Gen SandForce Controller Seen on OCZ SSD

    Image: SandForce (via CrunchBase)

    Last week during CES 2011, The Tech Report spotted OCZ’s Vertex 3 Pro SSD–running in a demo system–using a next-generation SandForce SF-2582 controller and a 6Gbps Serial ATA interface. OCZ demonstrated its read and write speeds by running the ATTO Disk Benchmark which clearly showed the disk hitting sustained read speeds of 550 MB/s and sustained write speeds of 525 MB/s.

    via Next-Gen SandForce Controller Seen on OCZ SSD.

    Big news: test samples of the SandForce SF-2000 series flash memory controllers are being shown in products demoed at the Consumer Electronics Show, and SSDs with SATA interfaces are testing through the roof. The numbers quoted for a 6Gbps SATA SSD are in the 500+ MB/sec. range. Previously you would need a PCIe-based SSD from OCZ or Fusion-io to get anywhere near that kind of sustained speed. Combine this with the possibility of the SF-2000 being installed on future PCIe-based SSDs and there’s no telling how far throughput will scale. If four of the Vertex drives were bound together as a RAID 0 set with SF-2000 controllers managing them, is it possible to see linear scaling of throughput? Could we see 2,000 MB/sec. on 8x PCIe SSD cards? And what would be the price of such a card fully configured with 1.2 TB of SSD storage? Hard to say what may come, but just the thought of being able to buy retail versions of these makes me think a paradigm shift is in the works that neither Intel nor Microsoft is really thinking about right now.
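
    As a quick sanity check on that scaling question, here is the ideal-case arithmetic; real RAID 0 sets give up some of this to controller, driver and bus overhead.

        # Ideal RAID 0 scaling estimate. This ignores controller, driver and PCIe
        # overhead, so real-world numbers would land somewhat lower.
        drives = 4
        sustained_read_mb_s = 550  # per-drive figure quoted in the article

        ideal_striped_read = drives * sustained_read_mb_s
        print(f"Ideal striped read: {ideal_striped_read} MB/s")  # 2200 MB/s, near the 2,000 MB/s guess

        # PCIe 2.0 carries roughly 500 MB/s per lane, so an 8-lane card (about 4 GB/s)
        # would not be the bottleneck for four such drives.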

    One comment on the original article over at Tom’s Hardware observed that the speeds quoted for this SATA 6Gbps drive are approaching the memory bandwidth of PC-133 DRAM, now several generations old. As I have said previously, I still have an old first-generation Titanium PowerBook from Apple that uses that very memory standard, PC-133. So given that SSDs are fast approaching the speed of somewhat older main memory, I can only say we are nearing a paradigm shift in desktop and enterprise computing. I dub thee the All Solid State (ASS) era, where no magnetic or rotating mechanical media enter into the equation. We run on silicon semiconductors from top to bottom, no giant magneto-resistive technology necessary. Even our removable media are flash-based USB drives we put in our pockets and carry around on key chains.
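
    For the curious, the comparison roughly checks out on peak theoretical numbers (sustained real-world throughput is lower on both sides).

        # Peak-bandwidth comparison between PC-133 SDRAM and a SATA 6Gbps SSD.
        # These are theoretical ceilings; sustained throughput is lower on both sides.
        pc133_clock_mhz = 133
        bus_width_bytes = 8  # 64-bit SDRAM bus
        pc133_peak_mb_s = pc133_clock_mhz * bus_width_bytes  # about 1,064 MB/s

        ssd_sustained_mb_s = 550  # read speed quoted for the Vertex 3 Pro demo

        print(f"PC-133 peak: ~{pc133_peak_mb_s} MB/s")
        print(f"SSD sustained read: {ssd_sustained_mb_s} MB/s, "
              f"about {ssd_sustained_mb_s / pc133_peak_mb_s:.0%} of PC-133's peak")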