Blog

  • Google Glass teardown puts rock-bottom price on hardware • The Register

    Google Glass OOB Experience 27126 (Photo credit: tedeytan)

    A teardown report on Google Glass is raising eyebrows over suggestions that the augmented reality headset costs as little as $80 to produce.

    via Google Glass teardown puts rock-bottom price on hardware • The Register.

    One more reason not to be a Glasshole: you don’t want to be a sucker. Given what the Oculus Rift sells for versus Google Glass, one has to ask why Glass is so much more expensive. It doesn’t do low-latency stereoscopic 3D. It doesn’t come with eye adapters matched to your eyeglass correction; Glass requires you to provide prescription lenses yourself if you really need them. It doesn’t have a large, full-color, high-resolution AMOLED display. So why $1,500 when the Rift is $350 and even the recently announced Epson Moverio is priced at $700?

    These days, with the proliferation of teardown sites and the experts at iFixit and their partners at Chipworks, it’s just a matter of time before someone writes up your Bill of Materials (BOM). Once that hits the Interwebs and gets communicated widely, all the business analysts and Wall Street hedge funders know how to predict the company’s profit based on sales. If Google retails Glass at the same price as the development kits, it’s going to be really difficult to compete for very long against lower-priced and more capable alternatives. I appreciate what Google’s done making it lightweight and power efficient, but it’s still $80 in parts being sold at $1,500. That’s the bottom line, that’s the Bill of Materials.
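
    As a back-of-the-envelope illustration of that analyst arithmetic, here’s a minimal sketch. The $80 BOM and $1,500 price come from the article; the unit volume is an invented placeholder:

    ```python
    # Rough gross-margin arithmetic an analyst might do from a BOM leak.
    # BOM and price are the article's figures; units_sold is hypothetical.

    bom_cost = 80.0          # estimated parts cost per unit (teardown figure)
    retail_price = 1500.0    # Glass Explorer Edition price
    units_sold = 10_000      # invented volume, purely for illustration

    gross_margin = (retail_price - bom_cost) / retail_price
    gross_profit = (retail_price - bom_cost) * units_sold

    print(f"Gross margin: {gross_margin:.1%}")   # ~94.7%
    print(f"Gross profit on {units_sold:,} units: ${gross_profit:,.0f}")
    ```

    Of course a BOM leaves out assembly, R&D, and distribution, so a number like that is an upper bound on margin, but the direction of the comparison with the Rift still holds.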

  • Epson Moverio BT-200 AR Glasses In Person, Hands On

    Wikitude Augmented Reality SDK optimized for Epson Moverio BT-200 (Photo credit: WIKITUDE)

    Even Moverio’s less powerful (compared to VR displays) head tracking would make something like Google Glass overheat, McCracken said, which is why Glass input is primarily voice command or a physical touch. McCracken, who has developed for Glass, said that more advanced uses can only be accomplished with something more powerful.

    via Epson Moverio BT-200 AR Glasses In Person, Hands On.

    Epson has swept in and gotten a head start on others in the smart glasses field. I think with their full head tracking system, plus something like a Microsoft Xbox Kinect-style projector and receiver pointed outward wherever you are looking, it might be possible to get a very realistic “information overlay.” Microsoft’s Xbox Kinect has a 3D projector/scanner built in, which could potentially become another sensor built into the Epson glasses. The Augmented Reality apps on Moverio only do edge detection to place the information overlay. If you had an additional 3D map (approximating shapes and depth as well), you might be able to correlate the two data feeds (edges and a 3D mesh) to get a really good informational overlay at close range, at normal arm’s-length working distances.

    Granted, the Kinect is rather large in comparison to the Epson Moverio glasses. Its resolution is also geared for longer distances, so at very short range the Xbox Kinect may not be quite what you’re looking for to improve the informational overlay. But an Epson Moverio paired with a Kinect-like 3D projector/scanner could tie into the head tracking and allow a greater degree of accurate video overlay. Check out this video for a hack that uses the Kinect as a 3D scanner:

    3D Scanning with an Xbox Kinect – YouTube
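
    To make the edges-plus-3D-mesh idea concrete, here’s a minimal sketch of correlating the two feeds to anchor an overlay. It assumes OpenCV and NumPy, RGB and depth frames already aligned to the same viewpoint, and an invented arm’s-length cutoff; it’s an illustration of the idea, not how any shipping Moverio app works:

    ```python
    # Sketch: fuse an edge map from the glasses' camera with a depth map
    # from a Kinect-like scanner to anchor an AR overlay at close range.
    # Assumes rgb_frame and depth_frame are pre-aligned to the same view.
    import cv2
    import numpy as np

    def overlay_anchor(rgb_frame, depth_frame, max_depth_mm=800):
        """Return (x, y) of nearby edge structure within arm's length, or None."""
        gray = cv2.cvtColor(rgb_frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)              # the edge-detection feed

        # Keep only edge pixels the depth feed says are within working distance.
        near = (depth_frame > 0) & (depth_frame < max_depth_mm)
        candidates = np.argwhere((edges > 0) & near)  # (y, x) rows
        if candidates.size == 0:
            return None

        # Anchor the overlay at the centroid of the nearby edge pixels.
        y, x = candidates.mean(axis=0).astype(int)
        return int(x), int(y)
    ```

    A real system would track many such anchors frame to frame and hand them to the head tracker, but even this crude fusion shows what a depth feed adds over edges alone.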

    Also, as the pull-quote mentions, Epson did an interesting cost-benefit analysis and decided a smartphone-level CPU and motherboard were absolutely necessary for making Moverio work. No doubt the light weight and miniature size of cellphone parts have by themselves revolutionized the mobile phone industry. Now it’s time to leverage all that work and see what else the super power-efficient mobile CPUs can do along with their mobile GPU counterparts. I think this sudden announcement by Epson is going to cause a tidal wave of product announcements similar to the wave following the iPhone introduction in 2007. Prior to that, BlackBerry and its pseudo-smartphone held a monopoly on the category they created (the mobile phone as email browser). Now Epson is trying to show there’s a much wider application of the technology beyond Google Glass and Oculus Rift.

  • The Neurogrid – What It Is and What It Is Not

    More info on the Neurogrid massively parallel computer. Comparing it to other AI experiments in modeling individual neurons is apt. I compare it to Danny Hillis’s Connection Machine (from Thinking Machines Corporation), which used ~65K individual 1-bit processors to model neurons. It was a great idea and a great experiment, but it never got very far in the commercial market.

  • Corning Announces Availability of USB 3.Optical Cables

    A TOSLINK fiber optic cable with a clear jacket that has a laser being shone onto one end of the cable. The laser is being shone into the left connector; the light coming out the right connector is from the same laser. (Photo credit: Wikipedia)

    Currently available in lengths of 10 meters, Corning will also be releasing USB 3.Optical cables of 15 and 30 meters later this year.  These cables can be purchased online at Amazon and Accu-Tech.

    via Corning Announces Availability of USB 3.Optical Cables.

    As someone who’s had to deal with webcams stretched across very long distances in classrooms and lecture halls, I can say a 30 meter cable is a godsend. I’ve used 10 meter cables with built-in extenders, and even that was a big step up. Here’s hoping prices eventually come down to a reasonable level, say below $100. I’m impressed that power can run across the same cable as the optical fiber. I assume both ends have electrical-optical converters, meaning they need to be powered; compared to CAT-5 cables with extenders, that seems pretty lightweight, with no need for outlets to power the extenders on both ends.

    Of course, CAT-5-based extenders are still very price competitive and come in so many formats that extending USB 3.0 over CAT-5 is trivial and probably cheaper in the 30 meter range. And CAT-5 runs can reach 50 to 100 meters for data running over TCP/IP on network switches. So CAT-5 with extenders converting to USB will keep the cost and performance advantage for some time to come.

  • Eye Tracking With The Oculus Rift

    Now THAT is amazing, but not out of the question either. Why couldn’t you do eye tracking with an Oculus Rift? How much extra hardware and how many extra calibration steps would you need for data collection like this? It seems like a great value-add for folks doing brain and cognitive science research with 3D test rigs. Maybe this will open up a market for the Oculus Rift as a research device.
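
    For a sense of what the extra calibration step might look like: research eye trackers typically map pupil-center coordinates from an eye camera onto display coordinates with a low-order polynomial fit. A minimal sketch, with every data point invented for illustration (the Rift has no eye camera out of the box, so the camera itself is the added hardware):

    ```python
    # Sketch: second-order polynomial calibration mapping pupil-center
    # coordinates (from a hypothetical eye camera added inside the headset)
    # to display coordinates. Every data point below is invented.
    import numpy as np

    def design_matrix(pupil_xy):
        x, y = pupil_xy[:, 0], pupil_xy[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    # Calibration: the user fixates a 3x3 grid of known on-screen targets
    # while we record the pupil center reported by the eye camera.
    pupil = np.array([[0.30, 0.40], [0.50, 0.41], [0.70, 0.42],
                      [0.31, 0.60], [0.51, 0.61], [0.71, 0.62],
                      [0.32, 0.80], [0.52, 0.81], [0.72, 0.82]])
    targets = np.array([[100, 100], [640, 100], [1180, 100],
                        [100, 400], [640, 400], [1180, 400],
                        [100, 700], [640, 700], [1180, 700]], dtype=float)

    coeffs, *_ = np.linalg.lstsq(design_matrix(pupil), targets, rcond=None)

    def gaze_point(pupil_xy):
        """Map one pupil observation to estimated screen coordinates."""
        return (design_matrix(np.atleast_2d(pupil_xy)) @ coeffs)[0]

    print(gaze_point(np.array([0.51, 0.61])))  # ~ [640, 400], mid-screen
    ```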

  • ‘Gods’ Make Comeback at Toyota as Humans Steal Jobs From Robots – Bloomberg

    Factory Automation with industrial robots for metal die casting in foundry industry, robotics in metal manufacturing (Photo credit: Wikipedia)

    “We need to become more solid and get back to basics, to sharpen our manual skills and further develop them,” said Kawai, a half century-long company veteran tapped by President Akio Toyoda to promote craftsmanship at Toyota’s plants. “When I was a novice, experienced masters used to be called gods, and they could make anything.”

    via ‘Gods’ Make Comeback at Toyota as Humans Steal Jobs From Robots – Bloomberg.

    It’s not always a Luddite reaction to eschew the new technology in favor of the old. I still do the work on my own gas-powered lawn equipment. It takes me longer, and I’m generally less skilled than a pro, but I learn a lot every time. Part of that learning helps me diagnose problems and, hopefully, get more life out of the equipment I have. In some ways, if we don’t practice those hard-won skills we become victims of the status quo.

    If you delegate tasks to robots because they can do them “better” and for less pay, what you get is a high-tech version of the status quo. A robot will not see the inefficiency in what it’s been tasked with doing. It’s not going to notice the room for improvement. It’s not going to suggest to the line manager, “Hey, this bolt needs to be mechanically hardened, it seems like it might shear off easily.” The same is true for the engineers and designers who build the production lines: if they aren’t well versed in the steps, or willing to take feedback from the production line, how long before we call an end to innovation?

    One of the great strengths of Japanese car manufacturing after World War 2 was W. Edwards Deming’s method of statistical quality control. Part of that is not simply getting zero defects out of a production line; some of it is gaining input from the workers as the whole car is put together. If you don’t have a feedback loop, and that’s what this article is pointing to, then you’re going to be making the same car at the same price for a very, very long time. Statistical quality control implies continuous improvement not just in the product but in the experience of the workers too: safety, doing a great job, enjoying what you are doing. All of those come into play, and all are lost when as much of the work as possible is turned over to the robots. Let’s not lose the hard-won skills of the machinist, fabricator, and assembler. Let’s exercise them, and become better at what we’re doing now and what we’ll be doing in the future.
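
    For the curious, the arithmetic behind Deming-style control charts is simple enough to sketch; the bolt-torque readings below are invented for illustration:

    ```python
    # Sketch: X-bar chart control limits, the core arithmetic of Deming-era
    # statistical process control. The torque readings are invented.
    import numpy as np

    # Five bolt-torque measurements per shift, four shifts of data.
    samples = np.array([
        [50.1, 49.8, 50.3, 50.0, 49.9],
        [50.2, 50.0, 49.7, 50.1, 50.0],
        [49.9, 50.4, 50.1, 49.8, 50.2],
        [50.0, 49.9, 50.2, 50.1, 49.7],
    ])

    xbar = samples.mean(axis=1)                              # per-shift means
    grand_mean = xbar.mean()
    rbar = (samples.max(axis=1) - samples.min(axis=1)).mean()

    A2 = 0.577                                 # standard constant for n = 5
    ucl, lcl = grand_mean + A2 * rbar, grand_mean - A2 * rbar

    print(f"UCL={ucl:.2f}  center={grand_mean:.2f}  LCL={lcl:.2f}")
    out = np.where((xbar > ucl) | (xbar < lcl))[0]
    print("Out-of-control shifts:", out if out.size else "none")
    ```

    The chart flags shifts that drift out of control, but as the post argues, it’s the workers feeding back why a shift drifted that makes the method work.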

  • Revolutionary computers are on the way. Now we just need to know how to program them

    Neural networks are on the move, now that multiple CPU cores fit on a much smaller PC card. I’ll be keeping an eye on “NeuroGrid”.

    Derrick Harris, Gigaom

    A team of Stanford scientists has created a circuit board, dubbed “NeuroGrid,” consisting of 16 computing cores that simulate more than 1 million neurons and billions of synapses. They think it could be mass produced for about $400 per board, meaning it would be economically feasible to embed the boards into everything from robots to artificial limbs in order to speed up their computing cycles while significantly reducing their power consumption.

    But even if that’s possible, there would still be one big problem: Right now, NeuroGrid requires, essentially, a neuroscientist in order to program it.

    It’s arguably a bigger problem than the cost of production (although the $40,000 price tag for the prototype version would be very prohibitive), because processor architectures are nothing without people to build applications for them. We’re already used to processors and microchips embedded in many of the things we use, but most are slow, weak and power-hungry compared to…


  • AnandTech | Apple’s Cyclone Microarchitecture Detailed

    So for now, Cyclone’s performance is really used to exploit race to sleep and get the device into a low power state as quickly as possible.

    via AnandTech | Apple’s Cyclone Microarchitecture Detailed.

    Race to sleep is the new, new thing for mobile CPUs. Power conservation is done by parceling out a task across more cores or running at a higher clock speed: all the cores execute and complete the task, then the cores are put to sleep or into a much lower power state. That’s how you get things done and still maintain a 10 hour battery life on an iPad Air or iPhone 5s.
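
    A rough sketch of the energy arithmetic behind race-to-sleep; all the power and timing figures are invented round numbers, just to show the shape of the trade-off:

    ```python
    # Sketch: why finishing fast and sleeping can beat running slow.
    # All power/time numbers are invented round figures, not measurements.

    def energy_joules(active_w, active_s, sleep_w, total_s):
        """Energy over a fixed window: a burst of work, then sleep."""
        return active_w * active_s + sleep_w * (total_s - active_s)

    WINDOW = 10.0     # seconds available to finish the task
    SLEEP_W = 0.05    # deep-sleep power draw

    # Fast core: 2 W for 1 s, then sleeps for 9 s.
    fast = energy_joules(active_w=2.0, active_s=1.0, sleep_w=SLEEP_W, total_s=WINDOW)
    # Slow core: 0.5 W but needs 8 s, so it sleeps for only 2 s.
    slow = energy_joules(active_w=0.5, active_s=8.0, sleep_w=SLEEP_W, total_s=WINDOW)

    print(f"race-to-sleep: {fast:.2f} J   slow-and-steady: {slow:.2f} J")
    # 2*1 + 0.05*9 = 2.45 J  vs  0.5*8 + 0.05*2 = 4.10 J
    ```

    The real trade-off also depends on voltage and frequency scaling, but the shape is the same: the sooner everything is asleep, the less the battery drains.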

    So even though a mobile processor could be the equal of the average desktop CPU, it’s the race to the sleep state that is the big differentiator now. That is what Apple’s adoption of the 64-bit ARMv8 architecture is bringing to market: the race to sleep. At the very beginning of the hints and rumors, 64-bit seemed more like an attempt to address more DRAM or gain some desktop-level performance. But it’s all for the sake of executing quickly and going into sleep mode to preserve battery capacity.

    I’m thinking now of some past articles covering the nascent market for low-power, massively parallel data center servers. 64 bits was an absolutely necessary first step to get ARM CPUs into blades and rack servers destined for low-power data centers. Memory addressing is considered a non-negotiable feature that even the most power-efficient server must have: no matter what CPU a server is designed around, memory addressing HAS to be 64-bit or it cannot be considered. That rule still applies today, and it remains the sticking point for folks sitting back and ignoring the Tilera architecture or SeaMicro’s interesting cloud-in-a-box designs. To date, it seems Apple was first to market with a 64-bit ARM design, without ARM actually supplying the base circuit design and layouts for the new generation of 64-bit ARM cores. Apple instead did the heavy lifting and engineering itself to get the 64-bit memory addressing it needed to continue its drive toward better battery life. Time will tell if this heralds other efficiency or performance improvements in raw compute power.
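
    The addressing arithmetic behind that 64-bit rule is easy to show:

    ```python
    # Why 32-bit addressing caps out: an N-bit pointer can name 2**N bytes.
    for bits in (32, 48, 64):
        print(f"{bits}-bit addresses reach {2**bits / 2**30:,.0f} GiB")

    # 32-bit addresses reach 4 GiB        <- the old ceiling
    # 48-bit addresses reach 262,144 GiB  <- what ARMv8 wires up for virtual addresses
    # 64-bit addresses reach 17,179,869,184 GiB
    ```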

  • SSDs are a Short-Term Phenomenon | EE Times

    DRAM SIL (Photo credit: Wikipedia)

    This makes perfect sense. In 2011, Objective Analysis published a report (How PC NAND Will Undermine DRAM) that found, through nearly 300 benchmarks, that a dollar’s worth of flash yielded a bigger performance boost than a dollar’s worth of DRAM, once some minimum DRAM requirement was met. This minimum level was actually relatively low — between 1 and 2 GB, depending on the benchmark.

    via SSDs are a Short-Term Phenomenon | EE Times.

    Now Jim Handy is talking my language. Flash per dollar IS the best value for a performance gain. I think if RAM chips were soldered directly to the motherboard and were sufficiently large (maybe not 1-2 GB, but say 4-8 GB), and Flash was added as a secondary memory layer, you would see some big boosts in computer performance. Look at DDR4, just now entering the market. The performance gains of this generation are mostly aimed at retiring operations, completing tasks, and dropping into low-power sleep mode. It’s no longer about clock speeds, registered rows, and refresh cycles; it’s about completing a read/write/flush operation and going to sleep as fast as you can to save power.

    Now that we’ve hit that plateau, why not adopt Flash as the main memory, with RAM in principle as a 4th-level cache? If people still want to upgrade or max out their RAM, let them buy the higher-end part, the enterprise data center chipset with extra DIMM sockets. Let consumers have a fixed RAM allocation with the RAM chips on the motherboard and all the DIMM sockets devoted to Flash memory instead. With proper OS and chipset support, this revolution could happen overnight.
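
    A sketch of why DRAM-as-cache in front of Flash could work, using the standard effective-latency arithmetic (the latency figures are round, illustrative numbers, not measurements of any shipping part):

    ```python
    # Sketch: effective access time with DRAM acting as a cache in front
    # of Flash "main memory". Latencies are round illustrative figures.

    def effective_latency_ns(hit_rate, dram_ns=100, flash_ns=10_000):
        """Average memory latency given a DRAM hit rate."""
        return hit_rate * dram_ns + (1 - hit_rate) * flash_ns

    for hit in (0.90, 0.99, 0.999):
        print(f"DRAM hit rate {hit:.1%}: {effective_latency_ns(hit):,.0f} ns")

    # 90.0%: 1,090 ns -- too slow
    # 99.0%: 199 ns   -- close to DRAM speed
    # 99.9%: 110 ns   -- working set fits, the Flash is nearly invisible
    ```

    The point of a fixed 4-8 GB of soldered RAM is to make that hit rate high enough that the Flash layer is nearly invisible.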

    IBM and SanDisk tickled our imaginations, and now I want to see the X6 server tech hit the broader consumer market. While we’ll still see SSDs and PCIe SSDs for some time to come, the real revolution is still waiting to take place: Flash DIMMs for everyone. If it takes three years to convince the motherboard manufacturers and Intel and AMD to go this route, fine, let’s get the discussion going now. Start prototyping and sampling chipsets and Flash DIMMs today. This might be enough of a product differentiator to make a desktop computer with upgradeable Flash DIMMs a hotter product than the consumer desktop is today.

  • The Lytro Illum Is Where Light Field Technology Meets Real Photography

    Interesting revision of the Lytro technology. It reminds me a little of the Black Magic Cinema Camera with its funny-shaped body design. I remember the initial breathless reports about how earth-shaking this light field camera was going to be. After that, I never saw a shipping product or an actual review per se. I think some samples were given to individuals, who said it was cool that you could set the depth of field after the picture was taken, or pull the subject into focus if it was initially shot out of focus.
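
    The set-the-focus-afterwards trick is essentially shift-and-add over the camera’s sub-aperture views. A minimal NumPy sketch on a toy light field (the array shapes, the integer-shift model, and the alpha values are all simplifying assumptions, not Lytro’s actual pipeline):

    ```python
    # Sketch: synthetic refocusing of a 4D light field by shift-and-add.
    # lf has shape (U, V, H, W): a U x V grid of sub-aperture images.
    # `alpha` picks the virtual focal plane; the values are illustrative.
    import numpy as np

    def refocus(lf, alpha):
        """Average the sub-aperture views, each shifted toward the chosen plane."""
        U, V, H, W = lf.shape
        out = np.zeros((H, W), dtype=np.float64)
        for u in range(U):
            for v in range(V):
                # Shift each view in proportion to its offset from the center.
                dy = int(round(alpha * (u - U // 2)))
                dx = int(round(alpha * (v - V // 2)))
                out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
        return out / (U * V)

    # Toy light field: 5x5 views of a 64x64 scene of random noise.
    lf = np.random.rand(5, 5, 64, 64)
    near_focus = refocus(lf, alpha=2.0)    # focus closer to the camera
    far_focus = refocus(lf, alpha=-1.0)    # focus farther away
    ```

    Objects whose parallax matches the chosen shift line up and sharpen; everything else blurs, which is exactly the focus-after-the-fact effect in the demos.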

    It reminds me in some ways of Carver Mead’s attempt (at Foveon) to design a 3-layer CMOS sensor that was sensitive to different wavelengths as the light traveled down through each layer. It was in essence a full-color-per-pixel sensor that did not require a micro color-filter grid bonded to the front surface (like all sensors today). That camera never really caught on either; it was extremely overpriced for the resolution the sensor could deliver. One conclusion you can draw is that not all good ideas make good, workable cameras. We’ll see how the Lytro Illum changes the equation, but I suspect it’s still a tough row to hoe.
