Archive for the ‘mobile’ Category
The acquisition makes Blippar one of the largest AR players globally, giving it a powerful positioning in the AR and visual browsing space, which may help its adoption in the mass consumer space where AR has tended to languish.
Layar was definitely one of the first to get out there and promote Augmented Reality apps on mobile devices. Glad to see there was enough talent and capability still resident there to make it worth acquiring. It's true what the article says: the only other big-name player helping promote Augmented Reality in this field is possibly Oculus Rift. I would add Google Glass to that mix as well, especially for AR (not necessarily VR).
Here's my latest DIY project, a smartphone based on a Raspberry Pi. It's called – wait for it – the PiPhone. It makes use of an Adafruit touchscreen interface and a SIM900 GSM/GPRS module to make phone calls.
Dave Hunt doesn't just do photography, he's a Maker through and through. The components are out there; you just need to know where to buy them. Once they're purchased, you get down to the brass tacks of what a cellphone actually IS, and that's what Dave has documented in his write-up of the PiPhone. Hopefully an effort like this will spawn enough copycats to trigger a landslide of DIY fab-and-assembly projects for people who want their own. I think it would be cool to have an unlocked phone I could use wherever I wanted with the appropriate carrier's SIM card.
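To get a feel for what "making a phone call" boils down to on a build like this: the SIM900 module speaks the standard Hayes-style AT command set over a serial UART. Here's a minimal sketch of the string-building side of that conversation; it assumes nothing about Dave's actual code, and the function names are mine.

```python
# Sketch of SIM900-style AT commands for a PiPhone-like build.
# Only builds/parses the command strings, so no hardware is needed;
# wiring and the serial port itself are left as assumptions.

def at_dial(number):
    """Build the command to start a voice call (trailing ';' means voice)."""
    return "ATD{};\r".format(number)

def at_hangup():
    """Build the command to end the current call."""
    return "ATH\r"

def command_accepted(response):
    """The module answers 'OK' when it accepts a command."""
    return "OK" in response
```

On real hardware you would write these strings to the module's serial port (e.g. with pyserial) and read the response back before issuing the next command.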
I think it's truly remarkable that Dave was able to get lithium-ion gel battery packs and touch-sensitive TFT displays off the shelf. The original work of designing, engineering and manufacturing those displays alone gave folks like Apple a competitive advantage. Being first to market with something that capable and forward-looking was a true visionary move. Now the vision is percolating down through the market, and even so-called "feature" phones or dumb-phones might have some type of touch-sensitive display.
This building by bits and pieces reminds me of the research Google is doing into open-hardware, modular cell phone designs like Project Ara, written up by Wired.com. Ara is an interesting experiment in divvying up the whole motherboard into block-sized functions that can be swapped in and out by the owner according to their needs. If you're not a camera hound, why spend extra money on an overly capable, very high-res camera? Why not add a storage module instead because you like to watch movies or play games? Or, in the case of open-hardware developers, why not develop a new module that others could then manufacture themselves, with a circuit board or even a 3D printer? The possibilities are numerous, and seeing what Dave Hunt did with his PiPhone as a lone individual working on his own proves there's a lot of potential in the open-hardware area for cell phones. Maybe this device or future versions will break some of the lock current monopoly providers have with their closed-hardware, closed-source products.
Even Moverio’s less powerful (compared to VR displays) head tracking would make something like Google Glass overheat, McCracken said, which is why Glass input is primarily voice command or a physical touch. McCracken, who has developed for Glass, said that more advanced uses can only be accomplished with something more powerful.
Epson has swept in and gotten a head start on others in the smart glasses field. I think with their full head-tracking system, plus something like the Microsoft Kinect's projector and receiver pointed outward wherever you are looking, it might be possible to get a very realistic "information overlay". The Kinect has an infrared projector and depth scanner built in, which could potentially become another sensor built into the Epson glasses. The Augmented Reality apps on Moverio only do edge detection to place the information overlay. If you had an additional 3D map (approximating shapes and depth as well), you might be able to correlate the two data feeds (edges and a 3D mesh) to get a really good informational overlay at close range, at normal arm's-length working distances.
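The idea of correlating the two feeds can be sketched very simply: use the depth map to throw away edge pixels that aren't at arm's length, and anchor the overlay on what's left. This is my own toy illustration of the concept, not anything Epson or Microsoft ships; names and the 0.8 m "arm's length" figure are assumptions.

```python
# Toy fusion of an edge feed and a depth feed: keep only edges within
# arm's reach and anchor the overlay at their centroid.
ARM_LENGTH_M = 0.8  # assumed working distance

def overlay_anchor(edges, depth, max_depth=ARM_LENGTH_M):
    """edges: iterable of (row, col) pixels from an edge detector.
    depth: 2D list of distances in metres (a stand-in for a depth map).
    Returns the (row, col) centroid of near-field edges, or None."""
    near = [(r, c) for r, c in edges if depth[r][c] <= max_depth]
    if not near:
        return None
    rows = sum(r for r, _ in near) / len(near)
    cols = sum(c for _, c in near) / len(near)
    return (rows, cols)
```

A real implementation would register the two sensors' coordinate frames first, but the filter-then-anchor step is the heart of it.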
Granted, the Kinect is rather large compared to the Epson Moverio glasses, and its resolution is geared for longer distances too. At very short distances the Kinect may not be quite what you're looking for to improve the informational overlay. But an Epson Moverio paired with a Kinect-like depth projector/scanner could tie into the head tracking and allow a greater degree of accuracy in the video overlay. Check out this video for a hack to use the Kinect as a 3D scanner:
Also, as the pull-quote mentions, Epson has done an interesting cost-benefit analysis and decided a smartphone-level CPU and motherboard were absolutely necessary to make Moverio work. No doubt the light weight and miniature size of cellphone internals have by themselves revolutionized the mobile phone industry. Now it's time to leverage all that work and see what else these super-power-efficient mobile CPUs can do, along with their mobile GPU counterparts. I think this sudden announcement by Epson is going to cause a tidal wave of product announcements similar to the wave that followed the iPhone introduction in 2007. Prior to that, BlackBerry and its pseudo-smartphone held a monopoly in the category they created (the mobile phone as email browser). Now Epson is trying to show there's a much wider application of the technology outside of Google Glass and Oculus Rift.
So for now, Cyclone’s performance is really used to exploit race to sleep and get the device into a low power state as quickly as possible.
Race to sleep is the new, new thing for mobile CPUs. Power conservation is achieved by parceling out a task across more cores or running at a higher clock speed; all cores execute and complete the task, then the cores are put to sleep or into a much lower power state. That's how you get things done and still maintain a 10-hour battery life on an iPad Air or iPhone 5s.
So even though a mobile processor can be the equal of the average desktop CPU, it's the race-to-sleep state that is the big differentiator now. That is what Apple's adoption of the 64-bit ARMv8 architecture is bringing to market: the race to sleep. At the very beginning of the hints and rumors, 64-bit seemed more like an attempt to address more DRAM or gain some desktop-level performance. But it's all for the sake of executing quickly and going into sleep mode to preserve battery capacity.
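The arithmetic behind race to sleep is simple: energy is power times time, and finishing fast buys you a long stretch at the idle wattage. The numbers below are illustrative assumptions, not Cyclone measurements.

```python
# Back-of-the-envelope race-to-sleep comparison over a fixed 10 s window.
# Wattages are made-up but plausible orders of magnitude.

def energy_joules(active_w, active_s, idle_w, idle_s):
    """Total energy = sum of power x time for each phase."""
    return active_w * active_s + idle_w * idle_s

# Fast core: burn 2 W for 1 s, then idle at 0.05 W for the remaining 9 s.
fast = energy_joules(2.0, 1.0, 0.05, 9.0)   # 2.45 J
# Slow core: sip 0.5 W, but take the whole 10 s to finish the same task.
slow = energy_joules(0.5, 10.0, 0.0, 0.0)   # 5.0 J
```

Even though the fast core draws four times the peak power, it uses about half the energy over the window, which is exactly the trade Apple is making.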
I'm thinking now of some past articles covering the nascent, emerging market for lower-power, massively parallel data center servers. 64 bits was an absolutely necessary first step to get ARM CPUs into blades and rack servers destined for low-power data centers. Memory addressing is considered a non-negotiable feature that even the most power-efficient server must have: no matter what CPU a server is designed around, memory addressing HAS to be 64-bit or it cannot be considered. That rule still applies today and will remain the sticking point for folks sitting back and ignoring the Tilera architecture or SeaMicro's interesting cloud-in-a-box designs. To date, it seems Apple was first to market with a 64-bit ARM design, without ARM actually supplying the base circuit design and layouts for the new generation of 64-bit ARM cores. Apple instead did the heavy lifting and engineering itself to get the 64-bit memory addressing it needed to continue its drive toward better battery life. Time will tell if this will herald other efficiency or performance improvements in raw compute power.
A pair of battery vendors are hoping that a new design which incorporates the use of an ultracapacitor material will help to improve and extend the life of lithium-ion battery packs.
First, a little background on what a capacitor is: https://en.wikipedia.org/wiki/Ultracapacitor#History
In short, it's like a very powerful, high-density battery for smoothing out the "load" of an electrical circuit: it helps prevent spikes and dips in the electricity as it flows through a device. But with recent work done on ultracapacitors, they can act more like a full-fledged battery, surviving vastly more charge/discharge cycles than a chemical cell. When one is combined with a real battery, you can do some pretty interesting things to help the capacitor and the battery work together, allowing longer battery life and higher total charge capacity. Many things can flow from combining ultracapacitors with a really high-end lithium-ion battery.
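The "ultra" part is easy to see from the standard formula for energy stored in a capacitor, E = ½CV². A quick sketch with assumed, datasheet-style numbers:

```python
# Energy stored in a capacitor: E = 0.5 * C * V^2.
# The capacitance and voltage figures below are illustrative assumptions.

def cap_energy_joules(capacitance_f, volts):
    """Energy in joules for capacitance in farads at the given voltage."""
    return 0.5 * capacitance_f * volts ** 2

ordinary = cap_energy_joules(100e-6, 2.7)  # 100 uF electrolytic: under a millijoule
ultra = cap_energy_joules(3000.0, 2.7)     # 3000 F ultracap cell: ~11 kJ
```

Seven orders of magnitude more capacitance is what moves these parts from circuit smoothing into battery territory.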
Any technology, tweak or improvement that promises at minimum a 10% improvement over current lithium-ion battery designs is worth a look. They're claiming a full 15% in this story from The Reg. And due to the redesign, it would seem it needs to meet regulatory/safety approval as well. Having seen JAL suffer battery issues on its Boeing 787s, I couldn't agree more.
There will be some heavy lifting to be done between now and when a product like this hits the market. Testing and failure analysis will ultimately decide whether or not this ultracapacitor/lithium-ion hybrid is safe enough for consumer electronics. I'm also hoping Apple and other design/manufacturing outfits are putting some eyes, ears and phone calls on this to learn more. Samsung too might be interested, though it seems more reliant on battery designs from outside the company. That's where Apple has the upper hand long term: it will design every part if needed in order to stay ahead of the competition.
The current paradigm has become increasingly complex, said Black, and HMC is a significant shift. It uses a vertical conduit called through-silicon via (TSV) that electrically connects a stack of individual chips to combine high-performance logic with DRAM die. Essentially, the memory modules are structured like a cube instead of being placed flat on a motherboard. This allows the technology to deliver 15 times the performance of DDR3 at only 30% of the power consumption.
Even though DDR4 memory modules have only been around in quantity for a short time, people are resistant to change. And the need for speed, whether it's SSDs stymied by SATA II throughput or systems married to DDR4 RAM modules, is still pretty constant. But many manufacturers and analysts wonder aloud, "isn't this speed good enough?" That is true to an extent: the current OSes and chipset/motherboard manufacturers are perfectly happy cranking out product supporting the current state of the art. But no one wants to be the first to push the ball of compute speed down the field. At least this industry group is attempting to get a plan in place for the next generation of memory modules. With any luck this spec will continue to evolve and sampled products will be sent 'round for everyone to review.
Given changes and advances in storage and CPUs (PCIe SSDs, 15-core Xeons), eventually a wall will be hit in compute per watt or raw I/O. Desktops will eventually benefit from any speed increases, but it will take time; we won't see 10% gains with each generation of hardware, and prices will need to come down before mainstream consumer goods manufacturers adopt these technologies. But as previous articles have stated, the "time to idle" measurement that laptops and mobile devices strive for might be reason enough for tablet and laptop manufacturers to push the state of the art and adopt these technologies faster than desktops.
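It's worth spelling out what the pull-quote's claim compounds to: 15× the DDR3 performance at only 30% of the power is a roughly 50× jump in performance per watt, which is what makes HMC interesting for the "time to idle" crowd.

```python
# The HMC pull-quote's claim, as arithmetic.
perf_ratio = 15.0    # claimed throughput vs. DDR3
power_ratio = 0.30   # claimed power draw vs. DDR3

perf_per_watt_gain = perf_ratio / power_ratio  # ~50x
```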
The first of three public workshops kicked off a conversation with the federal government on data privacy in the US.
by Andy Oram | @praxagora
Interesting topic covering a wide range of issues. I'm happy MIT sees fit to host a set of workshops on this and keep the pressure up. But as Andy Oram writes, the whole discussion at MIT was circumscribed by the notion that privacy as such doesn't exist (an old axiom from Scott McNealy, ex-CEO of Sun Microsystems).
No one at that MIT meeting tried to advocate for users managing their own privacy. Andy Oram mentions the Vendor Relationship Management (VRM) movement (thanks to Doc Searls, co-author of The Cluetrain Manifesto) as one mechanism for individuals to pick and choose what info is shared out, and to what degree. People remain willfully clueless or ignorant of VRM as an option when it comes to privacy. The shades and granularity of VRM are far more nuanced than the binary debate of Privacy vs. Security, and it's sad this held true for the MIT meet-up as well.
John Podesta's call-in to the conference mentioned an existing set of rules for electronic data privacy, dating back to the early 1970s and the fear that mainframe computers "knew too much" about private citizens, known as the Fair Information Practices: http://epic.org/privacy/consumer/code_fair_info.html (thanks to the Electronic Privacy Information Center for hosting this page). These issues seem to always exist, just in different forms at earlier times. They are not new, they are old. But each time there's a debate, we start all over as if the problem has never existed and never been addressed. If the Fair Information Practices rules are law, then all the case history and precedents set by those cases STILL apply to NSA and government surveillance.
I did learn one new term from reading about the conference at MIT: differential privacy. Apparently it's very timely, and some research work is being done in this category. Mostly it applies to datasets and other big data that need to be analyzed without uniquely identifying an individual in the dataset. You want to find out the efficacy of a drug without spilling the beans that someone has a "prior condition". That's the net effect of implementing differential privacy: you get the query result out of the dataset, but you never once know all the fields of the people that make up that result. That sounds like a step in the right direction and should honestly apply to phone and Internet company records as well. Just because you collect the data doesn't mean you should be able to free-wheel through it and do whatever you want. If you're mining, you should only get the net result of the query rather than snoop through all the fields for each individual. That to me is the true meaning of differential privacy.
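The standard way this is done in the research literature is the Laplace mechanism: release the true query answer plus random noise whose scale is the query's sensitivity divided by the privacy parameter epsilon. Here's a minimal sketch (function names are mine; a production system would need much more care):

```python
import math
import random

def laplace_noise(scale, u=None):
    """Draw Laplace noise via the inverse CDF; u in (0,1), random by default."""
    if u is None:
        u = random.random()
    u -= 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0, u=None):
    """Release a count with Laplace noise of scale sensitivity/epsilon.
    A counting query has sensitivity 1: one person changes it by at most 1."""
    return true_count + laplace_noise(sensitivity / epsilon, u)
```

Smaller epsilon means more noise and stronger privacy; the analyst still gets a usable aggregate, but no single row's presence can be confidently inferred, which is exactly the "query without the fields" property described above.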
This week during Mobile World Congress 2014, SanDisk introduced the world’s highest capacity microSDXC memory card, weighing a hefty 128 GB. That’s a huge leap in storage compared to the 128 MB microSD card launched 10 years ago.
Amazing to think how small the form factor and how large the storage size have gotten with microSD-format memory cards. I remember the introduction of SDXC cards and the jump from 32 GB to 64 GB in full-size SD cards. It didn't take long after that before the SDXC format shrunk down to microSD size. Given the size and the options to expand the memory on certain devices (notably, Apple is absent from this group), a card this big is going to allow a much longer timeline for storing pictures, music and video on our handheld devices. Prior to this, you would have needed a much larger M.2 or mSATA storage card to achieve this level of capacity, and a tablet or a netbook to plug those larger cards into.
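The pull-quote's ten-year comparison is worth doing as math: 128 MB to 128 GB is a 1024× jump, i.e. ten capacity doublings in ten years.

```python
import math

# 128 MB (2004) to 128 GB (2014): how fast did microSD capacity grow?
growth = (128 * 1024) / 128          # capacity ratio, in MB terms
doublings = math.log2(growth)        # 10 doublings
per_year = doublings / (2014 - 2004) # roughly one doubling per year
```

One doubling a year is a striking pace even by Moore's-law standards.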
Now you can have 128 GB at your disposal just by dropping $200 at Amazon. Once you've installed it in your Samsung Galaxy, you've got what would be a complete upgrade to a much more expensive phone (especially if it were an iPhone). I also think an SDXC microSD card would lend itself to moving a large amount of data in a device like one of these hollowed-out nickels: http://www.amazon.com/2gb-MicroSD-Bundle-Mint-Nickel/dp/B0036VLT28
My interest in this would be taking a cell phone overseas and going through U.S. Customs and Immigration, where it's been shown in the past they will hold onto devices for further screening. If I knew I could keep 128 GB of storage hidden in a metal coin that passed through the baggage X-ray without issue, I would feel a greater sense of security. A card this size holds practically as much as the hard drives in my home computer and work laptops. It's really a fundamental change in the portability of a large quantity of personal data outside the series of tubes called the Interwebs. Knowing that stash could be kept away from prying eyes or the casual security of hosting providers would certainly give me more peace of mind.
If there is any single number that people point to for resolution, it is the 1 arcminute value that Apple uses to indicate a “Retina Display”.
Earlier in my career, I had to try to recommend the resolution people needed to get a good picture using a scanner or a digital camera. As we know, the resolution arms race knows no bounds: first in scanners, then in digital cameras, and now in displays. How fine is fine enough? Is it noticeable, is it beneficial? The technical limits that enforce lower resolution are usually tied to cost. A consumer-level product has to fit into a narrow price range, and the perceived benefit of "higher quality" or sharpness is rarely enough to get someone to spend more. But as phones can be upgraded for free, and printers and scanners are now commodity items, you just keep slowly migrating up to the next model at little to no entry cost. And everything is just "better": all higher-res, and therefore by association higher quality, sharper, etc.
I used to quote, or try to pin down, a rule of thumb I found once regarding the acuity of the human eye. Some of this was just gained by noticing things when I started out using Photoshop and trying to print to imagesetters and laser printers. At some point in the past, someone decided 300 dpi is what a laser printer needed in order to reproduce text on letter-size paper. As for displays, I bumped into a quote from an IBM study on visual acuity indicating the human eye can discern display pixels in the 225 ppi range. I tried many times to find the actual publication where that appears so I could cite it, but no luck; I only found it as a footnote on a webpage from another manufacturer. Now in this article we get more stats on human vision, much more extensive than that vague footnote from all those years ago.
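The 1-arcminute figure from the pull-quote converts directly into a pixel density: a pixel subtending one arcminute at viewing distance d has a pitch of d·tan(1′), so the "retina" threshold is the reciprocal of that. A quick sketch (the viewing distances are my assumptions):

```python
import math

def retina_ppi(viewing_distance_in, arcminutes=1.0):
    """PPI at which one pixel subtends `arcminutes` of arc at this distance."""
    theta = math.radians(arcminutes / 60.0)
    pixel_pitch_in = viewing_distance_in * math.tan(theta)
    return 1.0 / pixel_pitch_in

phone = retina_ppi(12.0)   # ~286 ppi at a 12-inch phone-holding distance
laptop = retina_ppi(20.0)  # ~172 ppi at a 20-inch laptop distance
```

It's satisfying that the 1-arcminute rule at phone distance lands right in the neighborhood of both the 300 dpi laser-printer convention and that elusive 225 ppi IBM footnote.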
What can one conclude from all the data in this article? Just the same thing: resolution arms races are still being waged by manufacturers. This time, however, it's in mobile phones, not printers, not scanners, not digital cameras. Those battles were fought, and now there's damned little product differentiation. Mobile phones will fall into that pattern, and people will be less and less Apple fanbois or Samsung fanbois. We'll all just upgrade to a newer version of whatever phone is cheap and expect to always have the increased-spec hardware, higher resolution, better quality, all that jazz. It's one more case where everything old is new again. My suspicion is we'll see this happen again when a true VR goggle hits the market, with real competitors attempting to gain advantage through technical superiority or more research and development. Bring on the VR Wars, I say.