Archive for the ‘mobile’ Category
To do that, the researchers coated a lithium anode with a layer of hollow carbon nanospheres, to prevent the growth of the dendrites.
As research continues on incremental improvements to lithium-ion batteries, occasional discoveries are made. In this instance, the anode is switched to pure lithium with a coating to protect the highly reactive metal surface. The problem with using pure lithium is the growth of microcrystalline “dendrites”, a bit like stalagmites and stalactites in caves, across the whole surface. As the dendrites build up, the anode loses its efficiency and the battery slowly loses its ability to charge all the way. This research has shown how to coat a pure lithium anode with a layer of carbon nanospheres that acts as a permeable barrier between the electrolytic liquid in the battery and the pure lithium anode.
In past articles on Carpetbomberz.com we’ve seen announcements of other possible battery technologies like Zinc-Air, Lithium-Air and the possible use of carbon nanotubes as an anode material. This announcement is promising in that its added costs might be relatively small versus a wholesale change in battery chemistry. The article also points out how much lighter elemental Lithium is than the current anode materials (Carbon and Silicon). If the process of coating the anode is sufficiently inexpensive and can be done on an industrial production line, you will see this get adopted. But as with most experiments like these, scaling up and lowering costs is the hardest part. Hopefully this is one that will make it into shipping products and see the light of day.
The acquisition makes Blippar one of the largest AR players globally, giving it a powerful positioning in the AR and visual browsing space, which may help its adoption in the mass consumer space where AR has tended to languish.
Layar was definitely one of the first to get out there and promote Augmented Reality apps on mobile devices. Glad to see there was enough talent and capability still resident there to make it worth acquiring. It’s true, as the article says, that the only other big-name player helping promote Augmented Reality in this field is possibly Oculus Rift. I would add Google Glass to that mix as well, especially for AR (not necessarily VR).
Here’s my latest DIY project, a smartphone based on a Raspberry Pi. It’s called – wait for it – the PiPhone. It makes use of an Adafruit touchscreen interface and a Sim900 GSM/GPRS module to make phone calls.
Dave Hunt doesn’t just do photography, he’s a Maker through and through. And the components are out there; you just need to know where to look to buy them. Once they’re purchased, you get down to the brass tacks of what IS a cellphone anyway. And that’s what Dave has documented in his write-up of the PiPhone. Hopefully an effort like this will spawn enough copycats to trigger a landslide in DIY fab and assembly projects for people who want their own. I think it would be cool to just have an unlocked phone I could use wherever I wanted with the appropriate carrier’s SIM card.
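For the curious, the Sim900 module in a build like this speaks plain-text AT commands over a serial line. Here’s a minimal sketch of what dialing looks like; the port name and wiring are assumptions, not Dave’s exact setup:

```python
# Sketch of driving a SIM900 GSM module with AT commands, the general
# approach a PiPhone-style build uses. The module listens on a UART and
# accepts standard GSM AT commands as ASCII text.

def at_dial(number: str) -> bytes:
    """Build the AT command that starts a voice call (ATD<number>; --
    the trailing semicolon marks it as voice, not data)."""
    return f"ATD{number};\r".encode("ascii")

def at_hangup() -> bytes:
    """Build the AT command that ends the current call."""
    return b"ATH\r"

if __name__ == "__main__":
    # On real hardware you would open the Pi's UART with pyserial, e.g.:
    #   import serial
    #   port = serial.Serial("/dev/ttyAMA0", baudrate=115200, timeout=1)
    #   port.write(at_dial("5551234567"))
    # Here we just show the bytes that would go over the wire.
    print(at_dial("5551234567"))
```

The point is how little there is to it: a cellphone, at the brass-tacks level, is a screen, a battery, and a modem you talk to in plain text.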
I think it’s truly remarkable that Dave was able to get lithium-ion gel battery packs and touch-sensitive TFT displays off the shelf. The original work of designing, engineering and manufacturing those displays alone gave folks like Apple a competitive advantage. Being first to market with something that capable and forward-looking was a true visionary move. Now the vision is percolating downward through the market, and even so-called “feature” phones or dumb-phones might have some type of touch-sensitive display.
This building by bits and pieces reminds me a bit of the research Google is doing in open hardware, modular cell phone designs like the Ara Project written up by Wired.com. Ara is an interesting experiment in divvying up the whole motherboard into block-sized functions that can be swapped in and out, substituted by the owner according to their needs. If you’re not a camera hound, why spend the extra money on an overly capable, very high-res camera? Why not add a storage module instead because you like to watch movies or play games? Or in the case of open hardware developers, why not develop a new module that others could then manufacture themselves, with a circuit board or even a 3D printer? The possibilities are numerous, and seeing what Dave Hunt did with his PiPhone as a lone individual working on his own proves there’s a lot of potential in the open hardware area for cell phones. Maybe this device or future versions will break some of the lock that current monopoly providers have on their closed-hardware, closed-source products.
Even Moverio’s less powerful (compared to VR displays) head tracking would make something like Google Glass overheat, McCracken said, which is why Glass input is primarily voice command or a physical touch. McCracken, who has developed for Glass, said that more advanced uses can only be accomplished with something more powerful.
Epson has swept in and gotten a head start on others in the smart glasses field. I think with their full head-tracking system, and something like a Microsoft Kinect-style projector and receiver pointed outward wherever you are looking, it might be possible to get a very realistic “information overlay”. Microsoft’s Xbox Kinect has a 3D projector/scanner built in, which could potentially become another sensor in the Epson glasses. The Augmented Reality apps on Moverio only do edge detection to place the information overlay. If you had an additional 3D map (approximating shapes and depth as well), you might be able to correlate the two data feeds (edges and a 3D mesh) to get a really good informational overlay at close range, at normal arm’s-length working distances.
Granted, the Kinect is rather large compared to the Epson Moverio glasses, and its resolution is geared for longer distances. At very short range the Xbox Kinect may not be quite what you’re looking for to improve the informational overlay. But an Epson Moverio paired with a Kinect-like 3D projector/scanner could tie into the head tracking and allow a greater degree of accurate video overlay. Check out this video for a hack that uses the Kinect as a 3D scanner:
Also, as the pull-quote mentions, Epson has done an interesting cost-benefit analysis and decided a smartphone-level CPU and motherboard were absolutely necessary to make Moverio work. No doubt the light weight and miniature size of cellphone internals has by itself revolutionized the mobile phone industry. Now it’s time to leverage all that work and see what “else” the super power-efficient mobile CPUs can do along with their mobile GPU counterparts. I think this sudden announcement by Epson is going to cause a tidal wave of product announcements similar to the wave following the iPhone introduction in 2007. Prior to that, BlackBerry and its pseudo-smartphone were the monopoly holders in the category they created (the mobile phone as email browser). Now Epson is trying to show there’s a much wider application of the technology outside of Google Glass and Oculus Rift.
So for now, Cyclone’s performance is really used to exploit race to sleep and get the device into a low power state as quickly as possible.
Race to sleep is the new, new thing for mobile CPUs. Power conservation works by parceling a task out across more cores or running at a higher clock speed. All the cores execute and complete the task, then are put to sleep or into a much lower power state. That’s how you get things done and still maintain a 10-hour battery life on an iPad Air or iPhone 5s.
So even though a mobile processor could be the equal of the average desktop CPU, it’s the race-to-sleep state that is the big differentiator now. That is what Apple’s adoption of the 64-bit ARMv8 architecture is bringing to market: the race to sleep. At the very beginning of the hints and rumors, 64-bit seemed more like an attempt to address more DRAM or gain some desktop-level performance capability. But it’s all for the sake of executing quickly and going into sleep mode to preserve battery capacity.
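The arithmetic behind race to sleep is easy to sketch. With illustrative (made-up) power and timing numbers, a core that burns twice the power but finishes much sooner still comes out ahead over a fixed interval, because the remainder of the interval is spent at near-zero sleep power:

```python
# Toy "race to sleep" energy model. All wattages and timings here are
# illustrative assumptions, not measured figures for any real chip.

def energy_joules(active_watts, sleep_watts, task_seconds, window_seconds):
    """Total energy over a fixed window: run the task, then sleep
    for whatever is left of the window."""
    return (active_watts * task_seconds
            + sleep_watts * (window_seconds - task_seconds))

WINDOW = 10.0    # seconds between wake-ups
SLEEP_W = 0.05   # deep-sleep power draw in watts

# A slow core: 1 W while active, needs 8 s to finish the task.
slow = energy_joules(1.0, SLEEP_W, 8.0, WINDOW)
# A fast core: 2 W while active, finishes the same task in 3 s.
fast = energy_joules(2.0, SLEEP_W, 3.0, WINDOW)

print(slow, fast)  # the 2 W core that races to sleep uses less energy
```

The slow core spends 8.1 J over the window while the fast core spends 6.35 J, which is the whole argument in two numbers.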
I’m thinking now of some past articles covering the nascent, emerging market for lower-power, massively parallel data center servers. 64 bits was an absolutely necessary first step to get ARM CPUs into blades and rack servers destined for low-power data centers. Memory addressing is considered a non-negotiable feature that even the most power-efficient server must have: no matter what CPU a server is designed around, memory addressing HAS to be 64-bit or it cannot be considered. That rule still applies today and will remain the sticking point for folks sitting back and ignoring the Tilera architecture or SeaMicro’s interesting cloud-in-a-box designs. To date, it seems Apple was first to market with a 64-bit ARM design, without ARM actually supplying the base circuit design and layouts for the new generation of 64-bit ARM. Apple instead did the heavy lifting and engineering itself to get the 64-bit memory addressing it needed to continue its drive toward better battery life. Time will tell if this will herald other efficiency or performance improvements in raw compute power.
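Why memory addressing is the non-negotiable part is simple arithmetic: a 32-bit address space caps out at 4 GiB, far below what a data-center node needs.

```python
# The hard ceiling of 32-bit addressing vs. the headroom of 64-bit.
# (ARMv8 implementations commonly use 48-bit virtual addresses rather
# than the full 64 bits -- that detail is stated as general knowledge,
# not something from the article above.)

GIB = 2 ** 30  # bytes in a gibibyte

addr_32bit_gib = (2 ** 32) / GIB   # 4 GiB -- the 32-bit ceiling
addr_48bit_gib = (2 ** 48) / GIB   # 262144 GiB (256 TiB) of virtual space

print(addr_32bit_gib, addr_48bit_gib)
```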
A pair of battery vendors are hoping that a new design which incorporates the use of an ultracapacitor material will help to improve and extend the life of lithium-ion battery packs.
First, a little background on what a capacitor is: https://en.wikipedia.org/wiki/Ultracapacitor#History
In short, it’s like a very powerful, high-density battery for smoothing out the “load” of an electrical circuit. It helps prevent spikes and dips in the electricity as it flows through a device. But with recent work done on ultracapacitors, they can act more like a full-fledged battery that loses very little of its charge over time. When combined with a real battery, you can do some pretty interesting things to help the capacitor and the battery work together, allowing longer battery life and higher total charge capacity. Many things can flow from combining ultracapacitors with a really high-end lithium-ion battery.
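To put rough numbers on the comparison (datasheet-style figures I’m assuming for illustration, not values from the article): a large 3000 F ultracapacitor at 2.7 V stores about 3 Wh, while a single 18650 lithium-ion cell at 3.6 V and 2.5 Ah stores about 9 Wh.

```python
# Back-of-envelope energy comparison: ultracapacitor vs. Li-ion cell.
# The component ratings below are typical assumed values, not from the
# article being discussed.

def cap_energy_joules(farads, volts):
    """Ideal capacitor energy: E = 1/2 * C * V^2."""
    return 0.5 * farads * volts ** 2

def battery_energy_joules(volts, amp_hours):
    """Battery energy at nominal voltage: E = V * Ah * 3600."""
    return volts * amp_hours * 3600

ultracap = cap_energy_joules(3000, 2.7)    # large supercap module
cell = battery_energy_joules(3.6, 2.5)     # one 18650 Li-ion cell

print(ultracap, cell)  # joules: the battery still holds ~3x the energy
```

So the ultracapacitor isn’t a battery replacement on raw capacity; its value in a hybrid pack is handling the spikes and dips so the chemistry inside the lithium-ion cell is stressed less.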
Any technology, tweak or improvement that promises at minimum a 10% improvement over current lithium-ion battery designs is worth a look. They’re claiming a full 15% in this story from The Reg. And due to the redesign, it would seem it needs to meet regulatory/safety approval as well. Having seen JAL suffer battery issues on its Boeing 787s, I couldn’t agree more.
There will be some heavy lifting to be done between now and when a product like this hits the market. Testing and failure analysis will ultimately decide whether or not this ultracapacitor/lithium-ion hybrid is safe enough for consumer electronics. I’m also hoping Apple and other design/manufacturing outfits are putting some eyes, ears and phone calls on this to learn more. Samsung too might be interested, but they seemingly rely more on battery designs from outside the company. That’s where Apple has the upper hand long term: they will design every part if needed in order to keep ahead of the competition.
The current paradigm has become increasingly complex, said Black, and HMC is a significant shift. It uses a vertical conduit called through-silicon via (TSV) that electrically connects a stack of individual chips to combine high-performance logic with DRAM die. Essentially, the memory modules are structured like a cube instead of being placed flat on a motherboard. This allows the technology to deliver 15 times the performance of DDR3 at only 30% of the power consumption.
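The pull-quote’s two figures compound in an interesting way: 15x the performance at 30% of the power works out to a 50x improvement in performance per watt.

```python
# Reducing the article's HMC-vs-DDR3 claim to arithmetic.

performance_ratio = 15.0   # HMC throughput relative to DDR3 (from the quote)
power_ratio = 0.30         # HMC power as a fraction of DDR3's (from the quote)

# Performance per watt improves by the performance gain divided by the
# fraction of power consumed.
perf_per_watt_gain = performance_ratio / power_ratio

print(perf_per_watt_gain)  # 50.0
```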
Even though DDR4 memory modules have been around in quantity for only a short time, people are resistant to change. And the need for speed, whether it’s SSDs stymied by SATA-2 throughput or being married to DDR4 RAM modules, is still pretty constant. But many manufacturers and analysts wonder aloud, “isn’t this speed good enough?”. That is true to an extent: the current OSes and chipset/motherboard manufacturers are perfectly happy cranking out product supporting the current state of the art. But no one wants to be the first to keep pushing the ball of compute speed down the field. At least this industry group is attempting to get a plan in place for the next generation of memory modules. With any luck this spec will continue to evolve and sampled products will be sent ’round for everyone to review.
Given changes/advances in storage and CPUs (PCIe SSDs, 15-core Xeons), eventually a wall will be hit in compute per watt or raw I/O. Desktops will eventually benefit from any speed increases, but it will take time, and we won’t see 10% better with each generation of hardware. Prices will need to come down before any of the mainstream consumer goods manufacturers adopt these technologies. But as previous articles have noted, the “time to idle” measurement (which laptops and mobile devices strive to improve) might be reason enough for the tablet or laptop manufacturers to push the state of the art and adopt these technologies faster than desktops.