Mini PCI-Express Connector on Inspiron 11z Motherboard, Front (Photo credit: DandyDanny)
I don’t think there is any other way to say this other than to state that the XP941 is without a doubt the fastest consumer SSD in the market. It set records in almost all of our benchmarks and beat SATA 6Gbps drives by a substantial margin. It’s not only faster than the SATA 6Gbps drives but it surpasses all other PCIe drives we have tested in the past, including OCZ’s Z-Drive R4 with eight controllers in RAID 0. Given that we are dealing with a single PCIe 2.0 x4 controller, that is just awesome.
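Quick spec arithmetic shows why that isn't hyperbole. The line rates and 8b/10b encoding below are the published interface figures; the script is just the multiplication:

```python
# Back-of-the-envelope interface bandwidth, using published spec numbers.
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> 500 MB/s per lane each way.
pcie2_lane_MBps = 5e9 * (8 / 10) / 8 / 1e6   # transfers/s * encoding / bits-per-byte
print(f"PCIe 2.0 x4: {4 * pcie2_lane_MBps:,.0f} MB/s")   # 2,000 MB/s

# SATA 6 Gb/s also uses 8b/10b encoding -> ~600 MB/s ceiling before overhead.
sata3_MBps = 6e9 * (8 / 10) / 8 / 1e6
print(f"SATA 6 Gb/s: {sata3_MBps:,.0f} MB/s")            # 600 MB/s
```

Roughly three times the ceiling, from a single controller, before anyone has even optimized for the new interface.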
Listen well as you pine away for your very own SATA SSD. One day you will get that new thing. But what you really, really want is the new, NEW thing. And that, my friends, is quite simply the PCIe SSD. True, enterprise-level purchasers have had a host of manufacturers and models to choose from in this form factor. But the desktop market cannot afford Fusion-io products at roughly $15K per card fully configured; that's a whole different market. OCZ's RevoDrive line has spanned a wider range, from Fusion-io heights down to the top-end gamer market with the RevoDrive R-series PCIe drives. But those have always been SATA drives piggy-backed onto a multi-lane PCIe card (x4 or x8, depending on how many controllers were installed on the card). Now the evolutionary step of dumping SATA in favor of a native PCIe-to-NAND controller is slowly taking place. Apple has adopted it for the top-end Mac Pro revision (the price and limited availability have made it hard to publicize this architectural choice), and in the laptops Apple has shipped since Summer 2013 (I have the MacBook Air to prove it). Speedy, yes it is. But how do I get this on my home computer?
AnandTech was able to score an aftermarket drive through a third party in Australia, along with a PCIe adapter card for that very Samsung PCIe drive. So where there is a will, there is a way. From that purchase of both drive and adapter came this review of the Samsung PCIe drive. And all one can say looking through the benchmarks is: we have not seen anything yet. The drive-speed bottleneck that has throttled desktop and mobile computing since the dawn of the Personal Computer is slowly lifting, and not by a little but by a lot. This is going to herald a new age in personal computers, as close as anything to former Intel Chairman Andy Grove's 10X effect. Samsung's PCIe-native SSD is that kind of disruptive, perspective-altering product, one that will put all manufacturers on notice and force a sea change in design and manufacture.
For us end users of the technology, SSDs with SATA interfaces have already had a big-time impact on our laptops and desktops. But what I've been writing about, and trying to find signs of, ever since the first introduction of SSDs is the logical path through the legacy interfaces. Whether it's ATA/BIOS or the bridge chips that glue the motherboard to the CPU, a number of "old" architecture items are still hanging around on today's computers. Intel's adoption of UEFI has been a big step forward in shedding the legacy bottleneck components. Beyond that, native on-CPU PCIe controllers are a good step forward as well. Lastly, the sockets and bridging chips on the motherboard are the neighborhood improvements that again help speed things up. The last mile, however, is dumping the "disk" interface, the ATA/SATA spec, designed as a prerequisite for reading data off a spinning magnetic hard drive. We need to improve that last mile to the NAND memory chips, and then we're going to see the full benefit of products like the Samsung PCIe drive. That day is nearly upon us with the most recent motherboard/chipset revision from Intel. We may need another revision to get exactly what we want, but the roadmap is there, and all the manufacturers had better get on it. Samsung's driving this revolution… NOW.
He continues, “People are upset about privacy, but in one sense they are insufficiently upset because they don’t really understand what’s at risk. They are looking only at the short term.” And to him, there is only one viable answer to these potential risks: “You’re going to control your own data.” He sees the future as one where individuals make active sharing decisions, knowing precisely when, how, and by whom their data will be used. “That’s the most important thing, control of the data,” he reflects. “It has to be done correctly. Otherwise you end up with something like the Stasi.”
Sounds a little bit like VRM and a little bit like Jon Udell's Thali project. Wearables don't fix the problem of metadata being collected about you, no. You still don't control those incoming and outgoing feeds of information.
Sandy Pentland points out that a lot can be derived and discerned simply from the people you know. Every contact in your friend list adds one more bit of intelligence about you without anyone ever talking to you directly. This kind of analysis is only possible now due to the End User License Agreements posted by each of the collecting entities (the so-called social networking websites).
An alternative to this wildcat, frontier mentality among data collectors is Vendor Relationship Management (as proposed in the Cluetrain Manifesto). Doc Searls wants people to be able to share the absolute minimum necessary in order to get what they want or need from vendors on the Internet, especially the data-collecting types. From that point, if an individual wants to share more, they should be rewarded with something more in return from the people they share with (the prime example being vendors, the 'V' in VRM).
Thali allows you to share data as well, but in another way. Instead of letting someone into your data mesh in an all-or-nothing way, it lets strongly identified individuals have linkages into or out of your own data streams, whatever form those data streams may take. I think Sandy Pentland, Doc Searls and Jon Udell would all agree there needs to be some amount of ownership and control ceded back to the individual going forward. Too many of the vendors own the data and the metadata right now, and will do what they like with it, including responding to National Security Letters. So instead of being commercial ventures, they are swiftly evolving into branches or de facto subsidiaries of the National Security Agency. If we can place controls on the data, we'll maybe get closer to the ideal of social networking and controlled data sharing.
The Center IT outfit I work for is dumping as much on-premises Exchange mailbox hosting as it can. However, we are sticking with Outlook 365 as provisioned by Microsoft (essentially an Outlook'd version of Hotmail). It has the calendar and global address list we have all come to rely on. But as this article details for the rest of the Office suite, people aren't creating as many documents as they once did. We're viewing them, yes, but we just aren't creating them.
I wonder how much of this is due to re-use, or to authorship shifting up to much higher-level people. Your average admin assistant or even secretary doesn't draft anything dictated to them anymore, and the top-level types now would generally be embarrassed to dictate something to anyone. Plus, the culture of secrecy necessitates more one-to-one communications. And long-form writing? Who does that anymore? No one writes letters; they write brief emails or even briefer texts, Tweets or Facebook updates. Everything is abbreviated to such a degree that you don't need a thesaurus, pagination, or any of the super-specialized doo-dads and add-ons we all begged M$ and Novell to add to their premiere word processors back in the day.
From an evolutionary standpoint, we could get by with the original text editors first made available on timesharing systems. I'm thinking of utilities like line editors (that's really a step backwards, so I'm being facetious here). The point I'm making is that we went through a very advanced stage in the evolution of our writing tool of choice, and it became a monopoly. WordPerfect lost out and fell by the wayside. Primary, middle and secondary schools across the U.S. adopted M$ Word and made it a requirement. Every college freshman has been given discounts to further the loyalty to the Office suite. Now we don't write like we used to, much less read. What's the use of writing something pages long that no one will ever read? We've jumped the shark of long-form writing, and so the premiere app, the killer app for the desktop computer, is slowly receding behind us as we keep speeding ahead. Eventually we'll see it on the horizon, its sails the last visible part, then the crow's nest, then poof! It will disappear below the horizon line. We'll be left with our nostalgic memories of the first time we used MS Word.
Here's my latest DIY project, a smartphone based on a Raspberry Pi. It's called – wait for it – the PiPhone. It makes use of an Adafruit touchscreen interface and a SIM900 GSM/GPRS module to make phone calls.
Dave Hunt doesn't just do photography; he's a Maker through and through. The components are out there, you just need to know where to buy them. Once they're purchased, you get down to the brass tacks of what a cellphone actually IS. And that's what Dave has documented in his write-up of the PiPhone. Hopefully an effort like this will spawn enough copycats to trigger a landslide in DIY fab-and-assembly projects for people who want their own. I think it would be cool to have an unlocked phone I could use wherever I wanted with the appropriate carrier's SIM card.
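To give a flavor of how approachable the telephony side is, here's a minimal sketch (not Dave's actual code) of driving a SIM900-class module from Python over the Pi's serial port. The port name, baud rate, and phone number are assumptions; the AT commands are the standard GSM set from the module's documentation.

```python
# Sketch: dial a voice call on a SIM900-class GSM module with AT commands.
import time
import serial  # pyserial

PORT = "/dev/ttyAMA0"  # assumed: the Pi UART the module is wired to
BAUD = 115200          # assumed: a rate the module's autobaud will match

def send_at(ser, command, wait=1.0):
    """Send one AT command and return whatever the module replied."""
    ser.write((command + "\r\n").encode("ascii"))
    time.sleep(wait)
    return ser.read(ser.in_waiting or 1).decode("ascii", errors="replace")

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    print(send_at(ser, "AT"))                 # sanity check; expect "OK"
    print(send_at(ser, "AT+CSQ"))             # signal quality report
    print(send_at(ser, "ATD+15555550123;"))   # dial (hypothetical number)
    time.sleep(30)                            # stay on the call briefly
    print(send_at(ser, "ATH"))                # hang up
```

The "phone" part of a DIY phone really is that thin a layer; the hard work is the battery, display and packaging integration Dave did around it.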
I think it's truly remarkable that Dave was able to get lithium-ion gel battery packs and touch-sensitive TFT displays. The original work of designing, engineering and manufacturing those displays alone made them a competitive advantage for folks like Apple. Being first to market with something that capable and forward-looking was a true visionary move. Now the vision is percolating down through the market, and even so-called "feature" phones or dumb-phones might have some type of touch-sensitive display.
This building by bits and pieces reminds me a bit of the research Google is doing in open-hardware, modular cell phone designs like the Ara Project, written up by Wired.com. Ara is an interesting experiment in divvying up the whole motherboard into block-sized functions that can be swapped in and out, substituted by the owner according to their needs. If you're not a camera hound, why spend the extra money on an overly capable, very high-res camera? Why not add a storage module instead, because you like to watch movies or play games? Or, in the case of open-hardware developers, why not develop a new module that others could then manufacture themselves, with a circuit board or even a 3D printer? The possibilities are numerous, and an effort like Dave Hunt's PiPhone, built by a lone individual working on his own, proves there's a lot of potential in the open-hardware area for cell phones. Maybe this device or its future versions will break some of the lock current monopoly providers have with their closed-hardware, closed-source products.
One more reason not to be a Glasshole: you don't want to be a sucker. Given what the Oculus Rift is being sold for versus Google Glass, one has to ask why Glass is so much more expensive. It doesn't do low-latency stereoscopic 3D. It doesn't come with eye adapters matched to your eyeglass correction; Glass requires you to provide prescription lenses if you need them. It doesn't have a large, full-color, high-res AMOLED display. So why $1,500 when the Rift is $350, and even the recently announced Epson Moverio is priced at $700?
These days, with the proliferation of teardown sites and the experts at iFixit and their partners at Chipworks, it's just a matter of time before someone writes up your Bill of Materials (BOM). Once that hits the Interwebs and gets communicated widely, all the business analysts and Wall Street hedge funders can predict the company's profit based on sales. If Google retails Glass at the same price as the development kits, it's going to be very difficult to compete for long against lower-priced, more capable alternatives. I appreciate what Google has done making it lightweight and power-efficient, but it's still roughly $80 in parts selling for $1,500. That's the bottom line; that's the Bill of Materials.
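The margin math the analysts would run is one-liner territory. The $80 figure is the teardown estimate, and a BOM ignores real costs like assembly, R&D and support, so treat this as the crude upper bound it is:

```python
# Gross-margin math on the teardown numbers: $80 estimated BOM, $1,500 retail.
bom, price = 80.0, 1500.0
print(f"Gross margin on parts alone: {(price - bom) / price:.0%}")  # ~95%
print(f"Markup over BOM: {price / bom:.1f}x")                       # ~18.8x
```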
A TOSLINK fiber optic cable with a clear jacket that has a laser being shone onto one end of the cable. The laser is being shone into the left connector; the light coming out the right connector is from the same laser. (Photo credit: Wikipedia)
Currently available in 10-meter lengths, Corning's USB 3.Optical cables will also be released in 15- and 30-meter lengths later this year. These cables can be purchased online at Amazon and Accu-Tech.
Having had to deal with webcams stretched across very long distances in classrooms and lecture halls, I can say a 30-meter cable is a godsend. I've used 10-meter cables with built-in extenders, and even that was a big step up. Here's hoping prices eventually come down to a reasonable level, say below $100. I'm impressed that power can run across the same cable as the optical fiber. I assume both ends have electrical-optical converters, meaning they need to be powered. Even so, compared to CAT-5 cables with extenders it seems pretty lightweight: no need for outlets to power the extenders on both ends.
Of course, CAT-5-based extenders are still very price competitive and come in so many formats that extending USB 3.0 over CAT-5 is trivial and probably cheaper in the 30-meter range. And CAT-5 runs can reach 50 to 100 meters for data running over TCP/IP on network switches. So CAT-5 with extenders converting to USB will keep the cost and performance advantage for some time to come.
Race to sleep is the new, new thing for mobile CPUs. Power conservation now comes from parceling out a task across more cores or running at a higher clock speed: all cores execute, complete the task, and are then put to sleep or into a much lower power state. That's how you get things done and still maintain a 10-hour battery life on an iPad Air or iPhone 5s.
So even though a mobile processor can now equal the average desktop CPU, it's the race to the sleep state that is the big differentiator. That is what Apple's adoption of the 64-bit ARMv8 architecture is bringing to market: the race to sleep. At the very beginning of the hints and rumors, 64-bit seemed more like an attempt to address more DRAM or gain some desktop-level performance. But it's all for the sake of executing quickly and dropping into sleep mode to preserve battery capacity.
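To make that concrete, here's a toy model with made-up power numbers (not measurements of any Apple SoC). Race to sleep wins whenever running faster costs less than proportionally more power, because the idle state draws almost nothing for the rest of the window:

```python
# Toy "race to sleep" energy model; all power numbers are hypothetical.
# energy (joules) = power (watts) * time (seconds)
P_SLOW, P_FAST, P_IDLE = 1.0, 1.8, 0.02  # watts for each hypothetical state

def compare(window: float) -> tuple[float, float]:
    """Energy for one fixed chunk of work: slow-and-steady vs. race-to-sleep."""
    e_slow = P_SLOW * window                  # busy at low clock the whole window
    busy = window / 2                         # assume 2x speed when racing
    e_fast = P_FAST * busy + P_IDLE * (window - busy)
    return e_slow, e_fast

e_slow, e_fast = compare(1.0)
print(f"slow-and-steady: {e_slow:.2f} J, race-to-sleep: {e_fast:.2f} J")
# Prints 1.00 J vs. 0.91 J: racing wins because 2x speed here costs only
# 1.8x the power, and idle draws almost nothing afterward.
```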
I'm thinking now of some past articles covering the nascent market for low-power, massively parallel data center servers. 64 bits was an absolutely necessary first step to get ARM CPUs into blades and rack servers destined for low-power data centers. Memory addressing is considered a non-negotiable feature that even the most power-efficient server must have: no matter what CPU a server is designed around, memory addressing HAS to be 64 bits or it cannot be considered. That rule still applies today and will remain the sticking point for folks sitting back and ignoring the Tilera architecture or SeaMicro's interesting cloud-in-a-box designs. To date, it seems Apple was first to market with a 64-bit ARM design, without ARM actually supplying the base circuit design and layouts for the new generation of 64-bit ARM cores. Apple instead did the heavy lifting and engineering itself to get the 64-bit memory addressing it needed to continue its drive to better battery life. Time will tell if this will herald other efficiency or performance improvements in raw compute power.
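The address-space arithmetic behind that non-negotiable rule fits in three lines (the 48-bit row is the virtual-address width typical ARMv8 implementations expose, not the spec maximum):

```python
# The address-space ceiling at each width, in round numbers.
GiB, TiB, EiB = 2**30, 2**40, 2**60
print(f"32-bit: {2**32 / GiB:.0f} GiB addressable")   # 4 GiB: a non-starter
print(f"48-bit: {2**48 / TiB:.0f} TiB addressable")   # typical ARMv8 virtual space
print(f"64-bit: {2**64 / EiB:.0f} EiB addressable")   # the architectural headroom
```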
If Facebook buying Oculus for a cool $2 billion is a step towards democratizing the currently-niche platform, Jaunt seems like an equally monumental step towards making awesome virtual reality content that appeals to folks beyond the gaming community: VR movies in addition to VR games.
Amazing story about a stealthy little company with a 3D video recording rig. This isn't James Cameron-style motion capture for 3D rendering; it's 2D video stitched together in real time. No modeling, texture-mapping, or animating required. Just run the video cameras, capture the footage, bring it back to the studio and stitch it all together, then watch the production on your Oculus Rift headset. If you can produce 3D movies this way, without having to invest in James Cameron's high-end, ultra-expensive virtual sets, you have just lowered the barriers to entry.
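For a feel of the underlying technique, here's a minimal sketch using OpenCV's off-the-shelf stitcher. This is emphatically not Jaunt's pipeline, and the filenames are placeholders; it just shows the same basic move of merging overlapping camera views into one seamless image:

```python
# Merge overlapping views into one panorama with OpenCV's built-in stitcher.
import cv2

# Placeholder filenames: overlapping frames grabbed from adjacent cameras.
frames = [cv2.imread(name) for name in ("cam0.jpg", "cam1.jpg", "cam2.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)  # OpenCV 4.x API
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"stitching failed with status {status}")
```

Doing that for every frame, across a ring of cameras, fast enough for playback is the hard part, but the conceptual core really is this small.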
I'm also kind of disappointed that the author keeps insisting you "had to be there". Telling us words cannot express the experience is like telling me, in writing, that the "dog ate my homework"; I guess I "had to be there" for that too. Any way you put it, telling me more about the company, the premises and the prototypes means you're writing for a venture capital audience, not for someone who might make work using the camera, or consume the work made by artists working with it. I say cave in to the temptation and TRY expressing the experience in words. Don't worry if you fail; you've just increased the comment rate on your story, engaging people long after it was published. In spite of the lack of daring to describe the experience, I picked up enough detail, extrapolated, and read between the lines enough to conclude that this camera rig might well be the killer app, or the authoring app, for the Oculus Rift platform. Let's hope it sees the light of day and makes it to market quicker than the Google Glass prototypes floating around these days.
The president of VMware said after seeing it (and not knowing what he was seeing), “Wow, what movie is that?” And that’s what it’s all about — dispersion of disbelief. You’ve heard me talk about this before, and we’re almost there. I famously predicted at a prestigious event three years ago that by 2015 there would be no more human actors, it would be all CG. Well I may end up being 52% or better right (phew). – Jon Peddie
via Nvidia Pulls off ‘Industrial Light and Magic’-Like Tools | EE Times. Jon Peddie has covered the 3D animation, modeling and simulation market for YEARS. When you can get a rise out of him like the quote above from EE Times, you have accomplished something. Between Nvidia's hardware and now its GameWorks suite of software modeling tools, you have, in a word, created Digital Cinema. Jon goes on to talk about how the digital simulation demo convinced a VMware exec he was watching real live actors on a set. That's how good things are getting.
And the comparison of Nvidia's off-the-shelf toolkits to ILM is also telling. No longer does one need computer scientists, physicists and mathematicians on staff to help model and simulate things like particle systems and hair. It's all there in the toolkit, along with ocean waves and smoke, ready to use. Putting these tools into the hands of users will herald a new era of less esoteric, less exclusive, less high-end-only access to the best algorithms and tools.
Nvidia GameWorks by itself will be useful to some people, but re-packaging it in a way that embeds it in an existing workflow will widen adoption, whether that's for a casual user or a student in a 3D modeling and animation course at a university. The follow-on to this is getting APIs published so current off-the-shelf tools like AutoCAD, 3D Studio Max, Blender and Maya can tap into it. Once the favorite tools can bring up a dialog box and start adding a particle system or full ray tracing to a scene at this level of quality, things will really take off. The other possibility is to flesh out GameWorks into a standalone, easily adopted package that creatives could pick up and migrate to over time. That would be another path to using GameWorks as an end-to-end digital cinema creation package.
Lithium ion battery by Varta (Museum Autovision Altlußheim, Germany) (Photo credit: Wikipedia)
A pair of battery vendors are hoping that a new design incorporating an ultracapacitor material will help improve and extend the life of lithium-ion battery packs.
First, a little background on what a capacitor is: https://en.wikipedia.org/wiki/Ultracapacitor#History
In short, it acts like a very powerful, high-density reservoir for smoothing out the load on an electrical circuit; it helps prevent spikes and dips in the electricity as it flows through a device. But with recent work on ultracapacitors, they can behave more like a full-fledged battery, one that doesn't wear out over charge and discharge cycles the way battery chemistry does. When combined with a real battery, you can do some pretty interesting things to help the two work together: longer battery life, higher total charge capacity. Many things can flow from combining ultracapacitors with a really high-end lithium-ion battery.
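Some rough numbers show the division of labor. Using E = ½CV² for a hypothetical 100 F, 2.7 V ultracapacitor against a small phone-class lithium-ion cell (both ratings picked purely for illustration):

```python
# E = 1/2 * C * V^2 for the capacitor; Ah * V * 3600 for the battery.
cap_farads, cap_volts = 100.0, 2.7          # hypothetical ultracap rating
e_cap = 0.5 * cap_farads * cap_volts**2     # ~365 joules

batt_mah, batt_volts = 1500, 3.7            # hypothetical small Li-ion cell
e_batt = (batt_mah / 1000) * batt_volts * 3600  # ~19,980 joules

print(f"ultracap: {e_cap:.0f} J, li-ion: {e_batt:.0f} J")
# The cell stores ~50x the energy; the cap's role is absorbing charge and
# discharge spikes, which is exactly the wear the hybrid design offloads.
```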
Any technology, tweak or improvement that promises at minimum a 10% improvement over current lithium-ion battery designs is worth a look, and they're claiming a full 15% in this story from The Reg. Given the re-design, it would seem it needs to meet regulatory/safety approval as well. Having seen JAL suffer battery issues on the Boeing 787, I couldn't agree more.
There will be some heavy lifting between now and when a product like this hits the market. Testing and failure analysis will ultimately decide whether this ultracapacitor/lithium-ion hybrid is safe enough for consumer electronics. I'm also hoping Apple and other design-and-manufacturing outfits like it are putting eyes, ears and phone calls on this to learn more. Samsung too might be interested, though it seemingly relies more on battery designs from outside the company. That's where Apple has the upper hand long term: it will design every part if needed in order to keep ahead of the competition.