That’s right, the fault, dear reader, is not in our stars but in ourselves. We have slow internet speeds, ‘Cuz Business. That’s the briefest synopsis yet that I’ve written. Whether it’s carriers allowing each other’s traffic to run across their networks, or peering arrangements, or whatever, each business is trying to mess with the other guy’s traffic. And the consumers, the customers, all lose as a result.
Originally posted on Consumerist:
Various enormous corporations have this year been at each other’s throats over how well or how poorly internet traffic travels through their systems. A new report indicates that some of the mud-slinging this year is true: interconnection, or peering, between ISPs is why end-users are getting terrible internet traffic. But, they say, it’s business, and not technology, that’s making your Netflix buffer.
DSL Reports points the way to the study, from an internet research organization called M-Lab. M-Lab studied how traffic does (or doesn’t) make it to you through the peering connections it travels through.
Peering has come up a lot this year, most notably around Netflix. The streaming-video behemoth contended that major ISPs — particularly but not solely Comcast and Verizon — were deliberately letting Netflix traffic clog up.
The congestion was happening at interconnection points, the places where the transit ISPs Netflix partnered with…
I’m always fascinated by one-of-a-kind clustered systems like this Raspberry Pi rig. Kudos for doing the assembly and getting it all running. As the comments mention, it may not be practical in terms of price. But it’s still pretty cool for what it is.
Originally posted on Hackaday:
[alexandros] works for resin.io, a website which plans to allow users to update firmware on embedded devices with a simple git push command. The first target devices will be Raspberry Pis running node.js applications. How does one perform alpha testing while standing up such a service? Apparently by building a monster tower of 120 Raspberry Pi computers with Adafruit 2.8″ PiTFT displays. We’ve seen some big Raspberry Pi clusters before, but this one may take the cake.
The tower is made up of 5 hinged sections of plywood. Each section contains 24 Pis, two Ethernet switches and two USB hubs. The 5 sections can be run on separate networks, or as a single 120 node monster cluster. When the sections are closed in, they form a pentagon-shaped tower that reminds us of the classic Cray-1 supercomputer.
Raspberry Pi machines are low power, at least when compared to a desktop PC. A standard Raspi consumes less…
Photoshop is the only application from Adobe’s suite that’s getting the streaming treatment so far, but the company says it plans to offer other applications via the same tech soon. That doesn’t mean it’s planning to phase out its on-premise applications, though.
Turn now to this announcement by Adobe and Google in a joint effort to “stream” Photoshop through a web browser. A long-time stalwart of desktop computing, Adobe Photoshop (prior to being bundled with EVERYTHING else) required a real computer in the early days (ahem, meaning a Macintosh) and has demanded even more from the hardware since then, as the article points out, when CS4 began using the GPU as an accelerator for the application. For years I kept up with each new release of the software. But around 1998 I feel like I stopped learning new features, and my “experience” more or less cemented itself in the pre-CS era (let’s call that Photoshop 7.0). Since then I do 3-5 things at most in Photoshop, ever: I scan. I layer things with text. I color balance things or adjust exposures. I apply a filter (usually Unsharp Mask). I save to a multitude of file formats. That’s it!
The software ecosystem for ARM servers “is still shaky, there needs to be a lot more software development going on and it will take time,” says Gwennap.
Previous generations of multi-core, massively parallel, ARM-based servers came from one-off manufacturers with their own toolsets and Linux distros. HP’s attempt to seriously market to this segment will hopefully be substantial enough to get an Ubuntu distro with enough libraries and packages to make it function right out of the box. The article says companies are using the ProLiant ARM-based system as a memcached server. I would speculate that if that’s what people want, the easier you can make that happen from an OS and app-server standpoint, the better. There’s a reason folks like to buy Synology and BuffaloTech NAS products, and that’s the ease with which you spin them up and get a lot of storage attached in a short amount of time. If the ProLiant can do that for people needing quicker and more predictable page loads on their web apps, then HP should optimize for memcached performance and make it easy to configure and put into production.
Now what, you may ask, is memcached? If you’re running a web server or a web application that needs a lot of speed, so that purchases or other transactions complete and show some visual cue of success, the easiest way to get that speed is through caching. Frequently requested data, like rendered page fragments or database query results, is kept in a high-speed storage location separate from the slower backing store, and when a request comes in, the application checks that high-speed location first before doing the slow work. By serving from the fast store instead of the slow one, you get a really good experience, with the web page refreshing automagically to show your purchases in a shopping cart, or that your tax refund is on its way. The web-site world is built on caching so we don’t see spinning watches or other indications that processing is going on in the background.
To date, different software packages have provided this type of caching, first for Apache web servers, but now, in the world of social media, for any type of web server. Whether it’s Amazon, Google or Facebook, memcached or a similar caching server is behind the actual webpage as you click, submit and wait for the page to refresh. And if a data-center owner like Amazon, Google or Facebook can lower the cost of each of its memcached servers, it can lower the operating cost of each of those cached web pages and keep everyone happy with the speed of their respective websites. Whether or not ARM-based servers see wider application depends on apps being written specifically for that chip architecture. But at least now people can point to memcached and web-page acceleration as a big first win that might see wider adoption longer term.
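The check-the-cache-first pattern described above (often called cache-aside) can be sketched in a few lines of Python. This is a minimal illustration, not real memcached usage: a plain in-process dict stands in for the memcached server, and the function and key names (`render_cart_page`, `cart:<id>`) are hypothetical placeholders for whatever slow work your app actually does.

```python
import time

# Stand-in for a memcached server: an in-process dict mapping
# keys to (value, expiry_timestamp) pairs.
cache = {}

def cache_get(key):
    """Return the cached value, or None if missing or expired."""
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.time() >= expires_at:
        del cache[key]  # expired: evict and report a miss
        return None
    return value

def cache_set(key, value, ttl=60):
    """Store a value with a time-to-live in seconds."""
    cache[key] = (value, time.time() + ttl)

def render_cart_page(user_id):
    """Pretend this is the slow path: templating plus database queries."""
    return f"<html>cart for user {user_id}</html>"

def get_cart_page(user_id):
    """Cache-aside: check the fast store first, fall back to rendering."""
    key = f"cart:{user_id}"
    page = cache_get(key)
    if page is None:                       # cache miss
        page = render_cart_page(user_id)   # slow: hit the database
        cache_set(key, page, ttl=30)       # cache it for next time
    return page
```

The first call for a given user does the slow render; every call within the next 30 seconds is served straight from memory, which is exactly the effect memcached gives a busy web site, just spread across dedicated servers instead of one process.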
Hats off to Consumer Reports for getting on this story as soon as they could. Measurement trumps anecdotes any day of the week. Here, now, is some data and measurement regarding the bendy iPhone 6 and 6 Plus.
Originally posted on 9to5Mac:
Consumer Reports released a new video today taking on claims of overly-flexible iPhones that have appeared online recently. Apple noted that only a handful of complaints have come in and gave journalists a look at its testing procedures. Regardless of Cupertino’s claims, Consumer Reports kept its promise to conduct testing that was a bit more scientific in nature than previous YouTube videos.
To address these claims, several different phones were tested under up to 150 pounds of pressure to see when each model would stop “snapping back” to its original shape. The devices tested were the iPhone 6 Plus, iPhone 6, iPhone 5, HTC One M8, Samsung Galaxy Note 3, and LG G3.
The SanDisk Extreme PRO UHS-I SDHC/SDXC family includes 128 GB, 256 GB and 512 GB capacities. The new 512 GB card costs $799.99 and is available now.
Interesting to finally see this form factor hit the market. These cards are now as big as or bigger than the typical laptop hard drive. That’s a big deal, in that any computer fortified with an SDXC card slot can have a flash-based backup store. I keep my Outlook mail archives on a card like this. And occasionally I use it to transfer files the way I would with a reliable USB flash drive. And this is on a laptop that already has an SSD, so I’ve got two tiers of this kind of storage. We’re reaching a kind of singularity in flash-based storage where the chips and packaging allow for such small form factors that hard drives become moot. If I can stuff something this small into a slot roughly the size of a U.S. postage stamp, then why do I need a SATA or even an M.2-sized interface? Is it just for the sake of throughput and performance? That may be the only real argument.
I hope Oculus can get a shipping product out on the market soon. Perfectionism is not helping launch this market. The longer they wait, the greater the chance there’ll be a cheaper, equally capable competitor. Pleez Oculus, release the Rift.
Originally posted on TechCrunch:
Oculus gave the world the first look at its new prototype Crescent Bay today at the Oculus’ Connect conference (livestream), and I got the very first hands-on demo. Crescent Bay has a faster frame rate, 360-degree head tracking, and integrated headphones, plus it’s lighter.
Oculus also announced the new Oculus Platform coming to the Samsung VR, which brings VR to a large audience through mobile apps, web browsers, and a VR content discovery channel. You can read our full story on Oculus Platform here.
CEO Brendan Iribe called Crescent Bay as big of a step up from the DK2 as the DK2 was from the DK1. This still isn’t a consumer version, but it’s getting closer.
The Crescent Bay is not an official developer kit, but instead a “feature prototype” designed to show off the future of what Oculus is doing, similar to the pre-DK2 “Crystal Cove”