google mobile support web standards wired culture

What’s a Chromebook good for? How about running PHOTOSHOP? • The Register

Netscape Communicator (Photo credit: Wikipedia)

Photoshop is the only application from Adobe’s suite that’s getting the streaming treatment so far, but the company says it plans to offer other applications via the same tech soon. That doesn’t mean it’s planning to phase out its on-premise applications, though.

via What’s a Chromebook good for? How about running PHOTOSHOP? • The Register.

Back in 1997 and 1998 I spent a lot of time experimenting and playing with Netscape Communicator “Gold”. It had a built-in web page editor that more or less gave you WYSIWYG rendering of the HTML elements live as you edited. It also had an email client and news reader built into it. I also spent a lot of time reading Netscape white papers on their Netscape Communications server and LDAP server, and this whole universe of Netscape trying to re-engineer desktop computing in such a way that the web browser was the THING. Instead of a desktop with apps, you had some app-like behavior resident in the web browser. And from there you would develop your JavaScript/ECMAScript web applications that did other useful things. Web pages with links in them could take the place of PowerPoint. Netscape Communicator Gold would take the place of Word and Outlook. This is the triumvirate that Google would assail some 10 years later with its own Google Apps and the benefit of AJAX-based web app interfaces and programming.

Turn now to this announcement by Adobe and Google of a joint effort to “stream” Photoshop through a web browser. A long-time stalwart of desktop computing, Adobe Photoshop (prior to being bundled with EVERYTHING else) required a real computer in the early days (ahem, meaning a Macintosh) and has demanded even more ever since, as the article points out, with CS4 attempting to use the GPU as an accelerator for the application. I used to keep up with new releases of the software each passing year. But around 1998 I feel like I stopped learning new features, and my “experience” more or less cemented itself in the pre-CS era (let’s call that Photoshop 7.0). Since then I do 3-5 things at most in Photoshop, ever. I scan. I layer things with text. I color balance things or adjust exposures. I apply a filter (usually unsharp mask). I save to a multitude of file formats. That’s it!

Given that there’s even a possibility of streaming Photoshop on a Google Chromebook, I think we’ve now hit that which Netscape discovered long ago. The web browser is the desktop, pure and simple. It was bound to happen, especially now with the erosion into different form factors and mobile OSes. iOS and Android have shown that what we are willing to call an “app” is often nothing more than a glorified link to a web page, really. So if they can manage to wire up enough of the codebase of Photoshop to make it work in realtime through a web browser without tons and tons of plug-ins and client-side JavaScript, I say all the better. Because this means, architecturally speaking, good old Outlook Web Access (OWA) can only get better and become more like its desktop cousin Outlook 2013. Microsoft too is eroding the distinction between Desktop and Mobile. It’s all just a matter of more time passing.

cloud google support

Testing, Testing: How Google And Amazon Can Help Make Websites Rock Solid – ReadWrite


It’s not unprecedented: Google already offers a testing suite for Android apps, though that’s focused on making sure they run well on smartphones and tablets, not testing the cloud-based services they connect to. If Google added testing services for the websites and services those apps connect to, it would have an end-to-end lock on developing for both the Web and mobile.

via Testing, Testing: How Google And Amazon Can Help Make Websites Rock Solid – ReadWrite.

Load testing websites and web apps is a market whose time has come. Where I work, we have a Projects group with a guy who manages an installation of Silk as a load tester. Behind that is a little farm of old Latitude E6400s that he manages from the Silk console, pointing them at whichever app is in development/QA/testing before it goes into production. Knowing there’s potential for a cloud-based tool for this makes me very, very interested.

As outsourcing goes, the Software as a Service (SaaS), Platform as a Service (PaaS), or even Infrastructure as a Service (IaaS) categories are great as raw materials. But if there were just an app that I could log in to, spin up some VMs, install my load-test tool of choice and then manage them from my desktop, I would feel like I had accomplished something. Or failing that, even just a toolkit for load testing with whatever tool du jour is already available (nothing is perfect that way) would be cool too. And better yet, if I could do that with an updated tool whenever I needed to conduct a round of testing, the tool would take into account things like the Heartbleed bug in a timely fashion. That’s the kind of benefit a cloud-based, centrally managed, centrally updated load-test service could provide.
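To make that concrete, here’s a minimal sketch of the kind of load-test loop I have in mind, in plain Python with only the standard library. The URL, request counts and stat names are all placeholders of my own, not any vendor’s actual API:

```python
# Minimal load-test sketch: hammer a URL with N concurrent workers and
# summarize the latency distribution. Purely illustrative; a real tool
# adds ramp-up schedules, distributed agents and reporting.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def hit(url: str) -> float:
    """Fetch the URL once and return the elapsed time in seconds."""
    start = time.perf_counter()
    with urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def load_test(url: str, requests: int = 100, concurrency: int = 10) -> dict:
    """Run `requests` fetches across `concurrency` workers; summarize latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(hit, [url] * requests))
    return {
        "requests": requests,
        "min": latencies[0],
        "median": latencies[len(latencies) // 2],
        "p95": latencies[int(len(latencies) * 0.95) - 1],
    }
```

Pointed at a staging URL from a fleet of cloud VMs, that core loop is all the Latitude farm is really doing.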

And now that Microsoft has just announced a partnership with Salesforce on its Azure cloud platform, things get even more interesting. Not only could you develop using an existing toolkit, but you could host it on more than one cloud platform (AWS or Azure) as your needs change. And I would hope this would include unit testing, load testing and the whole sweet suite of security auditing one would expect for a webapp (thereby helping prevent vulnerabilities like the Heartbleed OpenSSL bug).

support technology

Attempting to create an autounattend.xml file for work



Starting with this website tutorial, I’m attempting to create a working config file that will let me perform new Windows 7 Professional installs without having to interact or click any buttons.


It seems pretty useful so far, as Sergey provides an example autounattend file that I’m using as a template for my own. I particularly like his RunOnce registry additions. They make the file so much more useful than simply being an answer file for the base OS install. True, it is annoying that questions come up through successive reboots during the specialize pass on a fresh Windows 7 install. But this autounattend file does a whole lot of default presetting behind the scenes, and that’s what I want when I’m trying to create a brand new WIM image for work. I’m going to borrow those most definitely.
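For reference, the RunOnce trick looks roughly like this in unattend syntax. This is a hypothetical fragment of my own, not Sergey’s actual file; the script path and value name are made up:

```xml
<!-- Hypothetical autounattend.xml fragment: a FirstLogonCommands entry in the
     oobeSystem pass seeds a RunOnce value so a (made-up) cleanup.cmd fires
     once at first logon. Paths and names are placeholders. -->
<settings pass="oobeSystem">
  <component name="Microsoft-Windows-Shell-Setup"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <FirstLogonCommands>
      <SynchronousCommand wcm:action="add">
        <Order>1</Order>
        <CommandLine>reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce" /v Cleanup /t REG_SZ /d "C:\Setup\cleanup.cmd" /f</CommandLine>
        <Description>Seed a RunOnce cleanup step</Description>
      </SynchronousCommand>
    </FirstLogonCommands>
  </component>
</settings>
```

Anything queued through RunOnce this way survives into the first real logon, which is exactly the presetting-behind-the-scenes behavior I want.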


I also discovered an interesting sub-section devoted to joining a new computer to a Domain. Ever heard of djoin.exe?


Very interesting stuff: you can join the computer without first having to log in to the domain controller and create a new account in the correct OU (which is what I do currently), saving a little time putting the computer on the Domain. Sweet. I’ma hafta check this out further and get the syntax down just so… Looks like there’s also a switch to ‘reuse’ an existing account, which would be really handy for computers that I rebuild and add back using the same machine name. That would save time too. Looks like it might be Win7/Server 2008 specific and may not be available widely where I work. We have not moved our Domains to Server 2008 as far as I know.


djoin /provision /domain &lt;domain to be joined&gt; /machine &lt;machine name&gt; /savefile blob.txt (What’s new in Active Directory Domain Services in Windows Server 2008 R2: Offline Domain provisioning)


Also, you want to be able to specify the path in AD where the computer account is going to be created. That requires knowing the full syntax of the LDAP:// path in AD.
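The distinguished-name half of that syntax is mechanical enough to script. Here’s an illustrative helper of my own (the function and the example OU/domain names are invented) that turns a friendly OU path and a DNS domain into the DN form AD tools expect:

```python
# Illustrative helper (my own invention): build the LDAP distinguished name
# for a computer OU from a friendly slash path plus the AD DNS domain name.
def ou_dn(ou_path: str, domain: str) -> str:
    """'Labs/Room101' + 'campus.example.edu'
    -> 'OU=Room101,OU=Labs,DC=campus,DC=example,DC=edu'"""
    # OUs nest most-specific-first in a DN, so reverse the path order.
    ous = ",".join(f"OU={part}" for part in reversed(ou_path.split("/")))
    dcs = ",".join(f"DC={part}" for part in domain.split("."))
    return f"{ous},{dcs}"

print(ou_dn("Labs/Room101", "campus.example.edu"))
# OU=Room101,OU=Labs,DC=campus,DC=example,DC=edu
```

Once you have the DN, prefixing LDAP:// gives the full path form.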


There’s also a script you can download and run to get similar info that is Windows 2000-era AD compliant.


Random thoughts just now: I could create a generic WIM and append one image per ‘make/model’ to the original WIM, each with a single folder added that includes the Windows driver CAB file for that model from Dell. Each folder could then have DPInst copied into it and run as a synchronous command during the OOBE pass each time the WIM is applied with ImageX. I’d just need to remember which image number to use for each model’s set of drivers, though the description field for each of those appended driver setups could be descriptive enough to make it user friendly. Or we could opt just to include the Optiplex 960 drivers as a base set covering most bases and then provide links to the CAB files over \\fileshare\j\deviceDrivers\ and let DPInst recurse its way down the central store of drivers to do the cleanup phase.
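To keep myself honest about which appended image holds which model’s drivers, even a dumb lookup table would do. A sketch, with made-up indexes and an arbitrary model list:

```python
# Made-up bookkeeping for the appended-WIM idea: map each Dell model to the
# index of its appended driver image so "apply image N" stops being a memory
# exercise. Indexes and model names here are placeholders, not our real WIM.
DRIVER_IMAGES = {
    "Optiplex 960": 2,    # base OS image is index 1; drivers appended after
    "Optiplex 980": 3,
    "Latitude E6400": 4,
}

def image_index_for(model: str) -> int:
    """Look up the appended-image index for a model; fall back to the base set."""
    return DRIVER_IMAGES.get(model, 1)
```

The same mapping could just be read back out of each appended image’s description field with ImageX, which is the user-friendly version of the idea.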


OK, got a good autounattend.xml formulated. It should auto-activate and register the license key no problem-o. Can’t wait to try it out tomorrow when I get home on the test computer I’ve got set up. It’s an Optiplex 960, and I’m going to persist all the device drivers after I run sysprep /generalize /shutdown /oobe and capture the WIM file. I’ve got a ton of customizing yet to do on the Admin profile before it gets copied to the Default Profile on the sysprep step. So maybe this time round I’ll get it just right.


One big thing I have to remember is to set IE 8 to pass all logon information for the Trusted Sites zone within the security settings. If I get that embedded into the thing once and for all, I’ll have a halfway decent image that mirrors what we’re using now in Ghost. The next step, once this initial setup from a Win7 setup disk is perfected, is to tweak the Administrator’s profile and then set CopyProfile=true when I run sysprep /generalize /oobe /unattend:unattend.xml (that unattend file is another attempt to filter the settings of what gets kept and what is auto-run before the final OOBE phase of Windows Setup). That will be the last step in the process.
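As I understand the unattend schema, the CopyProfile piece is a one-liner in the specialize pass. This fragment is illustrative, not my finished file:

```xml
<!-- Illustrative unattend.xml fragment: CopyProfile in the specialize pass
     copies the customized Administrator profile over the Default profile
     during sysprep's generalize/OOBE run. -->
<settings pass="specialize">
  <component name="Microsoft-Windows-Shell-Setup"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <CopyProfile>true</CopyProfile>
  </component>
</settings>
```

That single element is what makes all the Admin-profile tweaking stick for every new user the image creates.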



computers support technology

End of the hiatus

I am now at a point in my daily work where I can begin posting to my blog once again. It’s not so much that I’m catching up, but more like I don’t care as much about falling behind. Look forward to more Desktop-related posts, as that is now my fulltime responsibility where I work.

Posted from WordPress for Windows Phone

blogroll macintosh support technology

Daring Fireball: Mountain Lion

Wrestling with Mountain Lion

And then the reveal: Mac OS X — sorry, OS X — is going on an iOS-esque one-major-update-per-year development schedule. This year’s update is scheduled for release in the summer, and is ready now for a developer preview release. Its name is Mountain Lion.

via Daring Fireball: Mountain Lion.

Mountain Lion is the next iteration of Mac OS X. And while there are some changes since the original Lion was released just this past summer, they are more like further refinements than real changes. I say this in part due to the concentration on aligning the OS X apps with iOS apps, down to small things like using the same names:

iCal versus Calendar

iChat versus Messages

Address book versus Contacts

Reminders and Notes (new standalone apps matching their iOS counterparts)


Beneath that superficial level, more of the Carbonized libraries and apps are being factored out and given full Cocoa libraries and app equivalents where possible. But one of the bigger changes, one whose deadline has been slipping since the release of Mac OS X 10.7, is the use of ‘sandboxing’ as a security measure for apps. The sandbox is implemented by developers to adhere to strict rules set forth by Apple. Apps won’t be allowed to do certain things anymore, like writing to an external filesystem (meaning saving or writing out to a USB drive) without special privileges being asked for. It seems trivial at first, but for a day-to-day user of a given app it might break things altogether. I’m thinking of iMovie as an example, where you can specify that you want new video clips saved into an Event folder kept on an external hard drive. Will iMovie need to be rewritten in order to work on Mountain Lion? Will sandboxing hurt other Apple iApps as well?
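Those “special privileges” are declared up front as entitlements baked into the app’s code signature. The keys below are real published entitlement keys, but the combination is just my illustration of an app opting into the sandbox while asking for user-selected file access:

```xml
<!-- Illustrative sandbox entitlements: the app opts into the sandbox and may
     read/write only files the user explicitly picks in an open/save panel. -->
<plist version="1.0">
<dict>
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <key>com.apple.security.files.user-selected.read-write</key>
    <true/>
</dict>
</plist>
```

An app like iMovie that wants standing access to an external drive needs something beyond that second key, which is exactly why the rules might break it.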

Then there is the matter of ‘Gatekeeper,’ which is another OS mechanism to limit trust based on who the developer is. Apple will issue security certificates to registered developers who post their software through the App Store, but independents who sell direct can also register for these certs, thus establishing a chain of trust from the developer to Apple to the OS X user. From that point you can choose to trust just App Store certified apps; those plus independent developers who are Apple certified; or any app, including unknown, uncertified ones. Depending on your needs, the security level can be chosen according to which type of software you use. Some people are big on free software, which is the least likely to have a certification but still may be more trustworthy than even the most ‘certified’ of App Store software (I’m thinking emacs as an example). So sandboxes and gatekeepers all conspire to funnel developers into Apple’s chain of trust, and thus make it much harder for developers of malware to infect Apple OS X computers.

These changes should be fully ready for consumption upon release of the OS in July. But as I mentioned, sandboxing has been rolled back no fewer than two times so far. The first roll-back occurred in November; the most recent was here in February. The next target date for sandboxing is in June, which should get all the Apple developers on board prior to the release of Mountain Lion the following month, in July. This reminds me a bit of the flexibility Apple had to show in the face of widespread criticism and active resistance to the Final Cut Pro X release last June. Apple had to scramble for a time to address concerns about bugs and stability under Mac OS X 10.7 (the earlier Snow Leopard release seemed to work better for some who wrote on Apple support discussion forums). Apple quickly came up with an alternate route for dissatisfied customers who demanded satisfaction, giving copies of Final Cut Studio (with just the Final Cut Pro 7 app included) to people who called up its support lines asking to substitute the older version of the software for a recent purchase of FCP X. Flexibility like this seems to be more frequent going forward, and it is great to see Apple’s willingness to adapt to an adverse situation of its own creation. We’ll see how this migration goes come July.

computers science & technology support vague interests

History of Sage

A screenshot of Sagemath working.
Image via Wikipedia

The Sage Project Webpage

Sage is mathematical software, very much in the same vein as MATLAB, MAGMA, Maple, and Mathematica. Unlike these systems, every component of Sage is GPL-compatible. The interpretative language of Sage is Python, a mainstream programming language. Use Sage for studying a huge range of mathematics, including algebra, calculus, elementary to very advanced number theory, cryptography, numerical computation, commutative algebra, group theory, combinatorics, graph theory, and exact linear algebra.

Explanation of what Sage does by the original author William Stein 

(Long – roughly 50 minutes)
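You don’t need Sage installed to get a taste of the exact arithmetic it is built for. In Sage the one-liner would be factor(2^64 + 1); here is plain Python (my own throwaway trial-division sketch, nothing from Sage itself) doing the same small job:

```python
# A plain-Python taste of the exact integer work Sage wraps in one-liners
# like factor(2^64 + 1). Trial division is naive but fine at this size.
def smallest_factor(n: int) -> int:
    """Return the smallest prime factor of n (or n itself if n is prime)."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i
        i += 1
    return n

n = 2**64 + 1              # the Fermat number F6
p = smallest_factor(n)     # 274177
q = n // p                 # 67280421310721
assert p * q == n          # exact arithmetic, no floating point anywhere
print(p, q)
```

Sage layers symbolic algebra, number theory and much more on top of exactly this kind of arbitrary-precision Python integer arithmetic.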

The original developer and his history of Sage mathematical software development. Wiki listing with a list of participating committers. Discussion lists for developers: mostly done through Google Groups with associated RSS feeds. Mercurial repository (start date Sat Feb 11 01:13:08 2006); Gonzalo Tornaria seems to have loaded the project in at that point. Current list of source code in Trac, with a listing of committers for the most recent release of Sage (4.7).

  • William Stein (wstein) Still very involved based on frequency of commits
  • Michael Abshoff (mabs) Ohloh has him ranked second only to William Stein in commits and time on the project. He’s now left the project according to the Trac log.
  • Jeroen Demeyer (jdemeyer) Commits a lot
  • J. H. Palmieri (palmieri) Has done a number of tutorials and documentation; he’s on the IRC channel
  • Minh Van Nguyen (nguyenminh2) Has done some tutorials and documentation and work on the Categories module. He also appears to be the sysadmin on the Wiki
  • Mike Hansen (mhansen) Is on the IRC channel and is a big contributor
  • Robert Bradshaw (robertwb) Has done some very recent commits

Changelog for the most recent release (4.7) of Sage. Keshav Kini moderates and maintains the Ohloh info. The big milestone release of version 4.7 has its tickets listed here by module: Click Here. And the Ohloh listing of top contributors to the project. There’s an active developer and end-user community. Workshops are tracked here; Sage Days workshops tend to be hackfests for interested parties. But more importantly, developers can read up on this page about how to get started and what the process is as a Sage developer.

Further questions that need to be considered. Look at the git repository and the developer blogs ask the following questions:

  1. Who approves patches? How many people? (There’s a large number of people responsible for reviewing patches; if I had to guess, it could be 12 in total based on the most recent changelog)
  2. Who has commit access, and how many?
  3. Who is involved in the history of the project? (That’s pretty easy to figure out from the Ohloh and Trac websites for Sage)
  4. Who are the principal contributors, and have they changed over time?
  5. Who are the maintainers?
  6. Who is on the front end (user interface) and back end (processing or server side)?
  7. What have been some of the major bugs/problems/issues that have arisen during development? Who is responsible for quality control and bug repair?
  8. How is the project’s participation trending and why? (Seems to have stabilized after a big peak of 41 contributors about 2 years ago; the Ohloh graph of commits shows peak activity in 2009 and 2010.)

Note that the period the Gource visualization covers is since 2009; the earliest entry in the Mercurial repository I could find was 2005. Sage was already a going concern before the Mercurial repository was put on the web, so the simulation doesn’t show the full history of development.

blackboard data center support technology

First Sungard goes private and now Blackboard

The buyers include Bain Capital, the Blackstone Group, Goldman Sachs Capital Partners, Kohlberg Kravis Roberts, Providence Equity Partners and Texas Pacific Group. The group is led by Silver Lake Partners. The deal is a leveraged buyout – Sungard will be taken private and its shares removed from Wall Street.

via Sungard goes private • The Register. Posted in CIO, 29th March 2005 10:37 GMT

RTTNews – Private equity firm Providence Equity Partners, Inc. agreed Friday to take educational software and systems provider Blackboard, Inc. (BBBB: News ) private for $45 per share in an all-cash deal of $1.64 billion. 7/1/2011 8:53 AM ET

It would appear that Providence Equity Partners now owns two giants in the Higher Ed outsourcing industry: Sungard and Blackboard. What does this mean? Will there be consolidation where there is overlap between the two companies? Will there be attempts to steal customers or to upsell each other’s products?

cloud data center flash memory SSD support technology

Artur Bergman of Wikia on SSDs @ OReilly Media Conferences / Don Basile, CEO of Violin Memory


Artur Bergman of Wikia explains why you should buy and use Solid State Disks (strong language)

via Artur Bergman Wikia on SSDs on OReilly Media Conferences – live streaming video powered by Livestream.

This is the shortest and most pragmatic presentation I’ve seen about what SSDs can do for you. He recommends buying Intel 320s and getting your feet wet: it’s like moving from a bicycle to a Ferrari. Later on, if you need to go with a PCIe SSD, do it, but that’s like the difference between a Ferrari and a Formula 1 race car. Personally, in spite of the lack of major difference Artur is trying to illustrate, I still like the idea of buying once and getting more than you need. And if this doesn’t start you down the road of seriously buying SSDs of some sort, check out this interview with Violin Memory CEO Don Basile:

Violin tunes up for billion dollar flash gig: Chris Mellor (Saturday, June 25th)

Basile said: “Larry is telling people to use flash … That’s the fundamental shift in the industry. … Customers know their competitors will adopt the technology. Will they be first, second or last in their industry to do so? … It will happen and happen relatively quickly. It’s not just speed; its the lowest cost of data base transaction in history. [Flash] is faster and cheaper on the exact same software. It’s a no-brainer.”

Violin Memory is the current market leader in data center SSD installations for transactional data and analytical processing. The boost folks are getting from putting their databases on Violin Memory boxes is automatic, requires very little tuning, and the results are just flat out astounding. The ‘Larry’ quoted above is Larry Ellison of Oracle, the giant database maker. So with that kind of praise I’m going to say the tipping point is near, but please read the article. Chris Mellor lays out a pretty detailed future of evolution in SSD sales and new product development. 3-bit multi-level memory cells in NAND flash are what Mellor thinks will be the tipping point, as price is still the biggest sticking point for anyone responsible for bidding on new storage system installs.

However, while that price sticking point is a bigger issue for batch-oriented, off-line data warehouse analysis, for online streaming analysis SSD is cheaper per byte per second of throughput. So depending on the typical style of database work you do, or the performance you need, SSD is putting the big-iron spinning hard disk vendors to shame. The inertia of big capital outlays and cozy relationships with those vendors will make it harder for some shops to adopt the new technology (But IBM is giving us such a big discount!… WE are an EMC shop, etc.). However, the competitors of the folks who own those datacenters will soon eat all the low-hanging fruit a simple cutover to SSDs will afford, and the competitive advantage will swing to the early adopters.

*Late Note: Chris Mellor just followed up Monday night (June 27th) with an editorial further laying out the challenge to disk storage presented by the data center Flash Array vendors. Check it out:

What should the disk drive array vendors do, if this scenario plays out?They should buy in or develop their own all-flash array technology. Having a tier of SSD storage in a disk drive array is a good start but customers will want the simpler choice of an all-flash array and, anyway, they are here now. Guys like Violin and Whiptail and TMS are knocking on the storage array vendors customer doors right now.

via All aboard the flash array train? • The Register.

support technology

EDS mainframe goes titsup, crashes RBS cheque system • The Register

HP managers are reaping the harvest of their deep cost-cutting at EDS, in the form of a massive mainframe failure that crippled some very large clients, including the taxpayer-owned bank RBS.

via EDS mainframe goes titsup, crashes RBS cheque system • The Register.

Royal Bank of Scotland
Royal Bank of Scotland had a big datacenter outage

The Royal Bank of Scotland is a national bank and a big player in the European banking market. In datacenter speak, ‘five nines’ of availability is a guarantee the computer will stay up and running 99.999% of the time, which roughly calculates to 5.26 minutes of downtime allowed PER YEAR. This Royal Bank of Scotland computer was down 12 hours, which translates to roughly 99.86% availability. I think HP and EDS owe some people money for breaking the terms of their contract.

It just proves outsourcing is not a cure-all for cost savings. You as the customer don’t know when they are going to start dropping head count to inflate the value of their stock on Wall Street. And when the economy soured, they dropped head count like you wouldn’t believe. What does that mean for outstanding contracts to provide datacenter services? Well, it means all bets are off; you get whatever they are willing to give you. If you are employed to make and manage contracts like this for your company, be forewarned: your outsourcing company can fire everyone at the drop of a hat.
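The back-of-the-envelope math above is worth writing down once, since SLA arguments always come back to it. A quick sketch of what five nines permits per year, and what a 12-hour outage actually costs you:

```python
# Availability arithmetic: what "five nines" allows per year, and what a
# 12-hour outage works out to. (Ignores leap years for simplicity.)
HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year permitted at a given availability level."""
    return HOURS_PER_YEAR * 60 * (1 - availability)

def availability_pct(downtime_hours: float) -> float:
    """Actual availability percentage after a given outage within one year."""
    return (HOURS_PER_YEAR - downtime_hours) / HOURS_PER_YEAR * 100

print(round(allowed_downtime_minutes(0.99999), 2))  # 5.26 minutes per year
print(round(availability_pct(12), 2))               # 99.86 percent
```

Twelve hours of downtime is roughly 137 times the five-nines budget, which is why the contract penalties should sting.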

computers support technology

Email support vs. Trouble Ticket

Each of the people on the chain had to waste time navigating the chain merely because everyone else was too lazy to summarize what had happened up to that point.


Don’t I know it! Which makes me think about what other technologies we’ve adopted to help resolve end-user problems. Prior to working at my current job at a University, I had never heard of the term ‘trouble ticket’. But soon, like so many corporate trends outside the University, this one slowly infiltrated the educational enterprise, initially through our telecom help desk. The level of support their help desk attendants were expected to fulfill meant they needed full logs of every call that came in, and then a running log of the steps followed to remediate the problem. That way, no matter who was on call or who was on vacation, the work could be assigned to ‘somebody’ and the work would get done. End users love guarantees that someone is responsible and will do the work to fix the problem.

Fast forward to my poor friend @. All he gets is email forwarded from person to person, with no log other than the reply chain from previous recipients, which is never detailed enough to determine what’s been tried and what hasn’t. Oh, I feel your pain. Unraveling the email mess to get to the original problem sucks, no doubt. Maybe one issue is the number of intermediaries who couldn’t fix the problem? If the first point of contact had sent the email straight to Dan, he wouldn’t be sorting through any intermediate steps. But I think there’s a real problem with the workflow of how a technical problem is escalated to someone with greater knowledge, experience, and expertise.

So an executive decides to impose a trouble-ticketing system to help codify that workflow, right? The rush to trouble-ticketing systems is no help, for very similar reasons. It comes down to the human tendency toward what is expedient. Whether it’s forwarding an email or assigning a trouble ticket, you’re not “SOLVING” the problem. You’re merely “CONVEYING” the problem. My experience with trouble tickets is just as bad as it is with email. People don’t assign categories, people can’t be bothered with logging what they tried and failed at doing, and worse yet, with the routing of tickets you sometimes get assigned something by algorithm, not by an actual person. And unlike email, you can’t simply delete it and say you never got the assignment. Those tickets are the measure of your productivity. Every manager gets the monthly report: how many tickets collected, how many closed out, how many outstanding. Those outstanding tickets are the strikes against you as a service/support person, no matter what your actual title or responsibilities are. So whether it’s email or trouble tickets, it’s the same damned problem.

I will say though that the logging inherent in a trouble-ticket system will at least give you some insitutional memory or history of a particular problem, that the email completely obfuscates. But I agree with Dan, there’s got to be a better way that doesn’t rely on the Desktop/File Cabinet metaphor. Thinking about it more, the whole metaphor seems to be geared towards ‘collecting things’. You collect new emails, you file them away. You collect new tasks in Outlook, you file them away. You download a document to your desktop, you set it in a certain spot. You go to someone else’s computer in their home, you look at their desktop and it is FILLED WITH ICONS! Desktops and Filing Cabinets are for collectors. Dan is not a collector. Dan is the one fixing the problem, so he needs a metaphor that fits better.