Blog

  • Tuesday Night Deliverable: Build Mozilla (that’s right, build your own web browser)

    I knew ahead of time that the path of least resistance, based on reading the Mozilla Build page, would be to go with Fedora and follow the directions explicitly. I already had a Fedora LXDE install that I had managed to screw up NetworkManager on just today. And Chris Tyler had supplied us with shiny new full Fedora 15 Live discs, so I figured, why not just do the install and get a working Fedora back on that old partition? So I ran the install before dinner, tested that everything was working, and around 6:30 or so started on the Mozilla build instructions.

    Luckily everything about my install was 100% vanilla and un-customized, save for the fact that the GNOME 3 desktop would not run on the integrated Intel graphics chip (oh well). I actually cut and pasted each command line directly from the web page into my terminal and got the developer tools downloaded and updated. I got Mercurial all squared away, then did the clone of the Firefox repository. That didn’t take long, but then came the make build, and that took a while. Three hours of chugging along on a circa-2003 low-voltage Intel 830 CPU at about 1.2GHz with 640MB of RAM. I did however upgrade that internal HD to 250GB, so plenty of swap space to be had there.

    Since the build was taking so long, I had plenty of time to sign into the IRC channel and put in a status report. Chris was there and immediately recognized the RAM starvation issue. So I just patiently checked back to make sure that laptop wasn’t sleeping as it worked away. Three hours later, just as I was worrying it might not finish before I went to bed, I started seeing some concluding messages from make, and voilà, it was done. The Mozilla build directions tell you to go into dist/bin/ and run firefox from there. On my laptop I had to go into a platform-specific folder (something-gnu-something-i686) first, and then I found dist/bin/firefox. Launched firefox and it ran. So I think I picked the right OS, as I didn’t have any path idiosyncrasies to sort out, nor any missing libraries or binaries. Pretty straightforward on Fedora 15 32-bit Intel. Two thumbs up.

  • History of Sage


    The Sage Project Webpage http://www.sagemath.org/

    Sage is mathematical software, very much in the same vein as MATLAB, MAGMA, Maple, and Mathematica. Unlike these systems, every component of Sage is GPL-compatible. The interpretative language of Sage is Python, a mainstream programming language. Use Sage for studying a huge range of mathematics, including algebra, calculus, elementary to very advanced number theory, cryptography, numerical computation, commutative algebra, group theory, combinatorics, graph theory, and exact linear algebra.
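    Since Sage’s language is Python, the flavor is easy to show even without Sage installed. Here is a minimal sketch in plain Python (standard library only; the `solve_2x2` helper is my own illustration, not a Sage API) of the kind of exact, rounding-free arithmetic Sage performs for exact linear algebra:

```python
from fractions import Fraction

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] * (x, y) = (e, f) exactly, via Cramer's rule."""
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("matrix is singular")
    x = Fraction(e * d - b * f) / det
    y = Fraction(a * f - e * c) / det
    return x, y

# 1x + 2y = 5, 3x + 4y = 6  ->  x = -4, y = 9/2, with no rounding anywhere
x, y = solve_2x2(1, 2, 3, 4, 5, 6)
print(x, y)  # -4 9/2
```

    The answers come back as exact rationals rather than floats, which is exactly the point of a system like Sage: no floating-point error creeps into the algebra.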

    Explanation of what Sage does by the original author William Stein 

    (Long – roughly 50 minutes)

    The original developer, William Stein (http://wstein.org/), has written his own history of Sage’s development. The wiki (http://wiki.sagemath.org/) has a list of participating committers. Discussion lists for developers are mostly done through Google Groups, with associated RSS feeds. The Mercurial repository’s start date is Sat Feb 11 01:13:08 2006; Gonzalo Tornaria seems to have loaded the project in at that point. The current source code is listed in Trac, along with the committers for the most recent release of Sage (4.7).

    • William Stein (wstein) is still very involved, based on frequency of commits
    • Michael Abshoff (mabs) is ranked by Ohloh second only to William Stein in commits and time on the project. He has now left the project, according to the Trac log.
    • Jeroen Demeyer (jdemeyer) commits a lot
    • J. H. Palmieri (palmieri) has done a number of tutorials and documentation; he’s on the IRC channel
    • Minh Van Nguyen (nguyenminh2) has done some tutorials, documentation, and work on the Categories module. He also appears to be the sysadmin on the wiki
    • Mike Hansen (mhansen) is on the IRC channel irc.freenode.net#sagemath and is a big contributor
    • Robert Bradshaw (robertwb) has made some very recent commits

    There is a changelog for the most recent release (4.7) of Sage. The moderators of irc.freenode.net#sagemath are Keshav Kini (who maintains the Ohloh info) and schilly@boxen.math.washington.edu. Version 4.7 was a big milestone release, with its tickets listed by module, and Ohloh lists the top contributors to the project. There’s an active developer and end-user community. Workshops are tracked on the wiki; Sage Days workshops tend to be hackfests for interested parties. More importantly, prospective developers can read up on how to get started and what the process is as a Sage developer.

    Further questions need to be considered. Looking at the source repository and the developer blogs, ask the following:

    1. Who approves patches? How many people? (There’s a large number of people responsible for reviewing patches, if I had to guess it could be 12 in total based on the most recent changelog)
    2. Who has commit access? & how many?
    3. Who is involved in the history of the project? (That’s pretty easy to figure out from the Ohloh and Trac websites for Sage)
    4. Who are the principal contributors, and have they changed over time?
    5. Who are the maintainers?
    6. Who is on the front end (user interface) and back end (processing or server side)?
    7. What have been some of the major bugs/problems/issues that have arisen during development? Who is responsible for quality control and bug repair?
    8. How is the project’s participation trending, and why? (It seems to have stabilized after a peak of 41 contributors about two years ago; the Ohloh graph of commits shows peak activity in 2009 and 2010)

    Note that the Gource visualization only covers the period since 2009; the earliest entry in the Mercurial repository I could find was 2005. Sage was already a going concern before the Mercurial repository was put on the web, so the simulation doesn’t show the full history of development.
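    Several of the questions above (who commits, who the principal contributors are, how participation is trending) can be answered mechanically from the repository log itself. A small sketch — `top_committers` is a hypothetical helper of my own, though the Mercurial template keywords `{author}` and `{date|shortdate}` are real — that tallies commits per author:

```python
from collections import Counter

def top_committers(log_lines, n=5):
    """Tally commits per author from lines shaped like 'author<TAB>date'."""
    authors = (line.split("\t", 1)[0] for line in log_lines if line.strip())
    return Counter(authors).most_common(n)

# Feed it the output of, e.g.:  hg log --template "{author}\t{date|shortdate}\n"
sample = ["wstein\t2006-02-11", "mabs\t2007-03-01", "wstein\t2008-05-05"]
print(top_committers(sample))  # [('wstein', 2), ('mabs', 1)]
```

    Run over the full Sage log, a ranking like this is what Ohloh is computing behind the scenes.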

  • IRC as the Front channel


    Assignment: Comment on using online synchronous communication as opposed to face to face communication

    For the exercise we did, trying to get someone to edit our wiki user profile page, it definitely reminded me of the stories told about the first link on the ARPANET. They had a phone line alongside the computer link between UCLA and SRI, and they would type one letter in and then ask over the phone whether it was showing up on the other end. Then the system crashed, they rebooted it, started over, and eventually got the whole LOGIN typed out.

    Brevity also seems to be the order of the day. Longer-form things are way better suited to wikis or blog entries. If you can’t ask a question in a Twitter-sized 140 characters or less, you might as well make an actual phone call, or Skype, or just email it. I guess I prefer the longer format generally when it comes to text. And as Mike Gage pointed out, when you choose to do a private channel it’s essentially an IM client instead of IRC.

    On the upside, however, when you are there the immediacy cannot be matched, especially if you want to throw the IRC client into the background. Just knowing that people are logged in is kinda like having them in the room but at different desks, or even just down the hallway. That’s a way greater assurance than waiting for a discussion board, newsgroup, or email return message. Or a tweet for that matter, as the latency of responses there is still much slower than IRC. So you gotta pick the right tool for the right job. I’ll have to really try to figure out where it fits in when I’m working on stuff.

  • Whodunnit: An Exercise in Passive Voice (via The Daily Post at WordPress.com)

    Point taken: try to limit the use of ‘to be’ + verb + ‘by’. I’m probably more guilty of this than most. That, and the use of ‘probably’.

    We've all heard the non-apology "mistakes were made." Chances are that some of us have even used it when trying to admit a mistake without quite fessing up to it. This and similar phrases are so tempting because they're indirect about whodunnit. And they're indirect because they use a little thing called the passive voice. When talking about the passive voice, people often mention that it obscures the agent, which is just a fancy way of saying it … Read More

    via The Daily Post at WordPress.com

  • AppleInsider | Apple seen merging iOS, Mac OS X with custom A6 chip in 2012


    Rumors of an ARM-based MacBook Air are not new. In May, one report claimed that Apple had built a test notebook featuring the same low-power A5 processor found in the iPad 2. The report, which came from Japan, suggested that Apple officials were impressed by the results of the experiment.

    via AppleInsider | Apple seen merging iOS, Mac OS X with custom A6 chip in 2012.

    Following up on an article they did back on May 27th, and one prior to that on May 6th, AppleInsider does a bit of prediction and prognostication about the eventual fusion of iOS and Mac OS X. What they see triggering this is an ARM chip able to execute 64-bit binaries across all of the product lines (the fabled A6). How long would it take to do this consolidation and interweaving? How many combined updaters, security patches, and Pro App updaters would it take to get OS X 10.7 to be ‘more’ like iOS than it is today? Software development is going to take a while, and it’s not just a matter of cross-compiling software built for Intel chips to an ARM chip.

    Given that 64-bit Intel Atom chips are already running in the new SeaMicro SM10000 (x64), I’m sure it won’t be long before the comparable ARM Cortex-A15 chip hits full stride. The designers have been aiming for a four-core ARM design, encompassed by the Cortex-A15 release, real soon now (RSN). The next step, after that chip is licensed, piloted, tested, and put into production, will be a 64-bit clean design. I’m curious to see whether 64-bit will be applied across ALL the different product lines within Apple. Especially when power usage and thermal design power (TDP) are considered: will 64-bit ARM chips be as battery friendly? I wonder. True, Intel jumped the 64-bit divide on the desktop with the Core 2 Duo line some time ago and made those chips somewhat battery friendly. But they cannot compare at all to the 10+ hours one gets today on a 32-bit ARM chip in the iPad.

    Lastly, app developers will also need to keep their Xcode environment up to date and merge in new changes constantly, right up to the big cutover to ARM x64. No telling what that’s going to be like, apart from the two problems I have already raised here. In the run-up to 10.7 Lion, Apple was very late in providing the support and tools to let developers get their apps ready. I will say, though, that in the history of Apple’s hardware/software migrations, they have done more of them, more successfully, than any other company. So I think they will be able to pull it off, no doubt, but there will be much wailing and gnashing of teeth. And hopefully we end users of the technology will see something better out of it, something better than a much bigger profit margin for Apple (though that seems to be the prime mover in most recent cases, as Steve Jobs has done the long slow fade into obscurity).

    If ARM x64 is inevitable and iOS on Everything too, then I’m hoping things don’t change so much I can’t do things similarly to the way I do them now on the desktop. Currently on OS X 10.7 I am ignoring completely:

    1. Gestures
    2. Mission Control
    3. Launchpad
    4. App Store (not really, because I had to download Lion through it)

    Let’s hope this roster doesn’t get even longer over time as iOS becomes the de facto OS on all Apple products, because I was sure hoping the future would be brighter than this. And as AppleInsider quoted on May 6th,

    “In addition to laptops, the report said that Apple would ‘presumably’ be looking to move its desktop Macs to ARM architecture as well. It characterized the transition to Apple-made chips for its line of computers as a ‘done deal’.”

  • First Sungard goes private and now Blackboard

    The buyers include Bain Capital, the Blackstone Group, Goldman Sachs Capital Partners, Kohlberg Kravis Roberts, Providence Equity Partners and Texas Pacific Group. The group is led by Silver Lake Partners. The deal is a leveraged buyout – Sungard will be taken private and its shares removed from Wall Street.

    via Sungard goes private • The Register, posted in CIO, 29th March 2005 10:37 GMT

    RTTNews – Private equity firm Providence Equity Partners, Inc. agreed Friday to take educational software and systems provider Blackboard, Inc. (BBBB: News ) private for $45 per share in an all-cash deal of $1.64 billion.

    It would appear that Providence Equity Partners now owns two giants of the higher-ed outsourcing industry: Sungard and Blackboard. What does this mean? Will there be consolidation where the two companies overlap? Will there be attempts to steal customers or to upsell each other’s products?

  • Google confirms Maps with local map downloads as iOS lags | Electronista


    Google Maps gets map downloads in Labs beta. After a brief unofficial discovery, Google on Thursday confirmed that Google Maps 5.7 has the first experimental support for local map downloads.

    via Google confirms Maps with local map downloads as iOS lags | Electronista.

    Google Maps for Android is starting to show a level of maturity previously seen only on dedicated GPS units. True, there is still no offline routing feature (you need access to Google’s servers for that functionality), but you at least get a downloaded map that you can zoom in and out on without incurring heavy data charges. Overseas, you may rack up some big charges as you navigate live maps via the Google Maps app on Android. That is now partially solved by downloading, in advance, the immediate area you will be visiting (within a few miles’ radius). It’s an incremental improvement, to be sure, and it makes Android phones a little more self-sufficient without making you regret the data charges.

    Apple, on the other hand, is behind. Hands down, they are letting third-party GPS development go to folks like Navigon and TomTom, who both charge somewhat hefty fees to license their downloaded content. Apple’s Maps doesn’t compare to Navigon or TomTom, much less Google, for actual usefulness in a wide range of situations. And Apple isn’t currently using the downloadable vector-based maps introduced with this revision of Google Maps for Android (version 5.7), so it will struggle with large JPEG images as you pan and scan around the map to find your location.

  • SeaMicro pushes Atom smasher to 768 cores in 10U box • The Register


    An original SM10000 server with 512 cores and 1TB of main memory cost $139,000. The bump up to the 64-bit Atom N570 for 512 cores and the same 1TB of memory boosted the price to $165,000. A 768-core, 1.5TB machine using the new 64HD cards will run you $237,000. That’s 50 per cent more oomph and memory for 43.6 per cent more money. ®

    via SeaMicro pushes Atom smasher to 768 cores in 10U box • The Register.

    SeaMicro continues to pump out the jams, releasing another updated chassis in less than a year. There is now a grand total of 768 processor cores jammed into that 10U-high box. That led me to believe they had just eclipsed the compute-per-rack-unit of the Tilera and Calxeda massively parallel cloud-servers-in-a-box. But that would be wrong, because Calxeda is making a 2U rack unit hold 120 four-core ARM CPUs. That gives you a grand total of 480 cores in just 2 rack units; multiply that by 5 and you get 2,400 cores in 10U of rack space. So advantage Calxeda in total core count. However, let’s also consider software. The Atom, the CPU SeaMicro has chosen all along, is an Intel-architecture chip, and an x64 one at that. That is the best of both worlds for anyone who already has a big investment in Intel-binary-compatible OSes and applications. It is most often the software and its legacy pieces that drive the choice of which processor goes into your data cloud.
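    The core-count arithmetic above is worth writing out explicitly. A quick sketch using the figures quoted in the post (cores per rack unit is just total cores divided by U):

```python
# Back-of-the-envelope core density, using the figures quoted in the post.
seamicro_cores = 768                       # 768 Atom cores in one 10U chassis
calxeda_cores_2u = 120 * 4                 # one 2U Calxeda box: 120 four-core ARM CPUs
calxeda_cores_10u = calxeda_cores_2u * 5   # five such boxes stacked in 10U

print(seamicro_cores / 10)     # 76.8 cores per rack unit
print(calxeda_cores_10u / 10)  # 240.0 cores per rack unit -> Calxeda wins on density
```

    So on raw density Calxeda comes out roughly three cores ahead for every one of SeaMicro’s, which is why the software question matters so much in the comparison.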

    Anyone with a clean slate to start from might be able to choose between Calxeda and SeaMicro for their applications and infrastructure. And if density and thermal design point per rack unit are very important, Calxeda will suit your needs, I would think. But who knows? Maybe your workload isn’t as massively parallel as a Calxeda server assumes, and you might have a much lower implementation threshold getting started on an Intel system, so again, advantage SeaMicro. A real industry analyst would look at these two companies as complementary: different architectures for different workloads.

  • NoSQL is What? (via Jeremy Zawodny’s blog)


    Great set of comments, along with a very good description of the advantages of using NoSQL in a web application. There seem to be quite a few philosophical differences over whether NoSQL needs to be chosen at the earliest stages of ANY project. But Jeremy’s comments more or less prove that you pick the right tool for the right job. ’Nuff said.

    Jeremy Zawodny: I found myself reading NoSQL is a Premature Optimization a few minutes ago and threw up in my mouth a little. That article is so far off base that I’m not even sure where to start, so I guess I’ll go in order. In fact, I would argue that starting with NoSQL because you think you might someday have enough traffic and scale to warrant it is a premature optimization, and as such, should be avoided by smaller and even medium sized organizations.  You … Read More

    via Jeremy Zawodny’s blog

  • Apple patents hint at future AR screen tech for iPad | Electronista


    Apple may be working on bringing augmented reality views to its iPad thanks to a newly discovered patent filing with the USPTO.

    via Apple patents hint at future AR screen tech for iPad | Electronista. (Originally posted at AppleInsider at the following link below)

    Original Article: Apple Insider article on AR

    This is just a very brief look at a couple of patent filings by Apple, with some descriptions of potential applications. They seem to want to use the technology for navigation, using the onboard video camera. One half of the screen shows the live video feed; the other half is a ‘virtual’ 3D rendition of that scene, to let you find a path, or maybe a parking space, in between all those buildings.

    The second filing mentions a see-through screen whose opacity can be regulated by the user. The information display takes precedence over the image seen through the LCD panel, and the panel defaults to totally opaque when no voltage is applied (an in-plane switching design for the LCD).

    However, the most intriguing part of the story as told by AppleInsider is the use of sensors on the device to determine angle, direction, and bearing, which are then sent over the network. Why the network? Well, the whole rendering of the 3D scene described in the first patent filing is done somewhere in the cloud and spit back to the iOS device. No onboard 3D rendering needed, or at least not at that level of detail. Maybe those datacenters in North Carolina are really cloud-based 3D rendering farms?