The reason many developers become involved in open source projects is to “scratch an itch”: to fix a problem they are having with some software (or the lack of existing software) when trying to accomplish something. Wikipedia works in much the same way: readers who see problems, omissions or errors (at least in theory) become editors who fix them. In general this works pretty well in terms of producing excellent output which meets the needs of the community, at least of those contributing.
Textbooks have long been the domain of publishers and authors contracted by those publishers. K-12 textbooks especially have been notoriously inaccurate and out of date when it comes to sciences and technology. I think that openly licensed digital textbooks are set to change that. Much of the outdated information comes from the long lead time for traditional textbook editing and printing. When inaccurate information is found it is often either too late to do anything about it or the person finding it does not know who to contact to get the error corrected (or the person they contact can’t or won’t do anything about it).
The advantage of an openly licensed textbook is that any teacher or researcher who finds an error can correct it and republish the updated book. These “patches” can either be integrated upstream or “forked” in much the same way as is done with open source code. We certainly want schools and teachers using these textbooks to vet them prior to implementing them in the classroom, but this is not a problem with the model itself; it is just a shift from trusting the publisher to vet the textbook (was that ever a good idea anyway?) to trusting the school or district to do it. In some cases it may also make sense to have experts “certify” versions of the textbook as accurate. These certifications might also provide some kind of revenue model.
In addition, the fact that digital textbooks can be easily corrected and redistributed or updated as new technologies emerge and discoveries are made helps keep information up to date and as accurate as possible. Don’t get me wrong: I think it will still be quite a number of years before schools replace all their textbooks, but the cost savings and advantages are significant, so I suspect it will begin to happen sooner rather than later.
Two websites which are helping to move things in this direction and which are providing some digital textbooks to schools now are the California Learning Resources Network Free Digital Textbook Initiative and CK-12 Foundation.
For a couple of years now the Gopher Amateur Radio Club has been providing scanner feeds of the digital APCO P25 system in the Minneapolis/St. Paul metro area to radio enthusiasts around the world through the ScanAmerica.us website (now a part of the RadioReference.com family). In fact, I specifically purchased the fabulous GRE PSR-600 to replace the Radio Shack PRO-2096 because it allowed us to transmit the talkgroup identifier as well as the audio, through some hacks attaching the remote control software to the stream broadcast software.
Thanks to K1PGV it looks like there is now some software called ScannerCast which integrates the polling function to get the talkgroup ID from both Uniden and GRE scanners which support it and tags the audio with the talkgroup. This is a much neater solution, if a little more limiting in that it doesn’t allow for simultaneous remote control of the radio.
As regular readers will know I’m quite interested in the sharing of information. This includes support for projects such as the Internet Archive. One such example of their fantastic work is the sharing of public domain books which have been scanned, many by the ill-fated Microsoft book scanning project. Unfortunately, the Google Books project is (ironically perhaps) much more restrictive on licensing. My own feeling is that these books should be shared as much as possible and attaching new licensing restrictions to public domain books just because you have scanned them is ludicrous.
One potential solution to this is to scan books yourself, but anyone who has done even more than a few pages of a book on a flatbed scanner knows that this is a long and tedious process. The commercial scanners used by Microsoft, Google and the like are much more efficient but also much more expensive. Luckily, the do-it-yourselfers have come along with their own book scanning solution. It’s not as elegant as the commercial scanners but it’s definitely inexpensive. Personally, I’m hoping someone comes along with something in the middle: a pre-built book scanning frame and platen to which you can add cameras and accessories, used with the open source software which has already been developed.
You can watch the original Instructables video on building a DIY book scanner or visit the new DIY Book Scanning site with news and forums.
If you’re brave and resourceful, or just desperate enough, it is often possible to bring electronics back from the dead after they have had liquid spilled on them. I’ve recently been working on a friend’s laptop which had water spilled on it. I’ve gone from it doing nothing at all to being apparently fully operational, with the exception of the LCD backlight, which I’m still working on. Note that this can be a long process which requires drying time and lots of patience. It could easily stretch to weeks and many hours of assembly and disassembly work, which is why it’s often a better bet to just replace the problematic component.
My own process involves disassembling everything and giving it a good scrubbing with a toothbrush and some very pure electronics-grade alcohol, followed by a scrubbing with contact cleaner. I then let everything dry out for a day or two and repeat. Reassemble everything and test. Many times only certain things will no longer be working, which can give you some hints as to where to look for corrosion and which areas of the printed circuit board need further attention. It may take several iterations of this process before you arrive at something useful. One bit of good news about those small surface-mounted components which are difficult to replace in the field: they are much easier to scrub down than their larger counterparts.
For more information and tips on bringing back liquid damaged electronics I suggest reading this article at GRYNX.
I love Google Apps, the service which hosts email, calendaring, online docs and chat for your own domain name free of charge at Google. Sure, there are some disadvantages, and I still don’t much like working in the Google Docs word processor, but the email and calendaring are great. I also love the iGoogle service, where you can customize a homepage with RSS feeds and include a Gmail and Google Docs module.
The problem is that internal communication at Google between these projects is pretty much non-existent, or at least it appears that way. iGoogle integrates wonderfully with the consumer Gmail and Google Docs services but not at all with the hosted Google Apps service, which is becoming ever more popular as companies, particularly universities and colleges, around the world start to adopt it. One of my customers was particularly annoyed that documents created with their hosted apps couldn’t show up on their iGoogle page and asked me to find a solution. After much frustration the best solution I could come up with is a series of hacked-together iGoogle gadgets which replicate as much of the functionality as possible from the official Gmail, Calendar and Google Docs iGoogle apps:
1) Google Docs for hosted domains iGoogle gadget
2) Google Mail for hosted domains iGoogle gadget
3) Google Talk for hosted domains iGoogle gadget
4) Google Calendar for hosted domains iGoogle gadget
5) Google Tasks for hosted domains iGoogle gadget
Of course, this is really just a workaround and doesn’t do as nice a job of integration as the regular apps do. In related news, Google account authorization is a mess because of this as well. For example, you can log into the iGoogle site (and other Google services) using a non-Gmail address which is linked to a regular Gmail account, but this is confusing if you also have a hosted apps mail account on that domain. A mess all around. Google…if you’re listening, you really need to make some improvements to how your hosted apps service integrates with the rest of your services and sites!
This summer I decided to explore a bit more about packet radio. You know, transmitting small amounts of information over amateur radio frequencies at very slow speeds. Prior to the advent of fairly ubiquitous high speed Internet connections this was a popular way for hams to move around information related to their activities. As it turns out, the need for a dedicated packet modem (TNC) has gone away and the functionality can now be replaced with software and a sound card. One of my specific interests, though, was in learning about and improving the APRS system in the Twin Cities, which allows people to automatically send GPS position information along with remote weather station packets and a variety of other small pieces of data. As it turns out APRS data can also be sent and received with a soundcard attached to a radio, forgoing the TNC and reducing the barrier to entry.
The primary program for soundcard TNC emulation is a free program called AGWPE, and my first task was to learn how to set up and configure it, as some of my friends had tried and failed to get it working in the past. Thanks to the great resources on the Internet I came upon the website of KC2RLM, which contains excellent instructions for setting up and using the AGWPE program as well as a number of applications which interact with the software.
Once I had gotten the basic soundcard TNC software up and running, the next step was to start experimenting with APRS software. I tried lots of programs, including the common UI-View32, but found the easiest to use and most modern to be AGWTracker, a trialware program by the maker of AGWPE which supports all kinds of map layers such as Google Maps, MapPoint, etc., greatly improving the user experience. Of course, if you want to send updates back to the APRS-IS Internet network you need a password to go along with your callsign. As it turns out the password is generated by a simple, documented hashing algorithm designed just to prevent non-hams from injecting data into the network. You can get your own APRS password or learn more about the algorithm at the APRS Password Generator website.
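For the curious, the passcode algorithm is simple enough to sketch in a few lines of Python. This is my own illustration of the publicly documented algorithm, not code from the generator site: the base callsign is uppercased, each character is XORed into alternating bytes of a hash seeded with 0x73E2, and the result is masked to 15 bits.

```python
def aprs_passcode(callsign: str) -> int:
    """Compute the (publicly documented) APRS-IS passcode for a callsign."""
    call = callsign.split("-")[0].upper()  # strip any SSID, e.g. "-9"
    code = 0x73E2                          # documented seed value
    for i, ch in enumerate(call):
        if i % 2 == 0:
            code ^= ord(ch) << 8           # even positions hit the high byte
        else:
            code ^= ord(ch)                # odd positions hit the low byte
    return code & 0x7FFF                   # mask to 15 bits

print(aprs_passcode("N0CALL"))  # the placeholder callsign gives 13023
```

Since the whole thing is just XORs over at most a handful of characters, it is obviously access control in name only, which is exactly the point: it keeps casual non-hams out without pretending to be cryptography.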
I’m not one who really enjoys writing code. In fact, I tend to avoid it whenever possible as it’s just not that interesting to me. That said, from time to time I have projects to complete where no existing software quite meets my needs and writing my own makes the most sense. This summer I was in the midst of collecting information and publishing a large extended family directory.
Clearly, the best way to do this is to store the data in a database. Because relational databases are the most prevalent, I already had quite a bit of experience with MySQL, and I knew I wanted to eventually make the data available via a PHP web application, I decided to go with a MySQL backend. The trick here is that families aren’t as relational as you might originally think. Families are really hierarchical data (parents, children, grandchildren, etc.) and relational databases have a hard time storing and recalling things based on these types of relationships. For example, a typical family query might be for a list of all descendants of someone. As it turns out, families aren’t the only hierarchical data people might try to store in a relational database. Any business database which shows managers and reporting personnel is also a hierarchical data set, albeit usually with fewer levels and much less complication (no need to track divorces, remarriages, spouses, etc.).
I was able to find and read up on several methods for storing hierarchical information in a relational database, but the one I ended up using is the one proposed by Rob Volk of SQLTeam, where the computationally expensive hierarchical relationships are stored in two additional fields for each record (depth and lineage), with parseable separators between generations. The database is initially loaded through a series of several (expensive) recursive queries but can then be maintained through triggers when information is added or updated.
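To illustrate the depth/lineage idea, here is a minimal sketch in Python using SQLite rather than the MySQL-plus-triggers setup described above; the table name, column names and sample family are my own invention, not taken from Rob Volk's article. Each row stores its depth below the root and a delimited "lineage" path of ancestor IDs, so finding all descendants of anyone collapses into a single non-recursive LIKE query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE person (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        parent  INTEGER,   -- NULL for the root ancestor
        depth   INTEGER,   -- generations below the root
        lineage TEXT       -- e.g. '/1/2/': IDs of all ancestors, in order
    )""")

def add_person(name, parent_id=None):
    """Insert a person, deriving depth and lineage from the parent row."""
    if parent_id is None:
        depth, lineage = 0, "/"
    else:
        pdepth, plineage = conn.execute(
            "SELECT depth, lineage FROM person WHERE id = ?",
            (parent_id,)).fetchone()
        depth, lineage = pdepth + 1, f"{plineage}{parent_id}/"
    cur = conn.execute(
        "INSERT INTO person (name, parent, depth, lineage) VALUES (?,?,?,?)",
        (name, parent_id, depth, lineage))
    return cur.lastrowid

grandma = add_person("Grandma")
mom     = add_person("Mom", grandma)
me      = add_person("Me", mom)

# All descendants of Grandma, any number of generations deep, in one query:
descendants = conn.execute(
    "SELECT name FROM person WHERE lineage LIKE ?",
    (f"%/{grandma}/%",)).fetchall()
```

The separators matter: searching for `/1/` rather than bare `1` keeps person 1's descendants from matching persons 11, 21 and so on. In the real MySQL version the same derivation of depth and lineage would live in insert/update triggers rather than in application code.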
In addition to my technical interests I’m involved in many other things including music. Great advances have been made in the integration of music and computers in the last 20 years. As with many other industries, computers have changed the way music is created, disseminated and recorded. One of the many areas where computers have been used is in the typesetting of sheet music. This has allowed for major music publishers to reduce costs and composers, arrangers and musicians to widely disseminate music without the need for an engraver and publisher.
Like many advances, the transition from hand-engraved music scores to computer-typeset scores has not been without problems. Most notably, computer-typeset music can be harder for musicians to read and interpret, and it simply doesn’t look as musical or beautiful as hand-engraved music. LilyPond has an excellent and illustrative essay on the problems of computerized engraving which I encourage you to read.
The Finale program by MakeMusic (formerly Coda Music) and Sibelius by Avid are undoubtedly the biggest players in the music notation software field, but both suffer from the problems of automated engraving (lacking life, more difficult to read and interpret, etc.). So what is a composer/arranger to do? Assuming you can’t afford to hire a professional hand engraver (if you can even still find one), the answer is actually quite clear. Lamenting the decline of music score quality, a group of developer/musicians got together and wrote a program called LilyPond. When used correctly this program produces some of the most beautifully engraved music you can find. Best of all, the program is free and open source, meaning that anyone can see the code that is used to generate the music and contribute fixes and enhancements. The program is cross-platform and works on Linux, Mac OS X and Windows (though Linux and Macintosh are admittedly easier to get running).
As it turns out there is a bit of a catch though… While programs like Finale and Sibelius offer a graphical notation editor, LilyPond is a specialized program which does nothing more than automatic engraving (typesetting) based on a textual input file. This means that to use LilyPond you need to learn how to manually describe the music in the LilyPond text format, a significant impediment for many composers/arrangers. Or do you?
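To give a sense of what that textual description looks like, here is a tiny, hypothetical LilyPond input file of my own (not taken from any real score) which engraves two bars of a C-major scale:

```lilypond
\version "2.12.0"
% Pitches are entered by letter name with durations attached;
% \relative mode keeps each note in the octave nearest the previous one.
\relative c' {
  \key c \major
  \time 4/4
  c4 d e f | g a b c |
}
```

Running this file through LilyPond produces a typeset PDF of the scale. It reads more like source code than sheet music, which is exactly the hurdle the graphical editors below try to remove.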
Recently there has been quite a bit of movement on developing some free and open source music notation programs, many of which support exporting to the LilyPond format. While the exported LilyPond file may still need some manual tweaking to get the best possible score it’s certainly much easier than entering an entire score into LilyPond format by hand.
One of the oldest and best known LilyPond-compatible editors is NtEd (of NoteEdit lineage) by J. Anders. Though not immediately apparent from the feature list, this Linux software supports exporting to LilyPond. A more recent entrant in the music notation space is Denemo, a multi-platform WYSIWYG notation editor designed specifically for LilyPond. Finally, one of the most promising solutions seems to be MuseScore, a project stemming from the MusE MIDI/audio sequencer. MuseScore looks to be one of the most consumer-friendly options, with packages for Linux, Macintosh and Windows readily available, a fairly decent engraving engine of its own, and LilyPond support.
Last week I served as a guest panelist for This Week in Law (aka TWiL) and Episode 20 is now available for download. I think it’s a pretty good episode so if you’re at all interested in the legal challenges of dealing with technology I hope you’ll give it a listen. I had a great time with my fellow panelists Evan Brown, Colette Vogele, Ernie Svenson and host Denise Howell recording this episode.
The discussion points are available at delicious. We discussed several contemporary topics including the debate about iPodMeister which trades you an iPod for all your used CDs while still giving you a copy of all your music ripped from the CDs, the appropriate use of social networking, Obama’s Blackberry, P2P networks and the decline of the recording industry, as well as many more topics. I think it’s worth a listen but don’t take my word for it, check it out yourself at TWiT.tv/twil20.
There are many technologies which I am very much on top of because I use them on a regular basis. There are others that I interact with periodically, which is enough to stay abreast of developments and do basic troubleshooting. But from time to time there are technologies that I’m only peripherally aware of and have only a basic understanding of. One such technology is virtualization, or virtual machine software.
For almost ten years I’ve been hearing about software like that made by VMware, which allows a virtual computer to run inside of a host operating system. To this day I haven’t done anything more with this type of software than to fire it up and see that indeed it does work. It’s not that I don’t see the advantages; it’s just that I haven’t personally encountered a situation where I can justify the time and effort it would take to set it up. That said, I do like to know what’s going on in all areas of technology, and what I’ve been hearing lately is some movement in the open source virtualization arena.
For some years now I’ve known about projects such as Xen, Bochs and QEMU. The problem with these solutions is that they are not really open source replacements for commercial virtual machine software like VMware. I’ve heard great things about Xen and its ability to virtualize Linux systems (on Linux systems). While this is valuable in many cases, it doesn’t cover most of what I want to do, which is to run a guest OS on an entirely different host OS. Bochs is more on target, but it is an effort to emulate the x86 platform entirely in software, a bit heavy-duty (and with significant speed costs) for what I normally want: running an x86 guest OS on an x86 host, for example a Windows guest on a Linux host. QEMU has the upper hand here. While it’s still a big, heavy emulator, there is some closed source accelerator code which can help in x86-on-x86 situations. Of course the closed source part is a bit of a drag. Still, the real problem with all of these is that they are far more difficult to configure (and especially to set up a new guest OS on) than their commercial counterparts.
Well, the world may be changing. What I’ve been hearing recently is that an open source project from Sun called VirtualBox looks like it will give some of the commercial vendors a run for their money (so to speak). There is no doubt that VirtualBox is still in the early stages of life, but the development team seems to be putting some real effort into it and new releases have been timely. I’ll be excited to follow the continued development of this product.