Taking the JAVA plunge – into the last mile

After some hard work at getting my ideas together and then some marathon debugging sessions, my application is almost working. It has taken me a lot longer than I envisioned – for two real reasons.

Firstly, for a long time I had only an approximate idea of what screens and what workflow I wanted in the application, but I was finding it difficult to make it concrete. While I didn’t have a clear picture of what I was aiming at, my development didn’t proceed.

In the end, I found that the best approach was to draw out the screen layouts and then merge them onto a single page. I then printed them out and annotated the copies with the actions and how they were going to control variables across the application. This picture shows the layout that I drew (which is close to what I will end up with).

Unfortunately the picture that I had placed here in this blog entry has been lost.

I found Inkscape (www.inkscape.org), under both Windows and Linux, to be a really good tool for this. I first drew each page as a separate drawing, and then used the import tool to pull them all together into a single drawing.

Unfortunately, I found Inkscape cumbersome for drawing arrows to represent the transitions between the pages. Initially I just used marks on the printed copies, but these got difficult to see with so many scribbles, so in the end I used Kivio under KDE to draw single simple boxes for the pages and coloured arrows for the flows between them.

UPDATE: I now use Dia for this, and export the result as an SVG, which I can then size and export as a PNG image using Inkscape.

The second reason for lack of progress was trying to be too clever by not creating session objects to store persistent data in. I was trying to carry the data between pages, and getting screwed up when a page was recreated without access to the information I wanted. As soon as I created a few Tapestry “Application State Objects” for my session, and just got standard forms to load data to and from these objects, everything became a lot simpler.
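As a minimal sketch of what such a state object can look like (the class and property names here are my own invention, not from the actual application): in Tapestry 4 an Application State Object is just a plain bean that the framework creates on demand, keeps in the session, and injects into pages.

```java
// A hypothetical Application State Object -- just a plain bean.
// Tapestry 4 creates it on demand and stores it in the HTTP session;
// the wiring (a contribution in hivemodule.xml, plus @InjectState on
// the pages that use it) is framework configuration, not shown here.
class Visit {
    private String currentPersonId;  // the family-tree record being edited
    private boolean loggedIn;

    public String getCurrentPersonId() { return currentPersonId; }
    public void setCurrentPersonId(String id) { currentPersonId = id; }

    public boolean isLoggedIn() { return loggedIn; }
    public void setLoggedIn(boolean value) { loggedIn = value; }
}
```

Standard form components can then bind straight to properties of this object, so the data survives page recreation without any hand-carrying between pages.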

Now I am just trying to complete the JavaScript for a dynamic piece on my Edit page, looking at using CSS to control the formatting on all pages (and using the same to provide mouse-hover tool tips), and then I will be moving on to production deployment.

I’ll report on that soon.

Taking the JAVA plunge – some weeks later

It has been a while since my last entry, but here I am sitting in a hotel room in Istanbul, having discovered that it has wireless LAN capability.

I have started my first application having picked Tapestry 4.0 (currently at beta 5, but upgrading as soon as new versions come out) and iBATIS to connect to the database. I have been able to access my database and display pages with the data installed.

I have sort of stuck with Eclipse, but used the version bundled with the Web Tools Platform. This combination seems to give me the ability to run and then test my application in debug mode, under a Tomcat instance controlled by Eclipse.

Now that I have got to use Tapestry in practice, I find it very easy to use, and I have been able to make a good start on my “Family Tree” application.

Having looked at Hibernate and Cayenne as database frameworks, I decided to stick with iBATIS as my database access framework. It strikes the right combination of being simple to understand yet powerful in use.
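For flavour, an iBATIS query is just SQL held in an XML map. A hypothetical fragment (the namespace, table and class names are invented for illustration, using iBATIS 2.x syntax):

```xml
<sqlMap namespace="Person">
  <!-- Returns one row mapped onto a plain Person bean -->
  <select id="getPerson" parameterClass="int" resultClass="tree.Person">
    SELECT id, name, born FROM person WHERE id = #value#
  </select>
</sqlMap>
```

Java code then calls something like sqlMapClient.queryForObject("Person.getPerson", personId) – the SQL stays visible and editable, which is what makes the framework feel simple.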

Taking the Java Plunge – Selecting and using an IDE

It has taken me about three weeks of exploring to know what my next step in taking the Java Plunge should be.

I have set up both Netbeans (4.1) and Eclipse (3.1) IDEs and tried to get both of them to create simple Hello World JSP applications.

Debian packages for both of them seem out of date, so I have downloaded both from their respective web sites.

Netbeans initially seemed extremely simple, as I immediately understood all the windows, what they did and how to create an application.

However, at the same time I have been exploring exactly how I am going to construct my web applications, and am currently favouring Tapestry (3.0.3) as the correct framework to use. This has a plugin for Eclipse called Spindle, which should aid development, so I have decided to invest the time and effort in working my way through the Eclipse tutorials.

The more I get into it, the more I understand how to create and debug Java applications and what all the different windows mean. Confusing at first – but very useful once you understand some of the concepts.

I think I am going to stick with Eclipse.

Taking the Java Plunge – Finding Out What To Do

That’s it – I just have to start it. Five years after adding it to the TODO list for my home computer network, I have started my first Java web application. I want to try to build a “family tree” application to capture and record my family tree in a database. I will then provide a tool for dumping this data into an ASCII format (probably XML) so that it will not be lost to future generations.

Asking what I needed on the Debian User mailing list gathered the usual helpful responses (particularly because Debian can’t support the non-free Sun licences).

The general message I got was:

  • Use the Sun runtime and SDK libraries – they are the best
  • Choose NetBeans or Eclipse as the IDE
  • On the server side, use Apache, with Tomcat to serve JSPs and servlets

There is a helpful web site which explains how to make Debian packages out of the Sun downloads, so that they get installed according to Debian standards.

UPDATE: The link in the previous paragraph doesn’t work, and is in any case irrelevant since Sun open-sourced Java and Debian put the Sun runtime into its repositories.

Next job – learn the language

Backup and Archiving at Home

I have several computers at home, and it is important that they are properly backed up in order to not lose data. I want to show an example of how this is done, but first a number of preliminaries.

  1. I have decided that backups should, where possible, be placed on a different disk from the source. Thus I should not lose data if I have disk corruption or a hardware failure.
  2. There are certain directories (for example /etc, and the subdirectory mydocs in my home directory) in which I am changing files and would like to keep the changes, so that I can revert, or ensure that when I delete a file a copy is archived for posterity.
  3. I break down my file layout into separate filesystems, and in particular, I have separated out:-
    • the backup directory (well it is on another disk)
    • my home directory
    • certain directories (particularly on my server) which are likely to contain massive amounts of data (such as /var/lib/svn where all the svn repositories lie)
  4. Where possible I am using lvm to manage most partitions as logical volumes, so creation, deletion and resizing of them is easy.
  5. Once a file changes in one of the special directories (such as /etc), the copied file is stored in one of several snapshot directories relating to points back in time. I have:
    1. the latest snapshots
    2. daily snapshots from yesterday – up until one week old
    3. weekly snapshots up to one month old
    4. monthly snapshots up to 6 months old
    5. snapshots older than six months, which are assumed to be queueing for eventual manual writing to CD, to be kept for ever.

So how do I do it?

Firstly, simple backup is done using rsync with the -aHxq and --delete switches. This causes the destination directory (and subdirectories) to become a copy (i.e. a backup) of the source directory (and subdirectories). The -x switch limits this to a single filesystem. Where I need to keep the changes to a specific directory, I also use the --backup-dir switch to write them into the latest snapshot directory.

Archiving the snapshot directory is done daily, just before the backup (so it is actually part of a daily backup script, installed as /etc/cron.daily/backup). The snapshot is turned into the daily snapshot simply by using mv to rename the directory from snap to daily.1 (of course daily.1 should already have been renamed to daily.2 beforehand). Similar scripts for archiving only are placed in /etc/cron.weekly and /etc/cron.monthly.

The interesting trick comes when merging a daily snapshot into an already existing weekly snapshot (or weekly into monthly, or monthly into the CD archive). Using cp -alf just makes an additional hard link in the weekly snapshot to the file already in the daily snapshot (so it happens fast, as there is no file copying). Where a file already exists in the weekly snapshot it is replaced by the link (effectively overwriting the old version); where a file didn’t already exist, a new link is simply created. If the old daily snapshot is removed at this point, this just unlinks the file from the daily snapshot but leaves it in the weekly one.
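The hard-link behaviour can be seen with a tiny experiment (directory names invented for the sketch):

```shell
#!/bin/sh
# Demonstrates the cp -alf hard-link merge between two snapshot dirs.
A=/tmp/demo-daily B=/tmp/demo-weekly
rm -rf "$A" "$B"
mkdir -p "$A" "$B"
echo "snapshot data" > "$A/file"

# cp -alf: archive-copy as hard links, forcing replacement of any
# existing entry in the destination -- no file data is copied.
cp -alf "$A/"* "$B"

# Both names now point at the same inode (link count 2) ...
stat -c %h "$B/file"    # -> 2
# ... so removing the daily snapshot leaves the weekly copy intact.
rm -rf "$A"
cat "$B/file"           # -> snapshot data
```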

So here is the relevant code from the files.

/etc/cron.daily/backup

#!/bin/sh

logger -t "Backup:" "Daily backup started"
ARCH=/bak/archive

if [ -d $ARCH/daily.6 ] ; then
    if [ ! -d $ARCH/weekly.1 ] ; then mkdir -p $ARCH/weekly.1 ; fi
    # Now merge in stuff here with what might already be there using hard links
    cp -alf $ARCH/daily.6/* $ARCH/weekly.1
    # Finally lose the rest
    rm -rf $ARCH/daily.6
fi
# Shift along snapshots
if [ -d $ARCH/daily.5 ] ; then mv $ARCH/daily.5 $ARCH/daily.6 ; fi
if [ -d $ARCH/daily.4 ] ; then mv $ARCH/daily.4 $ARCH/daily.5 ; fi
if [ -d $ARCH/daily.3 ] ; then mv $ARCH/daily.3 $ARCH/daily.4 ; fi
if [ -d $ARCH/daily.2 ] ; then mv $ARCH/daily.2 $ARCH/daily.3 ; fi
if [ -d $ARCH/daily.1 ] ; then mv $ARCH/daily.1 $ARCH/daily.2 ; fi
if [ -d $ARCH/snap ] ; then mv $ARCH/snap $ARCH/daily.1 ; fi

# Collect new snapshot archive stuff doing daily backup on the way

mkdir -p $ARCH/snap
...

/etc/cron.weekly/backup

#!/bin/sh
#	AKC - see below for history

ARCH=/bak/archive
if [ -d $ARCH/weekly.5 ] ; then
    # if any of the files only have one hard link, it needs to be passed on
    if [ ! -d $ARCH/monthly.1 ] ; then mkdir -p $ARCH/monthly.1 ; fi
    # Merge into monthly archive
    cp -alf $ARCH/weekly.5/* $ARCH/monthly.1
    rm -rf $ARCH/weekly.5
fi

# Shift along snapshots
if [ -d $ARCH/weekly.4 ] ; then mv $ARCH/weekly.4 $ARCH/weekly.5 ; fi
if [ -d $ARCH/weekly.3 ] ; then mv $ARCH/weekly.3 $ARCH/weekly.4 ; fi
if [ -d $ARCH/weekly.2 ] ; then mv $ARCH/weekly.2 $ARCH/weekly.3 ; fi
if [ -d $ARCH/weekly.1 ] ; then mv $ARCH/weekly.1 $ARCH/weekly.2 ; fi
...

/etc/cron.monthly/backup

#!/bin/sh
#	AKC - see below for history

ARCH=/bak/archive
CDARCH=/bak/archive/CDarch-`date +%Y`
MACH=piglet

if [ -d $ARCH/monthly.6 ] ; then
    if [ ! -d $CDARCH ] ; then mkdir -p $CDARCH ; fi
    cp -alf $ARCH/monthly.6/* $CDARCH
    rm -rf $ARCH/monthly.6
fi

# Shift along snapshots

if [ -d $ARCH/monthly.5 ] ; then mv $ARCH/monthly.5 $ARCH/monthly.6 ; fi
if [ -d $ARCH/monthly.4 ] ; then mv $ARCH/monthly.4 $ARCH/monthly.5 ; fi
if [ -d $ARCH/monthly.3 ] ; then mv $ARCH/monthly.3 $ARCH/monthly.4 ; fi
if [ -d $ARCH/monthly.2 ] ; then mv $ARCH/monthly.2 $ARCH/monthly.3 ; fi
if [ -d $ARCH/monthly.1 ] ; then mv $ARCH/monthly.1 $ARCH/monthly.2 ; fi

...

UPDATE: As of 26th February 2011 the basic mechanisms shown in this post are still in use. However some of the detail is wrong (the disk layout and partitions) – nothing that detracts from the basic message. See also my recent post about keeping personal data backed up.

Open File Formats

The state of Massachusetts is mandating that all government documents be in an open format. Quite right too. Any government department should ensure that all documents are produced in a form that anyone can read – for ever, and without paying licence fees to any third party.

The problem with the approach that they are taking is that it defines the Microsoft Office XML formats as open. Microsoft also appears to be offering a licence to read these documents that conforms to this open standard. However, I think this openness is illusory and should not be allowed. For me the key reasons are:

  • The licence is extremely tightly worded to imply that, whilst you might be able to develop software to read these formats, you can’t distribute this software to others
  • You can’t write software that writes to these formats
  • The formats are defined arbitrarily by Microsoft and are not guaranteed not to change.

All this means that at some point in the future it could well be that future generations do not have access to applications which can read and manipulate these documents.

There is a standard, the OASIS Open Document Format for Office Applications (OpenDocument), that can be used to exchange documents across the network. The next release of OpenOffice.org (2.0) will support this as its default format.

I would like to encourage everyone to adopt this standard as their default exchange mechanism. If we can build up enough momentum behind it, then a few years down the line we will have a standardised mechanism everyone can use – and hopefully prevent archive material disappearing, never to be readable again.

A New Vision for the Desktop

Introduction

As I start to write this article at the end of March 2005, I have in my head a partially formed vision for how to improve the usability of the Personal Computer desktop. I will use this article to slowly explore that vision and try to turn it into a complete concept. I will do that in stages, so that ultimately this becomes a multi-page specification for the complete vision of a new way for the PC desktop to operate. As I publish each update, I will reset the date of the article so that it reappears at the top of the list of articles on this site.

The current desktop environment has a fairly consistent approach. Each application generally, although not always, has a single window for most of the user’s work. This window has borders, possibly with scroll bars; at the top is a window title bar, a menu bar, and often a toolbar, and at the bottom is often a status bar of some kind.

The overall desktop generally consists of a full-screen area on which further icons are placed, some representing starting points for some activity, others files put there by the user. On one edge is some form of panel with a list of running tasks, a menu to start new ones, some form of notification area (the system tray) and potentially a quick-launch section with the icons of frequently started applications.

Also just as important is the way that each window has a position in terms of its on-topness compared with every other, and that the on-top window is both

  • the one that has the focus for keyboard and mouse input (although there are exceptions), and
  • opaque, so that it obscures what is beneath it.

So although we have a 3D concept, it’s very limited and is more two-dimensional in nature and use.

I think this is basically a function of the limited power of the graphics hardware when these concepts were first put in place. What I want to do is revisit this approach now that we are in a world in which most of the hardware can

  • provide more flexibility in terms of transparency, and
  • make use of 3D perspectives.

So I want to take these additional capabilities and explore how we might make the user’s life easier.

How does the user work?

Document Centric v Application Centric

When Apple first introduced the Lisa, and then the Mac, to the world it established an approach to the desktop in which users were supposed to see documents in folders, and work with them. The application was hidden, and just somehow automatically connected to the documents. Users were supposed to neatly file those documents into a folder hierarchy and never really know that an application existed.

I can’t speak for the Mac community, as I don’t know anyone who uses a Mac these days, but I can speak about many of my colleagues at work who use Microsoft Windows, and they definitely do not work that way. They think application. For instance, if I ask a colleague to give me a copy of his PowerPoint presentation he will often start PowerPoint, open the presentation from within the application and use Save As to copy it onto (say) a memory stick. Now I don’t work that way, but many do. Definitely not the way the original designers thought they might.

Why? Why do people see things this way? I don’t really know, but here are three theories:-

  • In the Windows world you generally have to pay money for an application, and the vendor’s marketing department therefore boosts the importance of the application as opposed to the documents it creates (see how I used the term PowerPoint presentation above)
  • These days, the two key applications for most people are e-mail and the web browser. Neither of these connects directly to documents stored on the desktop.
  • Lining up windows so that you can copy file A to location B is really difficult. It is much simpler to do it from within the application, or using Windows Explorer, where one pane has the tree structure in it and can be used as the destination of a move/copy command.

So in my vision, I think we have to give more credence to the concept of an application (or rather the function of doing something – I’ll talk about that later) as a key driver, although that does not mean that we must forget the document angle either (again, more on that later).

Multitasking

The original concept of users operating on several things at once has led to the development of a desktop in which the user is expected to have multiple windows open at the same time, being able to switch between them at will.

I think the reality is different. I think that normally users focus on only one task at a time. However, it is quite important to understand that they might simultaneously want to monitor some other process in the background whilst concentrating on that one task. It is a sort of monitoring that is more than just noticing that an event has occurred; it might be more akin to continually judging progress whilst nevertheless concentrating on the main activity. Today’s windowing systems insist that you leave some space on your screen for a monitoring window because of the limitations on how keyboard and mouse focus works.

Switching between tasks (or starting a new one) is also an important component of the user experience. Right now, things seem to be inconsistent between applications, in that those with a document focus require you to explicitly save the document to keep changes, whilst those that don’t have a document focus in quite the same way (for instance your mail program) just work.

Locating Information

A key element of efficient working is rapidly locating a piece of information to work on. Traditional methods use the file system and effectively allow the user to build a tree-structured hierarchy to locate the information. But in reality he doesn’t.

The alternative that is being talked about is to allow the user to attach metadata to a file, and then to use a search engine to help locate the information. Having seen the search engine approach in use, I do not believe it works very well. The key problem is finding documents that you know are there, but for which the search criteria seem to be wrong. In this case, browsing is a necessary component.

In analysing my own approach to finding information, I think there are a number of criteria that we use naturally. Let’s explore each of them in the subsections below.

Type of information

In most cases we remember the type of information we are searching for. These types are generally coupled with the application, although not necessarily known that way. So we think of an e-mail message, a document, a slide show, an audio file or a movie (or any other file – such as a saved game …).

Time of creation

First off, I think there are two categories of time that we are talking about. Firstly, there is time for its own sake. Phrases such as last week, last month, or two days ago are all in this category. I would submit that our memories are a little hazy when it comes to remembering time, and that the resolution drops by a factor of about six with each unit back in time. In other words, we can remember today and yesterday, but then the resolution drops by approximately six: we remember this week and last week, then drop to this month and last month, and then to this half year and last half year.

The second mechanism is linking via events (either spot events or ones that cover a period). So we can remember “at the meeting with Customer X”, or “whilst I was working at”.

Subject Area of document

Here, I think, is an area where users do need to have a mostly hierarchical model of subject areas that they can use to file information in.

[Need to think about exceptions to this rule]

Some new concepts

Classes of Task

I think the first thing we need to do is get away from the old concept of applications, and think instead in terms of tasks. Then classify each of these tasks into different types according to the user’s concentration on them. So, as an initial list:-

  • Working on a specific document to create, view or edit it (at this stage, forget whether the document is an e-mail message, writing on a page, a drawing, or whatever). The important point is that there is a focus of attention on a specific area of the screen, and that there need to be some tools with which to manipulate the document. Although the term document is used here, I think it can also cover other media. For instance, playing a game, or watching a DVD, would also fit into this class.
  • As a specific enhancement to the above task, it may be necessary to copy information between two documents, or follow instructions from one document whilst working on another.
  • Filing away a document that has been worked on, or finding a previously worked-on document to work on it some more. This may include looking at lists of potential items to understand the relationship between them (for instance a list of messages in an e-mail conversation thread). Tools will be required to control the navigation, or to search for the item.
  • As a special case of the above, switching between a limited number of previously active tasks, maybe triggered by an event (e.g. an e-mail arrives, so you read it, reply and then switch back to whatever you were doing before)
  • Doing something as a background activity, with occasional need to intervene or monitor progress (e.g. playing music, or downloading a set of files from the internet).

Actions and Tools

Selecting from a hierarchy

There is a frequent requirement to search for information that is in a hierarchy. I am a firm believer that

  • The user is much more able to find what he wants if the complete hierarchy is exposed at the beginning (i.e. no collapsible trees). [Think about a display mechanism that allows this]
  • This hierarchy must be related to what the user expects (for simple hierarchies) or what the user defines himself (for complex hierarchies)
  • The hierarchy is about subject areas and NOT about type or time (including events) – because, as we see below, the computer should also maintain these links separately.

Standard

A number of actions will be standard amongst the tasks above. Some of them will relate to switching between tasks, whilst others will be specific to a given task (e.g. printing the document currently being worked on).

[Need to expand this further]

Task Specific

Tasks will need to define their own actions.

The individual and his identity to the outside world.

In today’s world the computer is not just a tool for undertaking solo tasks; it is also a tool for communicating with others. But there is a subtlety to this. It is no good assuming that, just because I am sitting in front of my computer,

  • I want the whole world to know that I am there
  • Everyone knows me by the same identity

Each individual will hold a number of identities, and will also have a list of his contacts (other individual/identity combinations) for each identity, grouped together into classes.

He can enable his presence to be known by class of contact.

Locating Information

As we have discussed above, there needs to be a standardised way of storing information so that it can be found again. The key concepts that we need to link together are:

[Local]

  • Tree structured subject area – much akin to the folder hierarchy used today
  • Time linked to events (in a calendar)
  • Type of information (mail message, document, image …)

[Remote]

This is a different problem and needs wider thought.

Putting it all together

The start

There will be a process, which I will not cover in this article, of starting up the computer, connecting it to a network, and getting it to a point where a known individual is sitting at a screen, keyboard, mouse and other peripheral combination, ready to start work.

The approach to task selection seems to me to be really dependent on whether you are creating something new, or locating something old.

As we have seen from information identification, new documents start with application selection.

The focussed task

The fundamental concept above is of normal focus on a single document, with a set of tools with which to work on that document.

I think that the pictorial representation of the document related to the focussed task should use the full screen: no standard borders, scroll bars, menus or toolbars, with maximum space given to the document. If any dimension of the document is smaller than the screen, then the border will be primarily black (except for transparency effects). If the document is bigger than the screen then, rather than scroll bars, a standard mouse drag should be used to move the document around.

Tools for manipulating the document, or for switching to another task, should be laid out in a visible window above this full screen (perhaps slightly transparent, so that wherever it covers a key aspect of the main document that can still be seen). However, although it remains above the focussed window, keyboard and mouse input remains directed at the full screen except when the user is obviously manipulating the controls.

[Need to consider the alternative of a panel down the side of the screen that pops out when the mouse is pushed hard against the side of the screen, but disappears again when the mouse is moved away]

Where possible this should be like a control panel, with buttons to press or sliders (where an analogue input is required).

Software Patents are Bad for Europe

Since man first invented the wheel, society moves forward technologically by inventors standing on the shoulders of those who came before. This advance in our knowledge has improved our lives immeasurably, so much so that society wants to encourage inventiveness, by rewarding those that invent new things a monopoly in that invention (a patent) in exchange for the knowledge that future generations can build upon.

It is important to understand this clear quid pro quo between society and the inventor. Based on that understanding, it is not unreasonable to expect that, if the patent grant is going to generate a great deal of money for the holder, society should expect a similar degree of inventiveness for others to build upon.

This to me is the crux of the problem with grants of patents on business methods or software functions. (I use the word function here because the alternative – the actual software itself, which does take considerable time and effort to get right – is already protected by the monopoly rights of copyright. Defining functionality is a comparatively trivial exercise.) The level at which patent protection can be applied for is several thousand times simpler than would be needed in a fully functioning business or a substantial software program. Whilst a physical product can be broken down into smaller patentable components, the ratio is nowhere near as great.

This leads to two consequences. Firstly, and this is the most important point, to achieve anything useful in writing a software program you could potentially be affected by thousands of patents on the little individual functions making up the software. Secondly, these little individual functions do not provide anything like the benefit to society that would be appropriate for the monopoly position granted to them.

It is also worth asking why patent protection is being considered at all. Given that we already have copyright as a mechanism for society to grant a monopoly to encourage invention, why do we think that the patent approach is needed as well?

It costs money to obtain a patent. This militates against the small organisation spending the time and effort to patent all the various small functions that go to make up any useful software program. By contrast, large global corporations can afford to invest at that level. The consequence is that only the large multinationals can really protect themselves via the patent route.

Considering this from a European perspective, all of the substantive software companies, with the one exception of SAP, are American. Thus allowing patents on software functions in Europe can only harm Europe.

But there is a broader problem. Open source software, developed by thousands of individual contributors around the globe, can – for the first time – compete against the monopoly stranglehold that Microsoft has on the industry. Unfortunately, open source software has a real problem. It competes on technical excellence spread by word of mouth and, unlike normal commercial enterprises, does not translate that technical advantage into money. This prevents it either from entering the patent game or from spending the cash to lobby governments to take decisions in its best interest.

Monopolies are not in the best interests of society (even more so when the monopoly is based on the other side of the world from your society). That is why we have many, many mechanisms to prevent monopolies from abusing their monopoly powers. With open source we have another mechanism that can fight against these monopolies. But allowing software patents could all too easily destroy that weapon.

Debugging is my pleasure

About a month ago I decided the time had come to find out why, when I attempted to blank a CD in my CD rewriter, cdrecord (the program I was using to do this) hung – and then could not be killed off, because the operating system thought it had outstanding I/O in progress.

This meant getting down to a copy of the Linux source code, building a system with some debug statements in it, and finding out what was going on.

It was a hard three weeks, but I eventually proved that there was a hardware problem with my drive. I must say, it was one of the most satisfying activities I have undertaken recently.
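For readers curious what such a hang looks like from user space: a process blocked on outstanding kernel I/O sits in uninterruptible sleep (state "D" in ps output) and ignores every signal, including SIGKILL, until the kernel completes or abandons the I/O. A minimal sketch of how to spot one (the cdrecord process name is just this story's example; any command stuck on dead hardware behaves the same way):

```shell
# List processes in uninterruptible sleep (state begins with "D").
# A process in this state is blocked inside the kernel waiting on I/O,
# and even SIGKILL has no effect until that I/O completes or is aborted.
ps -eo pid,stat,comm | awk '$2 ~ /^D/'
```

Had the drive responded, the outstanding I/O would have completed and the kill would then have taken effect; with faulty hardware the request never returns, which is consistent with the process appearing unkillable.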

My Open Source Philosophy

It may seem strange that someone who has made his living for over 35 years in the software business, with at least 15 of those years in charge of a product for which we charged significant licence fees, should advocate the open source movement and the supply of software for free. This paper clarifies, and then justifies, my stance on this. I hope that in doing so I may add to the debate on the subject by showing why I believe that, despite the free availability of software, there are still significant revenues and profits to be made by those who wish to supply solutions to the business community for money.

Several years ago I was a strong advocate for Microsoft. I was responsible for establishing a PC-based office automation environment within my company, and I saw Microsoft driving an important marketing effort, advocating a move from the horrors of the DOS environment to the ease of use of the WIMP (windows, icons, menus, pointer) environment, with a pricing strategy that beat its competitors easily. Two deals in particular spring to mind. The first was bundling networking within the operating system, where previously we had had to purchase a separate licence (DECNET); the second was an office bundle at a price not much above that of a single application, where previously we had been purchasing these items independently. Then, as part of my role setting future direction for a product I was responsible for, I became a beta tester for Windows 95 about 18 months before it was released.

So why do I now think differently? The turning point for me came when I discovered Linux and installed it on my PC at home. I had watched Windows 95 move through new releases which added a small amount of new functionality but required me to pay for another upgrade licence; I had watched licence conditions change so that key office software from work was no longer available to me at home; I had paid good money (not to Microsoft) for a mail/news reader that didn't quite do what I wanted, with no way to modify it; and, most importantly, I was suffering from system crashes with no way to solve the problem. I realised that in Linux I had found a source of free software and a mechanism for getting complex crashes fixed.

Back in the late 1980s, I had been actively involved in trying to work out how to improve the quality of the code my company produced. I was directed to, and read, Zen and the Art of Motorcycle Maintenance. To me it taught an important lesson: quality cannot be stitched on to the side of software; it has to be built in by each individual working on it. What open source does, by exposing the insides of software to peer review, is encourage the producers of the code to consider quality as they go along, and also enable code that does not measure up to be seen and then improved.

Unlike some, I do not despise people who wish to make a living selling software. I do regard those who abuse their monopoly power and break the law in order to keep or improve their competitive position as reprehensible, and I expect the authorities to take appropriate steps to prevent further occurrences and to penalise those responsible for their behaviour. But that aside, I regard my use and support of open source as a competitive issue, one in which I support the supplier who best meets my needs now and into the future.

From this perspective there is an interesting balance here. On the one side I get a lot of functionality for free – not just the kernel, but the wealth of applications that come with it. But the downsides are also not insignificant: I cannot run many of my excellent games (although some do surprisingly well under Wine), configuration of any particular feature can still be a tortuous process (particularly for newer peripherals, like my USB Palm Pilot :-)), and fonts don't always perform well (particularly in conjunction with printing). What ultimately drives me towards open source is that by giving something of my abilities – from writing software or documentation to simply searching for bugs or criticising it – I am both giving something back for the benefit I have received from the community and tipping the balance towards a longer-term, better, cheaper solution to my computing needs.