Monday, October 29, 2007

Nokia 770 backup

The ssh package for the Nokia comes with sftp for file transfer. My first thought was to do backups by creating a big tar file and using sftp over WiFi to get a copy to my desktop. This is not an ideal solution, as one needs to create the whole archive before starting the transfer. That means you have to have enough free space to hold the archive. Now, the archive could come in at less than half of your space, because tar doesn't waste any space in the last block of each file, and the whole archive can be compressed.
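
For reference, that approach amounts to something like this (the paths and host name are made up, and it assumes the tar on the device can compress):

    # on the Nokia: build one compressed archive of the home directory
    tar czf /media/mmc1/backup.tar.gz /home/user
    # then push it to the desktop over WiFi
    sftp desktop
    sftp> put /media/mmc1/backup.tar.gz
    sftp> quit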

But the ssh package also comes with scp. The -r option recurses directory trees, and the -p option preserves file modes and times. The -q option eliminates the messages about what file it's working on, how far it has gotten, and how long it thinks it will take to finish. I've no idea how long it really takes. Just plug the Nokia into AC power, connect to the backup machine via WiFi, start the scp, and call it a night.
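
So the whole backup is one command run on the Nokia (the host name and directories are placeholders):

    # copy the home directory to the backup machine, recursively,
    # preserving modes, with no per-file progress chatter
    scp -rpq /home/user backuphost:/backups/nokia770/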

On the backup machine, which runs Linux, i use my mcmp program to detect files that are, in fact, identical to files brought over in previous backups. It can create hard links from one to the other, so the backups only take filesystem space for files that actually changed. For me, that's not a lot. Mostly what i do is copy books and audio to the Nokia, and reading these things does not cause the files to change. I really should release mcmp. It's fast and reliable.
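
mcmp itself isn't released, but the idea can be approximated with standard tools: compare each file in the new backup against the same path in the previous one, and hard link the ones that are byte for byte identical. A rough sketch, with made-up directory names, assuming both trees live on the same filesystem:

    #!/bin/sh
    old=/backups/nokia770/2007-09-28
    new=/backups/nokia770/2007-10-29
    cd "$new" || exit 1
    # for every file in the new backup, if the old backup has the same
    # path with identical contents, replace the new copy with a hard link
    find . -type f | while read f; do
        if [ -f "$old/$f" ] && cmp -s "$old/$f" "$f"; then
            ln -f "$old/$f" "$f"
        fi
    done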

The desktop computer is also backed up. I plug in an external USB hard disk and tell it to make a copy. When it's done, i unmount the backup drive and power it down. That way a power spike can't take out both the primary and the backup. Single files can be restored, if needed.
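
The copy itself is nothing fancy; something like rsync does the job (the mount point is a placeholder, and rsync is just one reasonable choice of copier):

    mount /mnt/backup
    rsync -a /home /mnt/backup/desktop/   # -a preserves modes, times, links
    umount /mnt/backup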

Do you have a backup plan for your computer(s)?

Wednesday, October 10, 2007

Word as reader

Have i mentioned how terrible MS Word is as a document reader? Today? A couple of starting caveats. First, i'm using Microsoft Word 2002 SP3. Second, this isn't just Microsoft bashing, as fun as that is. Many modern word processors suck as document reader applications. And yet, if someone writes a document in a proprietary format like Word, you expect that somewhere down the line, someone will use Word to read the document. It might be the author, for proofreading. It might be a prospective employer, who needs to read your resume (or is it CV now - Latin seems to be making a comeback. Is that because of Harry Potter?).

So, what is it, exactly, that is so awful? Is it Page Mode vs. Normal? Is it the zoom level, and having to zoom in when dealing with print-resolution graphics, then zooming back out to cope with the text? Is it dealing with objects that don't fit on the screen?

No. It's the idea that the Page Down and Page Up keys move your insertion point, and any screen movement is incidental. So if you use the scroll bar, then forget and press Page Down, the view typically jumps right back up.

I'd rant some more (it's so much fun), but there's little more to say.

Tuesday, October 09, 2007

Best Practices

A colleague had this collection of PDF documents on various aspects of building software. One, entitled High-level Best Practices in Software Configuration Management, talks about how to get the most out of your source control system. It was written at Perforce Software, Inc., and though it tries to be general in nature, it reflects their product, which i've never used. It was written in 1998, and it was already obsolete then.

The paper talks about branching and code freezes and codelines and workspaces and builds, and the processes that an organization must have in place to cope with one serious problem: merges.

So, in the old days, one used a source control system, like SCCS or RCS, and of course others. In these systems, the source code cycle steps through these points (an RCS sketch follows the list):

  • check out the code with an exclusive edit lock
  • edit the code, and test it
  • check the code back in.
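
With RCS, for example, that cycle looks like this (the file name is invented):

    co -l foo.c    # check out with an exclusive lock
    vi foo.c       # edit and test
    ci foo.c       # check the code back in, releasing the lock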


This works OK, as long as you have only one developer, and one workstation for that developer. But the moment this isn't true, at some point in development one developer will have a file locked that another developer wants to edit. So either the second developer waits for the lock to be freed (perhaps by doing something else), or she asks the first to add her changes too and check them in. Or perhaps even more creative solutions are explored.

But often, the second developer is working on a version of the code that will be released at a different time than the version the first developer is working on. Same application, just code that won't be released for another month. Then each developer needs their own set of sources. The usual way to do this is for at least one developer to create a branch and work there. Now, when the first developer finishes his release, that code is checked in.

The second developer has a new problem. The code she started with isn't the code that's now in production. Changes made for the first release aren't in the code set for the second release. These changes need to be merged into the new set. The key point in this Best Practices paper is: get the right person to do the merge. It's important to do this step right because it is error prone, tedious, and did i mention error prone? I've done this work. I've even been the right person for the job. I volunteered for this work because I wanted it to be done right. It wasn't that my efforts would be lost if it wasn't done right; it was that the team's progress could be lost. And no one else on the team seemed to understand the problem. And yet, the right "person" to do the merge isn't a person at all.

As early as 1993, i was using CVS. Here, the computer performs the merges. It's fast and reliable. If the merge detects a conflict, it marks this in the code and lets the developer fix it. But because of this simple change, the whole source control flow changes. Now it's like this (a CVS sketch follows the list):

  • check out the code
  • edit the code, and test it
  • update the local copy with changes from the repository
  • check the code back in (the working copy stays checked out).
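
In CVS terms, with an invented module name, that's:

    cvs checkout myapp    # get a working copy; nothing is locked
    vi myapp/foo.c        # edit and test
    cvs update myapp      # merge in changes committed by others
    cvs commit myapp      # check in; the working copy stays checked out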


Now, the only time that the source code is locked is for a few seconds while the code is being checked back in. Since the code is nearly never locked, any developer can edit whatever they want whenever they want to. The merge happens during the update.

A merge conflict happens when both changes touch the same line of code. When that happens, the two versions of the code are marked in the updated file. The developer edits the file and figures out whether one version, the other, or some new code is needed to resolve the conflict.
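
When cvs update hits a conflict, the working file ends up with both versions between markers, something like this (the contents are invented):

    <<<<<<< foo.c
    my uncommitted change to this line
    =======
    the change somebody else already committed
    >>>>>>> 1.42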

One might think that since updates are performed all the time, developers would be constantly fixing merge conflicts. This is not the case. If two developers change the same line of code, it usually means they are working on the same problem. If they aren't working on the same problem, there are seldom any conflicts.

So, in a multiple release system, branching is still needed. But the merging process can be automated, and it's long past time we stopped doing it by hand.

So, this is old news, right? Why rehash it now? Because vendors still sell obsolete software. For example, i work for a company now that uses Serena's Dimensions product. This product, based on PVCS, is an old-style edit-with-lock and manual-merge system. It has some nifty workflow stuff layered on top, but the hard problems are still hard, and they color the way any work is done. To wit, our current release is projected to be at least two weeks late because it was noticed too late that a merge would have to be made. (Sorry, Serena's site seems to be dehanced for Firefox with a pointless entry gate screen. Entry gates went out in about 1995.)

To be fair, i have no idea how Serena thinks an issue like "Development processes involve time-consuming and unreliable manual hand-offs" can be solved with their product. As near as i can tell, if you want overlapping releases, you are stuck with manual merges.

So who bought into this system? Why aren't we using CVS or SVN, which are free? Did they consult with senior developers? I wasn't consulted. These are Best Practices. It must be some definition of Best that i've not heard.

Thursday, October 04, 2007

Alcohol

Very funny xkcd.com cartoon.

Since alcohol slows reflexes, one would think that it would slow thinking in general. As programming requires some of the most intense thinking anywhere, one would expect that imbibing would categorically be a bad thing. Yet there are two pieces of evidence that i'm aware of that contradict this.

I worked with a guy who preferred working from home, over a slow modem, to working in the office. At home, he could work with a six-pack of beer (brand not specified). He said it was less painful. Let's say he sees a bug. When sober, his reaction was Damn! Another bug. But after a beer or so, his reaction was Ha ha ha, another bug, and he'd get right to it. So, apparently, psychology can be an important factor in programming.

But i also did an experiment, mostly by accident. In the mid 80's, i was at this keg party of mostly geeks. The thing about keg parties is that i'm a really cheap drunk. It just doesn't take much to get a buzz. So i can't really count cups (they were plastic, and so can't properly be called glasses) after the first one. I've no idea how much i had. Enough so the Universe spun, but not enough to bring me to the ground. Anyway, this guy, whose name i don't recall, was talking to me, and i mentioned this graphing package that i'd just written. We had forty-something devices capable of plotting results, and they were all different. Our plotting packages each only knew about three or four of these, so you got your output on one of those. If you wanted output on a better one, you'd start from scratch using another package. So, i invented an intermediate format, and a series of filters that could translate to and from it. This guy said he had access to a new laser printer, and had the manual for it. How long would it take to create a filter? I'd have said a couple of hours. You know, just take a filter that looks like it, and modify it. He wanted to do it right away.

Now, my judgement was nearly entirely impaired, so despite the time, maybe two in the morning, we started right away. We staggered to my office and started hacking at it. I don't remember the details. But when i got in to work later in the morning, the resulting filter still worked. Further, the code wasn't bad. Over time, bugs eventually show themselves. But no maintenance was needed for this filter. Ever.

Windows ME really was pretty awful. But it seems more likely that alcohol was involved in the decision to release it rather than in development.