Thursday, May 03, 2007

Fields, Controls, & Context

In Suneido, a field has an associated control. For example, if the field is "category" the control might be a pull down list of categories.

For the most part this works well. The problem is that in certain contexts you want a different control or to modify the behavior of the control slightly.

For example, when you are entering new categories it doesn't make any sense to have a control that only lets you choose from existing ones. Actually, you probably want the opposite - to validate that what you enter is not an existing category (i.e. a duplicate).

These are the two main contexts - creating and using.

Once you use a category, possibly as a foreign key, you don't normally want to delete it. But business changes and you may not want to use a category any longer, in which case you don't want it showing up in lists to choose from. We would usually do this by adding an "active/inactive" flag to the categories.

However, when you go to view or modify old data, you don't want it to become invalid because it uses a category that was later marked "inactive".

One solution is to record the date when the category became inactive and so it can be valid for data prior to that date, but invalid after that date. But this means either you make the user enter the date when it became inactive (extra work for the user) or use the current date when they mark it as invalid (but that may not be the right date). The other problem is that if the record where it's used either doesn't have a date, or has several dates, then it becomes hard to apply this. (We use this solution in certain cases where it "makes sense" to users e.g. a termination date on an employee.)

The solution we normally use is to make the inactive categories valid in existing data, but only allow active categories on new data. If there is a list to choose from, it would only show the active categories, on the assumption that you're picking for new data.

This splits the "using" context into "using on existing data" and "using on new data".

But when we come to reports (or queries) there is another problem. If you are printing a report on old data, you want to be able to select currently inactive categories. But at the same time, if you pull down a list of categories on the report options you don't really want to see every category that has ever existed. Most of the time you're printing current data and you only want to choose from the active categories.

Our normal "compromise" on reports is that lists to choose from only show active categories, but the validation will allow you to type in an inactive category.

Another alternative might be to add an option (e.g. a checkbox) to the pull down lists that lets you show/hide inactive categories.

If you use an "inactive as of" date, you still have this problem. You can't use the transaction dates because you're printing a range of dates and different categories will be active for different transactions.

So we now have four contexts:
  • creating - no list to choose from, duplicates are invalid
  • using on "old" data - inactive values are valid for existing data
  • using on "new" data - only active values are valid
  • using in report selections - allow entering inactive values
"creating" requires a completely different control. The various forms of "using" can be the same control (with a list to choose from active categories), but with different options that affect what is "valid".

Currently, Suneido associates a field with a control by the field name. We normally make the "default" control one that handles the "using on old/new data" context. For creating and reports we rename the field to get different controls.

A better approach might be to make the context more explicit. You could allow associating multiple controls with a field name, based on the context. Or the control could be aware of its context and adjust its behavior accordingly. (We partially do this to handle the "using on old data" versus "using on new data" contexts.)

I'm curious how other user interface systems handle these issues. It wouldn't be hard to get a copy of e.g. QuickBooks and see how they deal with them (assuming they do). It's not something that I have seen written up in any of the user interface guides that I've read.

Monday, April 30, 2007

Embedding Google My Maps

Here's a useful tool for embedding the maps you create with Google My Maps:

http://www.dr2ooo.com/tools/maps/

Here's an example:

http://sustainableadventure.blogspot.com/2007/04/eagle-creek-paddle.html

The embedded map actually uses the dr2000 web site so I'm not sure about the long term stability. Presumably it's not too hard to do this yourself, but I haven't figured that out yet.

Wednesday, April 25, 2007

Custom Google Maps

Google Maps recently added the ability to put points and routes onto Google Maps and save the results. It's really easy to use. You can even attach photos or videos. (Note: you'll need a Google account; if you use Gmail you already have one; if not, it's free and easy to register.)

For example, here's one of my regular running routes:

Saskatoon Running Route

Of course, it can be used for a lot of other things - check out some of their featured maps.

Friday, April 20, 2007

Mac + printers

Ever since I got my Mac Mini I've been struggling with the printer issue. I have an Epson 2200 hooked up to a Windows machine. It took a fair bit of research to figure out how to connect to it from OS X, but I finally managed it. It is supposed to be easy, but it looks like a lot of people have problems. But I could only use the Gutenprint (formerly Gimp-Print) drivers which don't support all the features of the printer.

I downloaded the latest Mac OS X drivers from Epson, but I couldn't see how to use them. Finally I found out that you can't use USB drivers on a networked printer. This seems like a strange distinction - on Windows I can use the same drivers whether the printer is connected directly or networked. Maybe it works if the printer is shared from another Mac - I don't have two Macs to try it.

I thought a network print server might do the trick but from what I could find out, I'd still have problems. It looks like Apple's new AirPort Extreme might handle it a bit better, but it still wouldn't let you run the Epson utilities (ink level, cleaning, etc.). And although they claim it's Windows compatible I wouldn't be surprised if there were issues.

In the end I physically moved the printer and connected it directly to the Mac. Rather than fight with sharing it from the Mac and somehow connecting from Windows I just went out and bought a new printer for the Windows machine. (I wanted the large format 2200 on the Mac since that's where I plan to print photo enlargements from.) I bought an Epson R260. (Epsons may or may not be the best, but I'm familiar with them.) It amazes me that a printer that has a resolution of several thousand dpi and produces 1.5 picoliter droplets costs only $120! I realize they make their money on the ink, but it's still amazing price/performance relative to a few years ago. Of course, I'd like the newer Epson R2400 to replace the 2200 but that'll have to wait.

It seems strange that Parallels and VMware can virtualize an entire computer, but for some reason OS X printer drivers are tied to hardware. I'm sure there are "good technical reasons" for this, but it seems pretty crappy to me. The Mac seems to lose to Windows on this front.

Friday, April 13, 2007

Three Stages of Design

I was listening to a podcast about software design and they were quoting Steve Jobs. So this is third or fourth hand and probably garbled from the original.

Stage One is where you are new to a domain and it seems simple and you design a simple solution. But the simplicity is really a lack of understanding so your design, while simple, is not very good.

Stage Two is where you see the complexity of a domain and you end up with a complex design. It might handle lots of things but the complexity makes it hard to learn and use.

Stage Three is where you figure out how to make a simple design that still addresses the complexity of the domain. This is the elegant solution. It doesn't have every conceivable feature, but it handles the important stuff for a majority of users. For example, the iPod. Lots of other music players have more features, but the iPod hits that sweet spot balancing simplicity and features. (100 million buyers testify to that.)

Back when my company was doing custom software development for a wide variety of domains, a lot of our products were Stage One. A few progressed to Stage Two. Even now that we're focused on one domain, our product is still definitely Stage Two. Suneido, our development tool, has some Stage Three aspects but doesn't really qualify overall.

Stage Three is hard. And there doesn't seem to be any formula for achieving it.

Friday, March 30, 2007

ETech 2007 Last Day

We started off with a few interesting keynotes. One on Adobe's new Apollo platform - an alternative desktop runtime for web apps (HTML, CSS, JavaScript, Flash). It looks pretty neat, especially the features for running apps when you're offline (not connected to the internet). But is HTML/CSS/JavaScript the best way to write apps? I'm not sure.

Next, Google gave a presentation on their project to add 1.6 MW of solar power at their headquarters. They also talked about other environmentally friendly practices at Google. Again, it seems Google is trying hard to not be evil despite their huge size.

After the break I went to a session by Andy Kessler on how Moore's law will soon be "invading" medicine - leading to better and cheaper health care. He was an entertaining speaker.

James Duncan's session on JavaScript and Zimki was quite interesting. He talked about some features of JavaScript that I wasn't aware of. Zimki is a JavaScript server and web app framework with some novel features. Fotango offers paid hosting for Zimki, but it will also be released open source in the next few months.

At lunch I discovered a new coffee shop near the hotel - Brickyard. Although Starbucks is a good default, I like to find local shops especially if they have better coffee! It didn't hurt that it was another beautiful day and I could sit outside in their courtyard and enjoy the sun.

The sessions were thinning out by the afternoon. I went to one on why you should try to design your web app so it could be run as a text adventure (sounds crazy, but actually made some sense). I hadn't recognized the presenter's name, but he turned out to be the guy who had presented a Rails game (Unroll) at OSCon. He's an interesting character so I was glad I'd gone to this session although it ended up being quite short.

My last session was by Forest Higgs on building your own 3D printer. Commercial 3D printers still cost tens of thousands of dollars to buy and require expensive consumables. You can now build your own for a few hundred dollars. Forest has his own design but he also talked about the Reprap project. The goal is not just to make an open source 3D printer design, but also one that can replicate (most of) itself. (They use microprocessors which obviously can't be manufactured by a home machine yet!) He also talked about the implications of widespread grass roots manufacturing capabilities. Thought provoking.

And that was it for ETech 2007. Although I heard a lot of grumbling that it wasn't as good as previous years, I still think it was worthwhile. Lots of new ideas that will help fuel my brain.

I rounded off the day with supper at The Fish Market. I couldn't be bothered to wait for a table in the restaurant (long lineup) so I grabbed a table in the bar with a great view of the water. There was a limited menu in the bar but the fish and chips was the best I'd had in a long time, the waitress was cute and cheerful, and the sunset was beautiful - what more could you ask for!

Wednesday, March 28, 2007

Amazon S3 with cURL?

As I've talked about in previous posts, I've been searching for a good way to access Amazon S3 from Suneido (to use for backing up our customer's databases).

The SmugMug presentation recommended using cURL. We already use cURL for a variety of tasks (such as FTP) so we'd be happy to use this option. But S3 requires requests to be signed with HMAC-SHA1, which I didn't think cURL could do. Maybe you can calculate the signatures separately and use cURL just for the transfer. I'll have to look into this.
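If I remember S3's REST authentication correctly, the signature is just an HMAC-SHA1 of a canonical string, base64 encoded - easy enough to compute separately and hand to cURL as a header. A sketch (the credentials and bucket name are made up, and the canonical string here is from memory, so check it against the S3 docs):

```python
import base64, hashlib, hmac
from email.utils import formatdate

# Made-up credentials and resource, for illustration only
access_key = "AKIAEXAMPLE"
secret_key = "secretexample"
resource = "/mybucket/backup.db"

date = formatdate(usegmt=True)
# S3's canonical string: verb, Content-MD5, Content-Type, Date, resource
string_to_sign = "GET\n\n\n%s\n%s" % (date, resource)
sig = base64.b64encode(
    hmac.new(secret_key.encode(), string_to_sign.encode(),
             hashlib.sha1).digest()).decode()

# cURL then only needs to do the transfer:
print('curl -H "Date: %s" -H "Authorization: AWS %s:%s" '
      'https://s3.amazonaws.com%s' % (date, access_key, sig, resource))
```

So the hashing doesn't have to happen inside cURL at all - any language that can do HMAC-SHA1 can build the header.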

Day 3 at ETech 2007

Another good day at ETech. I've found a nearby Starbucks that is in a lovely courtyard and isn't very busy. This is where I start my day with a Grande Latte and a little peace before the mind-storm.

The keynotes were all good, but I really enjoyed Danah Boyd's presentation. It's hard not to appreciate someone who is so obviously passionate about their work. I also get a kick out of geeks who are brave enough to dress idiosyncratically. Personally, I can't imagine intentionally doing something to make myself stand out more! Except in my work, of course :-)

Lunch was outside in the Seaport Courtyard again. The weather was wonderfully sunny and warm (especially compared to yesterday's horrendous winds). After lunch I sat by the pond in Seaport Village for a quiet coffee before heading back for afternoon sessions. The sun was shining on the water and the ducks were entertaining. Mama duck brought her three ducklings within inches of my feet. I wished I had my camera.

The presentation on SmugMug's use of S3 was good. This year a number of the sessions have been thinly disguised marketing pitches, but this was a good, seemingly honest, first hand experience report.

Next was a combined presentation on Google's MapReduce and Hadoop - an open source implementation of MapReduce that is part of the Apache Lucene search engine project. Although map and reduce are familiar from functional programming, their application to processing large data sets with clusters of computers is pretty neat. In the past, this would have been out of reach of most of us who don't have access to clusters, but now you can "rent" as big a cluster as you want from Amazon EC2 (or you will be able to once EC2 is publicly available).
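The map/reduce idea itself is tiny - what MapReduce and Hadoop add is distributing it across a cluster. A single-machine sketch of the canonical word-count example:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(documents):
    # map: emit a (word, 1) pair for every word in every document
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # sort/shuffle, then reduce: sum the counts for each word
    for word, group in groupby(sorted(pairs, key=itemgetter(0)),
                               key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

docs = ["the quick fox", "the lazy dog"]
counts = dict(reduce_phase(map_phase(docs)))
```

In MapReduce proper, the map and reduce calls run on many machines, with the framework doing the sort/shuffle and fault tolerance in between - that's the part that's hard to do yourself.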

The last two sessions of the day that I attended were from Microsoft and IBM on various research projects. One of the Microsoft projects was Baku, a "visual" programming environment for kids. I wonder if some of the ideas could be applied to allow end-users to do more complex "programming" in applications. One of the IBM projects was Koala - basically a macro recorder for the web, with a unique slant towards sharing the resulting scripts on a wiki. Unfortunately, neither of these projects is publicly available yet.

I had supper at the Edgewater Grill in Seaport Village. The food and service were nothing great, but I enjoyed watching the sunset over the bay.

After supper I stopped in at the MAKE Fest but they only had a handful of exhibits so I didn't stay long.

Ideas from Jeff Jonas

There were some powerful ideas in Jeff Jonas's talk on analytics that I think are quite reasonable to implement in a simple form. One is the idea of "persistent queries".

Most of Jeff's work has been on identifying people - e.g. terrorists and criminals. For example, you get a tip that a criminal is flying into an airport under a certain name. So you query the passenger lists, but you don't find anything. You're not sure when he's coming in, so you could keep querying every day or hour, but that's not really practical. Instead, you make the query "persistent" so that if new data arrives that matches your query, you will be notified.

The naive way to implement this is to simply run the query against incoming data. But that doesn't scale. The more persistent queries you have, the slower it will get to enter new data. You can do it as a batch process e.g. nightly - Jeff calls this trying to "boil the ocean" - but it still doesn't scale well and it also doesn't provide the results in real time.

Instead, you turn the problem around. You store the persistent queries as "data", and you treat the incoming data as "queries". So each incoming record requires one "query" regardless of the number of persistent queries. Very cool.

Obviously, there are some issues here. One is that a persistent query probably involves only a few attributes whereas the incoming data will have many attributes. So you're not doing a normal exact match. And you probably will need ways to expire queries.
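Here's a minimal sketch of the inversion, exact-match only, with made-up data - a real system would also need the fuzzy matching and query expiry:

```python
# Persistent queries stored as data, indexed by (attribute, value).
# Incoming records act as the "queries" against that index.
from collections import defaultdict

index = defaultdict(set)   # (attribute, value) -> set of query ids
constraints = {}           # query id -> dict of required attributes

def add_query(qid, wanted):
    constraints[qid] = wanted
    for attr, value in wanted.items():
        index[(attr, value)].add(qid)

def incoming(record):
    # one index probe per attribute of the record, regardless of
    # how many persistent queries are stored
    hits = set()
    for attr, value in record.items():
        hits |= index[(attr, value)]
    # a query matches only if all of its constraints are satisfied
    return [q for q in hits
            if all(record.get(a) == v
                   for a, v in constraints[q].items())]

add_query("tip-1", {"name": "J. Doe", "flight": "AC123"})
matches = incoming({"name": "J. Doe", "flight": "AC123", "seat": "14C"})
```

The cost of entering a new record stays roughly constant no matter how many persistent queries are waiting, which is the whole point.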

You might think that this is cool but you don't need to search for terrorists. I think it can be more broadly applied. For example, let's say you're a real estate agent and someone comes in asking for a certain type of property. You do a search but you don't find anything. So you make the query persistent, and a few days later you get notified of a new property that's come available. You call your client and make the sale.

Tuesday, March 27, 2007

ETech 2007

Just finished the second day (of four) of ETech.

I've heard some complaints that it's not as "good" as it used to be. The problem may be that a lot of this stuff isn't "new" anymore. When they're writing about it in mainstream magazines, you know it's no longer cutting edge.

I was signed up for the tutorials on Monday, but Kathy Sierra's was canceled (see her blog for more on the craziness behind this). They offered the option of upgrading to the Executive Briefing at no extra charge so I decided to do that since the alternatives to Kathy's tutorial didn't excite me. I was sorry to miss out on the other tutorial by Avi Bryant since DabbleDB seems to be such a great product. On the other hand, the briefing included a lot of speakers and topics that I enjoyed. However, a number of the briefing speakers were also included in the regular program so there was a certain amount of duplication.

Today (Tuesday) was the start of the actual conference. From experience I've learned that the best way to pick sessions is to go by the speaker rather than the topic. That doesn't work when you've never heard of the speaker, but it's a useful heuristic. For example, I'm not all that interested in TPM and DRM but I knew that Cory Doctorow would be interesting and he was. On the other hand, a session on Haml that looked really good (but I'd never heard of the speaker) was pretty mediocre. I also enjoyed Jeff Jonas in the briefing and his keynote so I made a point of attending his session, although again there was a certain amount of redundancy. And it was good to hear Jeff Hawkins, since I'm a long time fan of Palm and Handspring, and recently read his On Intelligence book. Unfortunately, I heard criticisms of his sessions from people who don't seem to see the same importance in his current work.

At lunch (great food, by the way) I sat with some guys from Sun. When I said I had a small company that developed software for the trucking industry one of them said they were surprised that someone like that would be at ETech. At first that seemed to make sense - I've never run into our competition (or other similar vertical software companies) at ETech. But then I started to question it. Isn't it just as important for small companies to be aware of emerging technology? I guess if you're really small you can't afford to go to conferences like ETech, but I don't think that's what they meant. And doesn't a lot of the emerging technology come from small companies? Maybe it's because they see ETech as an opportunity for big companies to come and see what the startups are coming up with. In any case, I think I can learn more, and more importantly, have a better chance of applying it, in my small company than they have in their big company.

So far, so good. I'll post more when I get a chance. I refuse to be one of the many people who spend the sessions with their heads down typing on their laptops. At times, the clicking of so many keyboards gets to be cumulatively loud enough to be annoying. Who are they chatting with? About what? Of course, this is from probably the only person there who didn't have a cell phone (if not two or three) and who asks the same thing about everyone on their cell phones - who are they talking to all the time? About what? I realize I'm not the most social person, but nor, supposedly, are many of these geeks. (Of course, this conference isn't all geeks.)

The trend towards Apple Mac laptops continues to grow. I would guess Apple has 60 to 70 percent of this particular market.

For pictures see Flickr/etech07

Monday, March 12, 2007

Air Canada Web Problem

Here's what I got when I entered my booking number and name into Air Canada's web site to check my booking:
java.lang.IllegalArgumentException: Empty country component in 'value' attribute in 
message:Empty country component in 'value' attribute in

stacktrace:
...
A good reminder to make sure you catch errors and give users a more friendly message. Most people probably don't want to see the stack trace :-)

PS. I managed to get it to work. The problem seems to be that I clicked on a link from an Air Canada email that took me straight to the bookings page, bypassing the page where you pick your language. Another good lesson - don't assume people will always enter your site via the "front" page. These days, a large percentage of people enter via searches that point to within the site. (In this case it was their own link so you'd think they'd handle it!)

Friday, March 09, 2007

Holiday Inn Web Annoyance

I've complained about this kind of thing before, but I continue to be surprised that someone doesn't catch (and fix) these kinds of annoyances.

I was making a booking on Holiday Inn's web site and had to enter my billing address. When I submitted it I got an error saying "spaces are not allowed in zip/postal codes". I shook my head and removed the space. I got another error saying the format was invalid. I read the fine print and it said you have to enter postal codes with a hyphen. Huh? Since when do Canadian postal codes have a hyphen in them?

The "funny" part is, I bet they added that explanation because people kept getting it "wrong". I wonder if it ever occurred to them to simply accept a variety of formats? e.g. postal codes with space, hyphen, or no separator.

Even better, they could use a little JavaScript to validate fields "on the fly" so it would be marked as invalid as soon as you left the field.

Thursday, March 08, 2007

Those that can, do...

As the saying goes, "Those that can, do; those that can't, teach."

I just listened to a podcast by Jon Udell with Marty Collins, the senior marketing manager with Microsoft's solution architecture group. Also see Jon's blog post about it.

As I understand it, the goal of her group is to "evangelize architecture". I don't know Marty's background but she calls herself a marketing person, and she talks like one e.g. "how do I push our content". Her group consists of 18 ex-architects.

First, I should admit I have a bias against anyone who calls themselves an "architect". In the software world, if a project "fails", the "architect" rarely gets blamed. Even "real" building architects have been known to design buildings that look great, but the roof leaks. Of course, they would similarly have you blame the builders, not the architect.

Don't get me wrong, I still want good architecture, but my feeling is that good architecture comes less often from self-proclaimed "architects", and more often from good programmers. How many "architects" did Unix or Apache or Linux or TeX have?

Marty tells us that one of the reasons for their group is that architects tend to be possessive and secretive about their work. According to her, even the architects working internally within Microsoft won't (or aren't allowed) to share because of the fear of "losing competitive advantage". Or is it that their work doesn't stand up well to public scrutiny?

So the architecture evangelists aren't working architects. Hmmm... And they are led by a marketing person. Ouch. If the working architects won't share, where are they getting the material they produce? Are they just sitting around dreaming up architectures? That doesn't sound very useful to me.

One of the ideas discussed is that companies should watch for people blogging in their area and jump in and contribute comments. Great idea. Jon asked Marty to let him know when they start doing this. But apparently, first they have to get some fancy new tool that is still in beta to let them monitor and comment on blogs. To me this is very much the wrong approach. It's a perfect example of something that you can try out right away and find out if it works or not. If it works, and once you have done it for a while, then you can think about tools. Why can't each member of her group sit down, find a blog, and add a comment? Not next month, not after getting more "tools", but right now, today! I'm not sure if this is a psychological block or a bureaucratic one.

I've always liked Jon Udell but I'm still wondering about his move to Microsoft. One of his goals is to reach a broader audience, and he feels that Microsoft can help with this, but I'm not so sure.

Saturday, March 03, 2007

Slow Code

One of the problems we run into with our code is that it ends up too slow.

Before I continue I want to assure you that I'm not suggesting "premature optimization". I fully agree that optimizing before it's necessary is the wrong thing to do.

What I am suggesting is that "too slow" is a bug, and like other bugs it should be avoided if possible, or at least caught as soon as possible.

One of the causes of slowness is code that is O(N²) (or worse!). Nested loops are a common cause. Programmers either don't recognize the nested loops (hidden in separate functions or in recursion), or else they don't realize the dangers of them. For example, if you have 10 items and you iterate over them twice that's 20 loops. If you have one loop inside another that's 100 loops - 5 times as many, but not a big issue. But what if you have 1000 items? Iterating twice is 2000 loops, but nested it's 1,000,000 loops, or 500 times as many. If each loop takes .001 seconds then 2000 loops is 2 seconds, but 1,000,000 loops is 1000 seconds, or roughly 17 minutes. In an interactive application that's a big problem.
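That arithmetic, as a quick sanity check (counting iterations rather than timing, to keep it deterministic):

```python
def sequential(items):
    # two separate passes over the items: 2N iterations
    count = 0
    for _ in items:
        count += 1
    for _ in items:
        count += 1
    return count

def nested(items):
    # one loop inside another: N * N iterations
    count = 0
    for _ in items:
        for _ in items:
            count += 1
    return count

items = range(1000)
print(sequential(items), nested(items))  # 2000 vs 1000000
```

With 10 items the difference is barely measurable; with 1000 it's a factor of 500, and it keeps getting worse from there.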

I think one of the reasons for this problem is that programmers unconsciously relate speed to number of lines of code. So:

    for each item
        for each item
            do stuff

can look shorter than:

    for each item
        do stuff
    for each item
        do stuff

Higher level languages, libraries, toolkits, frameworks etc. can make this even worse. Now one line of code can do a huge amount of work, but to the programmer it's just one little line of code. How bad can it be?

Programmers normally test with small amounts of data. When you're only dealing with a few items, both 2N and N² are fast. You don't get the feedback about the slowness until much later, when the user has larger amounts of data. And programmers are inclined to ignore whining from users anyway!

Another related issue is the difference between dealing with things in memory and dealing with things in the database. The difference is huge, but again, in testing it may be barely noticeable. And, again, the actual number of lines of code may be comparable. (This can be a problem with tests. For one test it doesn't matter. For a whole test suite it can mean the difference between a suite that runs in a minute and one where you have to run it overnight.)

Unnecessarily reading "everything" from the database into memory is another common mistake. If you actually need all the data in memory at once, fine, but if you just need to find a particular record or set of records then it's an inefficient way to do it. In our Ruby on Rails project I regularly come across code that does model.Find(:all). At first glance it might seem harmless. But what happens when that table contains 100,000 records? (Sadly, part of the reason for this is that Rails doesn't seem to provide any way to iterate through a query, so the alternative to Find(:all) is to do your own SQL - an ugly choice.)
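The alternatives are to iterate lazily or to push the work into the database entirely. A sketch using sqlite3 (the table and data are made up; this isn't our actual Rails code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("item%d" % i,) for i in range(10000)])

# Instead of loading every row into a list (the Find(:all) mistake),
# iterate over the cursor so rows are fetched as needed:
total = 0
for row in conn.execute("SELECT name FROM items"):
    total += 1

# Or better still, let the database do the work:
(count,) = conn.execute("SELECT count(*) FROM items").fetchone()
```

Either way, memory use stays flat no matter how big the table gets, which is the property Find(:all) gives up.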

I don't have a solution to this problem, but I think we need to become more aware of it. I think it's one of the reasons why our software gets slower at the same time as our computers get faster!

The Trouble With Programming

An interesting article/interview with Bjarne Stroustrup:

http://www.research.att.com/~bs/MIT-TR-original.pdf

Sunday, February 25, 2007

The Old New Thing

I also really enjoyed The Old New Thing by Raymond Chen, a long time programmer at Microsoft. This is a collection of his writing, much of it from his blog of the same name, about Windows software development. It ranges from history, to how-to, to humor.

After I finish a book, one measure of how "interesting" I found it, is the number of sticky markers I've added. This book ended up with quite a few.

Although my impression of Windows as a baroque hodge-podge was reinforced by the book, at the same time I also gained a new sympathy for the difficulties of developing software as widely used (and abused) as Windows.

I've already applied one of the small exercises in the book (converting a bitmap to a brush) to Suneido to accomplish a task I've been wanting to do for a long time, but just didn't know a reasonable way to go about it. In the process though I struggled through a typical example of the difficulties in Windows programming. I wanted to find the size of the bitmap - something that wasn't covered in the example in the book. You'd think that would be an obvious, simple thing to want to do. Guess what API call you use? "GetObject". And, of course, it's not under the bitmap section of MSDN, because it works on other things as well as bitmaps. Nor is it linked to LoadImage, although I would think that getting information about the image would be the obvious next step after loading it. It's another one of those things that's trivial once you know it, but next to impossible if you don't. What did we do before Google?

I'm pretty sure I'll be able to apply several other things I learned from this book - definitely a worthwhile book if you're a Windows programmer.

Dreaming in Code

I recently read Scott Rosenberg's book Dreaming in Code - about the development of Chandler, the Open Source Applications Foundation (OSAF) personal information manager. I'd recommend it - it's interesting and well written. For anyone dreaming of creating software, it's a scary story.

If you haven't heard of Chandler, don't feel bad. Despite the project being started in 2001 it has yet to release an actual product. The web site says they're "getting close" to a "preview release". Coming from a small business background it's hard to comprehend how anyone can go that long and spend that much money without actually producing anything for people to use.

One lesson I think the book illustrates is that constraints are good. Given virtually unlimited time and money what will you produce? Judging by this story, probably nothing.

Drag and Drop to Firefox

This blog post - Upload Files in a Browser Using Drag and Drop - describes a great add-on for Firefox that lets you drag and drop files to your browser instead of either typing the complete file path or browsing to it. You can even drag and drop multiple files at a time.

Even my wife complains about the awkwardness of attaching files to Gmail messages. I've suggested zipping up multiple files so she only has to attach one zip file, but to her, that's just adding complexity. I think she'll be happy with this new trick. (Once I get around to installing it on her computer and explaining how to use it.)

Bad Code

Soon after writing my Good Code post I ran into some code that really sucked. It was hard to understand and hard to modify. The sad part was that it met most of my "good code" criteria - it had reasonable names, little duplication, and small methods. You could criticize some of the coupling, but that wasn't the cause of its "ugliness". (In the process of refactoring it I found several bugs, which demonstrates that "good code" is not just an aesthetic judgment, but really does affect the quality of software products.)

Knowing some of the history of the code, how it had grown, I could see why it ended up like it did. You could blame some of the problems on insufficient refactoring as it was worked on. The programmers working on it had just been concerned with getting their task accomplished. If they noticed how ugly it had become, they probably felt they didn't have time to do anything about it.

The lesson is that the criteria in Good Code are necessary, but not sufficient. You need to do them, but you also need to think about whether the end result is understandable.

Sunday, February 18, 2007

Monster Palm

I recently ran across Bruce Tognazzini's web site Ask Tog. (I'd previously read his book Tog on Interface - recommended.)

One of his articles was Make Your PalmOne a Monster Machine

My list would include:

TextPlus - Suggests words and phrases as you enter letters. This makes entering text on the Palm hugely more efficient. I'd have a hard time living without this now. The ability to add your own words and phrases makes it even better.

Wikipedia for Palm (and other platforms) - I have the 2 GB version on an SD card. It's great when traveling to be able to look up information about where you're going. Or to scratch a curiosity itch when you're away from the Internet.

Noah Pro - a dictionary with 122,000 words. Again, great when traveling and away from the Internet. I always hate it when I come across a word that I don't understand, or am not sure of the exact meaning of.

Bonsai - an outliner for Palm and Windows. I use this to keep to-do lists, shopping lists, ideas, packing lists, etc.

Aigo - a Go game for Palm. Considering the limited capabilities of the platform it plays amazingly well. It probably won't be good enough for a real Go player, but for someone who just fools around like me, it's great. This is the only game I play at all, and I don't play it much. I had a chess game on my Palm for a while, but never used it.

I've also been checking out the software Tog recommends in his Make Your Mac a Monster Machine article.

Surprisingly, I couldn't find an RSS feed for his web site. Tog is falling behind the times!

My Mouse Has Two Buttons!

I'm almost embarrassed to tell this story! Again, Mac users will be shaking their heads.

Since I got my new Mac, I've been cursing the lack of a "right-click". I've gotten really accustomed to using this on Windows. It was especially bad when I was running Windows under Parallels. I kept thinking that Apple really needed to drop their old obsession with one-button mice.

I tried using the configuration where holding down the mouse button is eventually treated as a right click. But if the delay was too short I kept getting it by mistake, and if it was too long I got impatient. I could use CTRL+SHIFT+click but that seemed pretty awkward.

I knew my "Mighty Mouse" (who picks these names?) had an "extra" button if you pressed on the little scroll-ball since I annoyingly kept triggering it by mistake and bringing up the dashboard widgets. I wondered if I could re-configure this to be a control/right click.

When I went into the mouse settings, lo and behold there was a setting for clicking on the right hand side of the mouse! My Mighty Mouse does have left and right buttons! That's the problem with overly slick, seamless designs - there are no "affordances" to let you discover their capabilities.

I pull down the choices for the right button but there is no choice for CTRL+click. Strange. But there is a choice for "secondary button" - I wonder what that does? I try it and it seems to do the trick. And it works in Windows under Parallels. Problem solved!

This was a good lesson. It's so easy to make fun of stupid users. We're always amazed when we observe someone using our software in such awkward ways - why don't they do it the "right" way. Or getting complaints from customers about missing features when it's right there in front of them! Obviously, if I had looked at the specs or looked more carefully at the settings I would have solved this a lot sooner. But if I can fall into this trap, with umpteen years of experience in the business, it's hard to blame someone who has little or no experience.

Thursday, February 15, 2007

Mac OS X with Windows Printer

I struggled on this one for a while. My printer (Epson 2200) is hooked up to my main Windows box and shared so my wife and I can print to it from our Windows laptops via wireless.

I could find and add the printer in the Mac printer setup but it wouldn't work. I was getting NT_STATUS_ACCESS_DENIED.

Searching the web found many people with the same problem. I eventually got it working by using the Advanced printer setup (a little tricky to find) and using a URI of:

smb://andrew:@mckinlayhome/epson2200

Judging by what I found on the web this has been an ongoing problem with OS X. I wonder why it hasn't been fixed?

Funnily, my copy of Windows running under Parallels on the Mac had no problem connecting to the printer.

Monday, February 12, 2007

Rogers.com won't take no for an answer!

This is classic - they give you a checkbox to say whether you want email, but they won't let you save unless you say "yes"!


So I changed my email address to nospam@really.com :-)

Sunday, February 11, 2007

Good Code

After 30 plus years of programming, the basics of good code have boiled down in my mind to just a few things:
  • Good names
  • Consistency
  • No duplication
  • Small pieces
  • Loosely coupled
These are roughly ordered by scale - from individual "words", to lines, to modules.

None of these are original. They have all been endlessly discussed, analyzed, and written about.

Good names

Why is "i = j * 2 + k" bad? Because you have no idea what it means without studying a much bigger context of code to try to figure out what i, j, k, and 2 mean.

It isn't just that you should have long names; they also need to be accurate. A name that was once accurate but has since drifted into being used in other ways is just as bad as a comment that gets out of sync with the code.

Not all good names need to be long. There's nothing wrong with using "i" for a loop index, where it's obvious what it is.

Good names often eliminate (or at least greatly reduce) the need for comments.
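To make the contrast concrete, here's a minimal sketch (the layout-related names are invented for illustration, not from any real codebase):

```python
# Opaque: i = j * 2 + k -- you have to hunt through the surrounding
# context to learn what i, j, k, and 2 mean.

# With good names, the same expression explains itself:
def two_column_width(column_width, gutter_width):
    """Total width of a two-column layout: two columns plus the gutter."""
    return column_width * 2 + gutter_width

print(two_column_width(40, 4))  # 84
```

The function barely needs its docstring - the names carry the meaning.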

Consistency

Consistency ties in with good names. Use the same name consistently for the same purpose. Conversely, do not use the same name for different purposes!

Consistency includes:
  • Coding standards or guidelines
  • Coding idioms
  • Organization, design, architecture
If you can look at a few lines of code and tell who wrote it, then you probably don't have a high level of consistency.

Code reviews, pair programming, and joint ownership of code all help with consistency.

No duplication

Programming by "copy and paste" is easy and quick. It's code re-use - isn't that supposed to be good?

The problem is that code isn't "write-only". You're going to need to fix it, improve it, add features to it. And that becomes much harder when you've got the code in multiple places.

Another problem is that it usually isn't just "copy and paste", it's "copy, paste, modify". So now you not only have multiple copies of the code, but they're each slightly different. Even tougher to work with. And it lowers your consistency.

But there are subtle advantages to DRY (Don't Repeat Yourself) that are just as important as these obvious issues. To avoid duplication, especially when the duplication has variations, you have to think about the code, what is common, what varies, why it varies. Does it vary just because you used different variable names? Maybe you should be consistent. Does it vary because it behaves differently? Maybe whoever uses this code (end user or another programmer) would benefit if it behaved consistently.
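A tiny sketch of that thinking (hypothetical code, not from any particular application): two copy-pasted-and-modified loops collapse into one function once you notice that only the test varies.

```python
# Before: copy, paste, modify -- the same walk with a slightly different test.
def first_negative(numbers):
    for n in numbers:
        if n < 0:
            return n
    return None

def first_even(numbers):
    for n in numbers:
        if n % 2 == 0:
            return n
    return None

# After: the common walk is written once; what varies is now explicit.
def find_first(numbers, predicate):
    for n in numbers:
        if predicate(n):
            return n
    return None

print(find_first([3, -1, 4], lambda n: n < 0))       # -1
print(find_first([3, -1, 4], lambda n: n % 2 == 0))  # 4
```

Factoring out the walk forced a decision about what the variation actually is - which is exactly the kind of thinking DRY buys you.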

Small pieces

In some ways this is both one of the easiest and the toughest guidelines.

It's easy because anyone can see whether you've broken your code up into bite size chunks. You can write automated tools to check function / method / class sizes.

It's hard because it's not just breaking it up, it's how you break it up. You want the pieces to have high cohesion and low coupling, and that's hard. We've been struggling with this one for a long time e.g. Yourdon & Constantine covered this in depth in Structured Design in 1979!

A big benefit of small pieces comes from giving the pieces good names.
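As a sketch of that benefit (a made-up example), when the pieces are small and well named, the top-level function reads like an outline of the job:

```python
def parse_amounts(lines):
    """Pull the numeric amount out of each 'name,amount' line."""
    return [float(line.split(",")[1]) for line in lines]

def total(amounts):
    return sum(amounts)

def report_total(lines):
    # Each step has a name, so this reads like a summary of the work.
    return total(parse_amounts(lines))

print(report_total(["rent,800.00", "food,250.50"]))  # 1050.5
```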

Loosely coupled

If every line of code connected to every other line of code then complexity would go up as the square of the number of lines of code. N squared goes up fast. If this was the case we'd never manage more than tiny programs.

On the other hand if every line was totally independent, then complexity would only go up linearly with the number of lines of code. That's a lot more manageable.

The reality is somewhere between. The bigger a program gets, the more important it is to keep the pieces as independent, loosely coupled, as possible. The higher your coupling is, the bigger the risk that a change will have unexpected side effects (also known as bugs).

Again, this is easier said than done. Structured programming aimed to reduce coupling. One of the big goals/advantages of object-oriented programming is looser coupling. "Interfaces" are another tool. Our tools are getting better. But our programs are also getting bigger.
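One common way to loosen coupling (an illustrative sketch, not any particular framework's design) is to make a component depend on a narrow interface rather than on a concrete collaborator:

```python
# Tightly coupled: Report would construct and query a concrete database
# class directly, so every change to the database code risks breaking it.
# Loosely coupled: Report depends only on a callable that yields rows.

class Report:
    def __init__(self, fetch_rows):
        # fetch_rows: any callable returning an iterable of dicts.
        # Report knows nothing about where the rows come from.
        self.fetch_rows = fetch_rows

    def render(self):
        return "\n".join(row["name"] for row in self.fetch_rows())

# A test (or a different data source) plugs in without any database:
report = Report(lambda: [{"name": "Alice"}, {"name": "Bob"}])
print(report.render())
```

A change to the data source can no longer have "unexpected side effects" inside Report - the narrow interface is the only point of contact.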

Thursday, February 08, 2007

Working at Home

Lately I've started working at home some of the time, nominally Tues. and Thurs. - so I'm still around the office at the beginning, middle, and end of the week.

As our business has grown my days have grown more and more interrupted and fragmented. I seem to go from talking to one person to helping another, to supervising another. I could close my office door, but that wouldn't stop people from knocking on it or cornering me when I went for coffee or to the bathroom. And I don't really like the idea of closing my door anyway.

I could tell people to get lost, but the truth is I still want to deal with all these things. I'm not ready to step back totally, even if somehow I could. I still want to guide and help people. I just don't want to do it 100% of the time. Or more accurately, I still want to spend some of my time programming.

I want to spend time programming because that's what I enjoy doing. But I think it's probably good for the company as well. There are things that I can work on that will be beneficial, that would never get done otherwise, either because they're things like Suneido C++ internals that no one else works on, or because they're general enhancements that will never be "urgent".

I'm finding my programming time at home surprisingly enjoyable. I guess it shouldn't be surprising - I wouldn't have been programming for 30 years if I didn't enjoy it. But it's been a long time since I was able to focus on it for significant periods of time. On top of the pleasure of coding, it's also nice to feel like I'm getting things done. You don't get that as much when you're managing. In the big picture you're still getting things done, but it's not as personal.

For a lot of people, there would probably be more distractions at home than at work, but I haven't found that. I don't sleep in or watch tv or read a book. I actually want to be programming. And because of that, I'm less likely to be sucked into email or surfing the web. At work, when I know I'm liable to be interrupted any minute, it seems like I might as well kill time, since it's pointless to start anything more serious.

I'm not sure what my staff thinks about it. I wonder if some of them might resent it or feel like I'm "abandoning" them. Hopefully they'll be ok with it. They're certainly capable of operating without me.

An added (intended) benefit is I've been getting on the treadmill first thing in the morning. That way I get my exercise for the day, and I only have to shower once :-) And since I don't have to travel to the office, I still start work at more or less the same time.

Funnily, I've still been going in to the office on the weekends, since it's quiet then.

Here's my new Ikea desk, Mac mini, JBL Spot speakers, and Starbucks coffee mug - all the essentials :-)

Saturday, January 27, 2007

Where is the User Interface Innovation?

It seems like a good user interface toolkit would lead to better interfaces, and at one level it does. But recently it has occurred to me that it also tends to reduce innovation. Where I'm seeing the most UI innovation is on the web, with the new generation of Ajax web applications. HTML, CSS, and Javascript provide a pretty basic UI toolkit, certainly not as nice as the Mac or Windows. But this looks like another example of constraints leading to good things: because of its limitations, the web has spawned a rich variety of UI innovations. Of course, the variety means that there's not as much standardization, but so far that hasn't seemed like a big drawback. You have the same problems figuring out how to use an innovative Windows or Mac program.

For a good example, have a look at Dabble DB - the 7 minute video gives a great introduction.

Friday, January 19, 2007

My First Mac

A few days ago I received a Mac Mini I ordered directly from Apple Canada. I could have bought one locally but I was curious about how their mail order would work.

Although the advertised prices make the mini look relatively cheap, by the time you upgrade the memory and hard disk to reasonable amounts, and add a mouse and keyboard it's ended up almost twice as expensive as a comparable PC.

My first chuckle came when the mouse and keyboard arrived in a box about six times the size of the box the computer came in!

Less of a chuckle when I found that I couldn't use the wireless keyboard and mouse until they were set up - which you couldn't do without a wired USB keyboard and mouse. This wasn't a problem for me since I had spares, but if this was your only computer (or if you only had an older PS2 keyboard/mouse) you'd be pretty frustrated. It might be nice if Apple's web ordering system warned you about this when you ordered a mini with a wireless mouse and/or keyboard.

Other than that, set up was quick and easy. It recognized my Samsung SyncMaster 215TW and set the right resolution automatically - something Windows handles but Linux never seems to be able to.

Of course, the first thing it did was start downloading updates - a good thing, I guess, but not something I like being bothered about. I was sad to see that the updates required a restart. I guess it's not just Windows anymore that wants to restart all the time. I thought maybe the first update was an exception, but the next one (a day later) also required a restart. Updates on Linux don't seem to require this unless there's a kernel change.

For the most part it connected to our Windows network with no problems. It took me a few minutes to figure out how to access network drives and printers but not too bad.

Next, I downloaded some software - Firefox, TextMate, and Parallels. This is where I really felt my unfamiliarity. On Windows, installing software is generally pretty easy - you run the installer and it does its thing. On the Mac I found it quite confusing. You download a ".dmg" file (disk image?). When you open that you see some files, and an icon appears on the desktop (the "disk"?). Now what? Firefox comes up with some graphic which I think is telling you to drag the program to the Application folder. It might help if there was a text explanation as well as the cryptic graphic. Of course, I made the mistake of running programs from the disk images, which gave me grief because they were then still running (even after I closed the window - another Mac / Windows difference). Experienced Mac users are probably laughing at my newbie confusion, but it's far from clear or consistent.

Eventually I got my programs installed. I haven't played with TextMate yet, but I've heard a lot of good things about it so I'm keen to give it a spin.

One of the first things I did was install Windows XP with Parallels. I could have used Bootcamp to dual boot Windows but having to reboot to run different software is pretty ugly. Parallels did a great job of installing Windows with no interaction (except for putting in the second cd).

It seems ironic that virtually the first thing I do with my Mac is install Windows! But since our business is all Windows software, it's a crucial feature.

I had downloaded the new Parallels release candidate since I liked the sound of the new "coherence" mode. It's awesome. I have my Windows task bar (set to auto hide) at the bottom of the screen, my Mac menu at the top, and my dock (also set to auto hide) on the left. Suneido seems to run fine. Our test suite runs as fast (if not a bit faster) than the brand new dual core PC I bought recently. It's pretty impressive to have OS X and Windows programs sharing the screen seamlessly.

I'm still having a little trouble getting used to the Mac mouse and keyboard. I keep trying to right click on things! And I keep trying to use Ctrl+C for copy instead of Apple+C. At least with Parallels Windows programs seem to accept Apple+C. And Home and End scroll to the top and bottom instead of going to the beginning or end of the line. Apple+right arrow seems to be end of line, but Apple+left arrow doesn't seem to work, at least in Firefox. Some of this is probably configurable.

Now I just need to install Ubuntu in Parallels and I'll have OS X, Windows, and Linux all in one wonderfully compact system.

Disclaimer - calling this my first Mac is stretching it a bit. I've worked with everything from Apple II's to Lisa's to Mac's, but I've never actually owned one for myself.

Tuesday, January 16, 2007

Amazon S3 Tools continued

Just to confuse things even more, I tried #Sh3ll from home on a big file and it worked fine (even over wireless).

So that would seem to indicate that the problem is with my network connection at work. Although that still doesn't explain why jSh3ll worked.

However, I didn't get around to trying the others and I only did the one test at home. So it could have just been lucky.

I'm not sure where that leaves me. Maybe we should try #Sh3ll (because it appears to be the easiest to deploy) on a few of our clients and see what happens.

Monday, January 15, 2007

Amazon S3 Tools

We FTP our clients' databases back to our office nightly. This gives them an off-site backup and it gives us quick access to their data to help support them. The problem is that as our client base has grown we have run into bandwidth limits at our office.

We could use a hosted ftp site, but I thought it might be worth trying Amazon's Simple Storage Service (S3).

The catch is that authentication is done with HMAC-SHA1 which we've never implemented for Suneido. So instead of writing our own interface to S3 I figured we could simply call someone else's command line tool.
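For the record, the signing itself is simple - here's a minimal Python sketch (the canonical string-to-sign that S3 actually expects, built from the method, headers, date, and resource, is more involved than shown; see Amazon's documentation):

```python
import base64
import hashlib
import hmac

def sign(secret_key, string_to_sign):
    # S3's REST authentication computes an HMAC-SHA1 over a canonicalized
    # request string and base64-encodes the digest. The construction of
    # string_to_sign is simplified away here.
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# A SHA1 digest is 20 bytes, so the base64 signature is always 28 characters.
print(len(sign("my-secret-key", "GET\n\n\nThu, 01 Jan 2007 00:00:00 GMT\n/bucket/key")))
```

The awkward part for us isn't the math; it's that Suneido has no built-in HMAC-SHA1, hence the hunt for external tools.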

There are quite a few available tools for S3. Here are some that I tried:

s3cmd - Ruby

#Sh3ll - C#

jSh3ll - Java

s3tools - Python

s3curl - Perl

I wanted something that would be easy to deploy on our clients, preferably without installing a language run-time. The best for this appeared to be #Sh3ll which was an exe that only required .Net 1.1 which should be installed on most recent Windows machines. However, there are tools to turn Perl, Python, or Ruby programs into standalone executables so that might be a deployment solution.

But the only one I could get to work consistently for large files was jSh3ll, which requires Java 5. All the other tools gave timeout errors. I'm not sure what jSh3ll is doing differently, or whether Java just has better network support.

The question is whether it's practical to require the Java runtime to be installed on our customers' servers. It doesn't seem unreasonable to me, but some people are touchy about it.

PS. There is a nice Firefox extension for S3

Saturday, January 13, 2007

eTrux

After reading The Innovator's Dilemma by Clayton Christensen some time ago, I decided to start a pilot project to make web-based trucking software. (Currently we develop and sell Windows desktop trucking software.) I'd also been hearing a lot about Ruby on Rails so it seemed like a good chance to try it out.

eTrux.com is the result so far. There's not a lot there yet, but it's coming along.

Unlike other people's "too good to be true" stories, we weren't able to produce this over a weekend. Perhaps if I or the programmer I hired had more (any) experience with Ruby or Rails (or even web application development) it might have gone faster. But it's been a slow learning process (many months) to get to where we are now. I think/hope that it will go quicker now that we have figured out the basics.

Have a look. Any feedback is welcome.

Thursday, January 11, 2007

Windows XP Annoyance

Ever since I upgraded my Windows machine it has insisted on weird paper sizes. Google Picasa showed crop sizes in cm and printing showed choices like A3 and A4 instead of Letter and Legal.

Eventually I tracked it down to choosing "English (Canada)" at some point in the Windows install. This shows up in the Control Panel > Regional and Language Options.

Where should I start to complain about this?
  • When you choose "English (Canada)" during setup I'm pretty sure it doesn't give you any information or warning about what effect this has. I have a hazy memory of the question being phrased as "What is your language?" (nothing about units of measure or paper sizes)
  • Canada does use metric, so I can sort of see where the crop sizes in Picasa come from, but actually we still use inches for photo sizes e.g. 4 x 6, 5 x 7, 8 x 10
  • Canada does not use A3, A4, etc. paper sizes, we use Letter, Legal, Tabloid etc. When you go to the store to buy paper, that's what you find.
  • The Customize for the Regional Options doesn't even have a choice for centimeters versus inches or A3/A4 versus letter/legal.
The "solution" is to pick "English (United States)" even though I'm pretty sure I'm really in Canada.

Strangely, I don't recall having this problem before. Either something has changed or for some reason (it was the default?) I always made the right choice previously.

I realize many Americans think Canada is some obscure little outpost somewhere in the far north, but surely someone in Microsoft's hordes of developers and testers and QA people should know better! If nothing else, check officedepot.ca and see what paper sizes they sell!

Web Annoyance

Here's a good one - Holiday Inn's Priority Club web site requires a dash in Canadian postal codes e.g. S7N-2X8.

Why would this programmer decide to dream up their own unique postal code format? The only thing I can think of is they didn't want to handle spaces, but that's pretty feeble.

There are standards for this kind of thing, why not use them?
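Handling the standard form is trivial - a tolerant validator (hypothetical, obviously not Priority Club's actual code, and ignoring the letters Canada Post excludes) can accept a space, a dash, or nothing, and normalize to the official format:

```python
import re

# Canadian postal codes have the form "A1A 1A1": letter-digit-letter,
# space, digit-letter-digit. Accept a space, dash, or no separator,
# then normalize to the official single-space form.
POSTAL_CODE = re.compile(r"^([A-Za-z]\d[A-Za-z])[ -]?(\d[A-Za-z]\d)$")

def normalize_postal_code(text):
    match = POSTAL_CODE.match(text.strip())
    if match is None:
        return None  # not a recognizable postal code
    return (match.group(1) + " " + match.group(2)).upper()

print(normalize_postal_code("S7N-2X8"))  # S7N 2X8
print(normalize_postal_code("s7n2x8"))   # S7N 2X8
```

Store the normalized form, accept whatever the user types - no dashes required.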

Tuesday, December 12, 2006

Too Many Bugs!

Recently, I realized that we had too many outstanding bugs in our main application. And when I started pushing to work on this, I found there were a lot more than I had realized.

In our defense, many of these bugs are obscure and/or one-time occurrences that we can't recreate. Or they're things that are only bugs in the sense that they don't follow our standard ways of doing things. But thinking that it's ok to have bugs is a slippery slope - soon you're ignoring bigger and bigger problems. It's the same reason it's important to keep your automated unit tests passing 100% of the time (which we do).

In theory, our "rule" has always been to fix bugs before writing new code. But there's also a lot of pressure to get new features written. And it's easy to let that pressure override the bugs first rule, especially when the bugs don't seem that "critical".

For now, we've switched our priorities to cleaning up the outstanding bugs. That means new feature development has slowed dramatically. We're still trying to do urgent changes for customers but that's about it.

In theory, fixing bugs first should be self-adjusting. If you're spending all your time fixing bugs, you're not writing new code and therefore not creating new bugs. Of course, that assumes that when you fix bugs you aren't (or at least aren't always) introducing new bugs.

But even if it does self-adjust, it's deeply unsatisfying to be spending such a large percentage of our development time fixing bugs. The real question is how to reduce the number of bugs. A common reaction is "we'll have to be more careful". But people will always make mistakes. All you can do is have a process that catches the mistakes. Of course, there are things you can do to prevent some mistakes - automate manual processes, modularize your code to reduce unexpected interactions, etc.

In general, the sooner you catch a mistake the better. That's one of the reasons for pair programming - so the second person can catch mistakes right away. It's also one of the reasons for writing tests alongside code. A mistake that's caught sooner is usually easier to fix. It also provides more meaningful feedback. A mistake someone finds months after the code was written is unlikely to provide feedback that will improve your coding. (This is also one of the reasons we're moving towards continuous deployment - daily releases instead of quarterly.)

We already pair program and write tests (although we could do better on the tests). We've also started having a third programmer (in addition to the original pair) review and manually test changes, the same day the work is done.

One of the best ways to reduce bugs is to write good code. On the scale of lines of code and individual methods I think our code is pretty good. But our application is getting bigger and more complex and one of the major "causes" of bugs is complexity. Where I think we've fallen down is in the larger scale organization and architecture. When a program is small it doesn't need an elaborate architecture. (Arguably shouldn't have an elaborate architecture.) But if you just keep adding to an application, eventually it reaches the size where it does need better large scale organization. But in the routine of day to day programming, how do you recognize/decide when to work on this?

Extreme Programming recommends "incremental design".
Invest in the design of the system every day. Strive to make the design of the system an excellent fit for the needs of the system that day. When your understanding of the best possible design leaps forward, work gradually but persistently to bring the design back into alignment with your understanding.
-- Kent Beck, Extreme Programming Explained 2nd Ed.
Unfortunately, easier said than done. A person can only think of so many things at once. If you're fixing a particular bug or adding a specific feature, it's hard, if not impossible, to also be thinking about the overall design. You might refactor on a small scale, but you're unlikely to dream up a new organization for the whole application. And even if you did, you'd still have to communicate it to the whole team and somehow get everyone to work towards it. Maybe some teams can manage this, but it wouldn't be easy.

The best "solution" I've come up with so far is to enhance our framework so it provides some larger scale organization, and then to gradually migrate the code into the new facilities of the framework. Some of the things we need to do are things we should have known to do all along - isolating database access, keeping application code out of the user interface, separating data retrieval from formatting in reports, etc.

Disclaimer: None of this is especially new or original. It's more in the nature of thinking out loud, trying to figure out how to apply known ideas to our situation.

Tuesday, October 31, 2006

Graphics Software

If you're interested in digital photography software you might want to take a look at the free beta of Adobe Lightroom. I haven't spent a lot of time with it but I really like the simple consistent layout.

I've also been using Adobe Photoshop Elements. At less than $100 it's good value. I originally got a copy included with something else I bought. That version (2) wasn't all that powerful, but each new version (5 was just released) has been more capable. I also picked up a copy of Photoshop Elements 5 - The Missing Manual after listening to a podcast interview with the author, Barbara Brundage.

I'm also a long time user of Canvas. I think it's too bad this program hasn't been more successful. It combines both vector and bitmap editing along with desktop publishing type layout. You might think putting all that in one package would make a mess, but they fit it together remarkably well. I highly recommend it.

Of course, as an open source fan I should be using Gimp. One of these days I'll take time to learn it!

Thursday, October 26, 2006

LibraryThing

I've been playing with LibraryThing a bit. It's a web app for cataloging your books. It seems pretty good.

I had written a small library application with Suneido but it was a lot of work entering books. So I decided I should be scanning the barcodes and looking them up. Buying a scanner was tough because there are too many kinds and I didn't know what would be best. I ended up buying an AS8155 from Custom Sensors Inc. It's a CCD scanner with a USB interface. I have no idea if this was the right choice but so far it's been working well.

I started to add barcode lookup to my Suneido application, using isbndb.com and got it more or less working. But then I thought it would be nice to have cover images from Amazon. So I started looking at Amazon's web services.

Then I realized that, interesting as it might be, this wasn't actually getting my books cataloged! So I thought I should look for an existing application, which led me to LibraryThing. There are quite a few choices in this area but it looked reasonable.

You can sign up for free and enter up to 200 books. I set up an account for work and scanned a bunch of books that happened to be handy (some recent, some not). You can see the results at: http://www.librarything.com/catalog.php?view=apmckinlay (at least I think you'll be able to see them since I made it public).

The barcode lookup works 95% of the time. I had a few older books with no barcode, a few books I bought in Asia it didn't find, and a few I had to use amazon.co.uk to find. And a few novels at home where the barcode carried the UPC code instead of the ISBN (which confused me for a bit, since they print the ISBN as well).

But even when the barcode lookup fails it's pretty good at finding books from title and/or author, even partial.

So far the only complaint is that I can't figure out how to search for C++. It keeps wanting to strip out the "++", even if I enter it in quotes (it seems to take the quotes literally).

I like the ability to "tag" the books (although I haven't used it yet). From del.icio.us and gmail I've gotten to like tagging stuff.

LibraryThing has other features that I may use - like recommendations - and some that I probably won't - like social stuff. But so far it looks good. If you're looking for this kind of thing, I'd recommend it.

For a discussion of alternatives see this discussion from Joel Spolsky's blog

Wednesday, October 18, 2006

iTunes + iPod Annoyances

I started up iTunes to copy some new music to my 30gb Video iPod. It told me an update to iTunes 7.0.1 was available. I downloaded and installed it. So far no problems. Then it told me an update to iPod 1.2 was available. I said OK and it displayed a message box saying it was updating. The problem is it never finished so I killed iTunes (the only way out, as far as I could tell). Maybe I didn't wait long enough but it didn't seem like it should take that long.

I restart iTunes but now it doesn't recognize my iPod at all!!! Now what? I google for relevant information. It looks like other people have had the problem but there doesn't seem to be a good solution. Finally Google points me to a post in an Apple RSS feed that says you have to have Terminal Services running in order for iTunes to recognize your iPod. (I had to subscribe to the feed and dig through old posts to find it.) Sure enough I had Terminal Services disabled via msconfig. (In a vain attempt to reduce the crap that's running all the time.)

When iTunes recognized my iPod it again offered to update. I said no and finally managed to achieve my original goal - to get some new music onto my iPod.

Why does iTunes need Terminal Services? And if it needs it, why doesn't it check for it and tell you what the problem is instead of silently ignoring your iPod? And why doesn't Apple have this information posted more prominently?

iPod + iTunes is supposed to be for everyone. You wonder how the non-techie is supposed to cope with these kinds of problems. Of course, a non-techie wouldn't have terminal services disabled. Unless their techie friend had done it for them...

I see the new version of iTunes will get cover images for you, finally. But you have to have an iTunes account! So if I don't want an iTunes account I still have to get covers manually!

PS. On the positive side, next time I tried the iPod update it worked.

Monday, August 28, 2006

Cool gadgets += Chumby

Chumby Industries

I want one! (or two or three)

It would make a good build/test monitor. Or an electronic picture display for my mother. Or any number of other things.

Friday, August 25, 2006

Internet Explorer in Firefox !

Check out this Firefox add-on that lets you use IE to access Firefox-unfriendly sites within a tab in Firefox!

https://addons.mozilla.org/firefox/1419/

Friday, August 11, 2006

Keeping Up

I continue to be disappointed with how little most technical people do to "keep up". To me this is a profession that changes fast and you need to work to keep your skills and knowledge current.

(Actually, that's not totally true. I'm just addicted to learning (and hopefully applying) new stuff and I can't understand why other people don't feel the same!)

I understand that people are busy at work and have lives outside of work. But it's like not having time to service your car - sooner or later it's going to get you in trouble (or cost you money).

Here are my suggestions for keeping up:
  • read technical books (e.g. one per month)
  • read technical magazines (even the ads can tell you things)
  • subscribe to a few blogs (but not 100!)
  • listen to technical podcasts (e.g. during commutes or walks)
  • work on a personal project (something you choose)
  • write your own blog (writing is an important skill that requires practice)
  • learn new software (language, tool, framework, os)
  • read code (something different from what you work on all day)
When I interview programmers I often ask whether they've read any computer books or magazines lately. Almost always the answer is "no - I do my reading on the web". The web is a great resource, but surfing the web is no replacement for reading a good book. (I'm not talking about C++ for Dummies.) Magazine type material may be available on the web, but will you really find it and read it the same way you would a magazine?

Also, I think it's important not to have too narrow a focus. Regardless of what you actually work on, you can get valuable ideas from other fields. If you develop in Windows, take a look at Linux. If you use J2EE, take a look at Rails. Learn from biology and sociology. Have you read Godel Escher Bach?

Try to get a mix of specific technologies (like .Net) and general concepts (like Refactoring). Unless you have a specific need for it in your job, my personal opinion would be not to bother with "certifications".

Keeping up doesn't always mean the bleeding edge either. You can learn a lot from the "classics" like Mythical Man Month or Structure and Interpretation of Computer Programs.

Books can be expensive, but if your employer has any sense they'll be happy to pay.

Personal projects are a little different from reading, but equally important. After all, in the end, it's not what you know that counts, it's what you do with it. Google encourages their engineers to spend 20% of their time (1 day a week) on their own projects. Whether or not your employer does this, working on your own projects is a great way to learn. And maybe even produce something useful! (A lot of Google's products started out as personal projects.)

You are not the user

I recently read Ambient Findability. Despite the uninformative title, I found it quite an interesting book. One of the items I liked was this (summarized):

You are not the user.
A programmer designing a system is usually very different from the end user.
The experience is the brand.
The "brand" isn't just the pretty logo, it's the user's experience with the product.
You can't control the experience.
Whether you like it or not, the user is in control. You can't predict (let alone control) what they'll do.
Of course, we all "know" these things, but it doesn't hurt to be reminded!

Thursday, August 10, 2006

A Web Annoyance

One thing that really annoys me for some reason is web sites that insist (require) that you enter credit card numbers without spaces! Even O'Reilly, who pride themselves on their web site, do this.

Doesn't anyone consider that it's much easier to read and verify a 16 digit number when it's broken into groups of digits by spaces? And that maybe they do that on the card for a reason?

How hard can it be for the order processing software to strip out spaces? I've got to think that's a one liner in any mainstream language.
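For what it's worth, the one-liner really is a one-liner. A sketch in Python (the function name is mine; I've also stripped hyphens, which people sometimes type):

```python
def normalize_card_number(raw: str) -> str:
    """Strip the spaces (and hyphens) a user might type into a card number field."""
    return raw.replace(" ", "").replace("-", "")

# Using the standard Visa test number:
print(normalize_card_number("4111 1111 1111 1111"))  # -> 4111111111111111
```

Validation and storage can then work on the normalized form, while the user gets to type the number the way it's printed on the card.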

Considering they're trying to get our money, you'd think they'd want to make it as easy as possible! It's yet another example of why such a high percentage of on-line orders are abandoned.

I understand how software ends up with usability issues. I'm sure my company's software has lots. But this seems like such a basic thing, and so easy to fix, that I can't understand why it's so common.

Saturday, July 29, 2006

OSCON 2006 Day 5

The best part of this morning's keynotes was a hilarious talk by Damian Conway. He had everyone laughing till they cried - making fun of Microsoft, Google, Apple and others.

Things are starting to wind down - not quite as many sessions to choose from this morning (only 10 instead of 15!). The first one I went to was 10 Tools Developers Need Today by Karl Fogel, formerly with CollabNet, now with Google. (Google's hiring of all the best talent was a noticeable trend.) He described one tool that analyzes Subversion logs to find major contributors. Apart from that he mostly talked about tools that would be nice to have. There were a few interesting ideas.

Next I went to Highly Technical Management of Software Projects by Alex Martelli, also with Google. One of the theories that's commonly voiced is that programmers don't make good managers. This talk argued that programmers can make good leaders of teams. Alex was a good speaker and a lot of what he said resonated with me because the style he described had similarities with how I work with my programmers. For example, he recommended getting down in the trenches and pitching in when needed.

The final keynote was from Eben Moglen, a lawyer heavily involved in open source issues, especially licensing. He is working with Stallman on GPL 3. He was a good speaker (although he sounded like a lawyer).

BTW For pictures from OSCON see: http://flickr.com/search/?q=oscon2006

And that's it for OSCON. Saturday I fly home to Saskatoon. Monday I'm back at work and Tuesday I have a new programmer starting. I'm looking forward to getting back to some actual programming instead of just hearing people talk about it!

Thursday, July 27, 2006

OSCON 2006 Day 4

The keynotes were better this morning, especially Robert Lefkowitz (always entertaining) and Stephen O'Grady (of RedMonk).

First session of the day for me was How Database Engines Work by D. Richard Hipp who developed SQLite. I'd hoped for some ideas to improve Suneido's database, but it was pretty basic. Actually, one of the more interesting aspects of the session was that there were developers from MySQL and PostgreSQL in the audience who contributed. In fact, the only idea for an improvement came from a question from the audience. It sounds like SQLite's query optimization is fairly basic; Suneido seemed to compare quite well - it does everything that was talked about. It did make me think that I should probably document the query optimization that Suneido does.

Next I went to Building Successful Commercial Open Source Projects by Jorg Janke of Compiere - an open source ERP system. Like my company's vertical software, Compiere has integrated accounting. Also like our software, they support only a single version, requiring users to update if they want fixes and improvements. Unlike Compiere, we haven't open sourced our software (only Suneido, the underlying development tools). I had to laugh/groan when he recommended staying away from risky, regulated areas like payroll. (We have payroll, since our customers want it, but it is scary and a hassle.)

After lunch I went to Building Domain-specific Languages in Ruby by Neal Ford of Thoughtworks. I really admire Martin Fowler's work and he's with Thoughtworks so I decided to check out this session. It was ok, but nothing really new to me. One nice thing about using Ruby for DSL's is that it doesn't require parentheses around function call arguments. e.g. you can write "print 123" instead of having to write "print(123)". I wonder how hard it would be to allow this in Suneido? We normally do DSL's in Suneido as object constants or class constants but the drawback to this is that it doesn't allow variables or control structures. Ruby's approach of using executable expressions is more flexible in this respect.

My next choice was User Experience, Pain Free by Amy Hoy, a pretty good talk. One of her recommendations, which I hadn't heard before, was to avoid saying "this will let the user do ..." and instead say "this will help the user do ...". At first this seemed like nit-picking, but the more I thought about it, the more it made sense. When we add a feature that "lets" the user do something it's like we're doing them a favor, and whatever we give them they better appreciate it. On the other hand, if we say we're going to "help" them do something, then suddenly it's up to us to do a good job or else we've failed.

I was tempted to go to Amy's next talk on user interface design but I decided to aim for variety (it's the spice of life, after all!) and went to Concurrency Control in Relational Databases by Arjen Lentz of MySQL. The actual material didn't contain much new to me, but it was interesting to hear how different databases (MySQL, PostgreSQL, Oracle, etc.) handle concurrency. Again, I think Suneido compares quite well with the big guys. It always shocks me to hear that most databases default to an isolation level of read committed or repeatable read - levels 2 and 3 (out of 4). The bottom line is that transactions are only partly isolated (you only get some problems!). The reason given for this is performance. But isn't this premature optimization? Is it really too slow otherwise? Wouldn't it be better to default to the highest level and then, if it turns out to be too slow, consider "compromises"? Suneido is always level 4 - serializable, i.e. totally isolated. So far, for our uses, this isn't "too slow". After all, as the saying goes, if it doesn't have to be "correct", you can make it as fast as you want!
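To make the "levels 2 and 3 (out of 4)" point concrete, here's the standard ANSI SQL ladder as a little Python table (my own summary of the standard phenomena, not from the talk):

```python
# The four ANSI SQL isolation levels, lowest to highest, and the read
# anomalies each level still permits. Only SERIALIZABLE rules out all three.
ANOMALIES_ALLOWED = {
    "READ UNCOMMITTED": {"dirty read", "non-repeatable read", "phantom read"},
    "READ COMMITTED":   {"non-repeatable read", "phantom read"},
    "REPEATABLE READ":  {"phantom read"},
    "SERIALIZABLE":     set(),
}

for level, anomalies in ANOMALIES_ALLOWED.items():
    print(f"{level}: {sorted(anomalies) or 'fully isolated'}")
```

Defaulting to read committed or repeatable read means shipping with the middle rows of this table - the anomalies in the right-hand column are simply accepted in exchange for speed.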

Apple Rules

I expected to see a lot of laptops at OSCON. (Including the strange and rude practice of emailing and chatting on your laptop during sessions that you've paid a lot of money to go to.)

I also expected to see Linux running on most of them. After all, this is OSCON. I was a little embarrassed about running Windows on my laptop. (I dual boot with Linux, but Windows is my primary environment.)

What surprised me was the number of Apple Mac laptops. These were definitely the machine of choice with both attendees and speakers. Although there were lots of other brands around, I'd guess about half the machines were Macs. That's way more than their overall market share. Although Macs do run a version of unix, they also run a lot of proprietary software. Something must be attractive enough with these machines to overcome open source fans' resistance to non-open software.

I discovered the Apple store downtown in Portland so I got a chance to fondle the toys. I have to admit these are very sexy machines. I especially like the looks of the new black or white 13" Mac laptops. (I think the silver ones look too much like every other machine.) Sexy hardware, sexy gui - I want one! I also love the Mac Mini - such a contrast to big clunky desktop boxes.

Wednesday, July 26, 2006

OSCON 2006 Day 3

The day started off with a few keynotes from various people - nothing too exciting.

The first session I went to was about building an online game with Ruby on Rails (llor.nu - unroll spelled backwards). It was quite interesting and entertaining but I didn't learn too much.

The next session was on using Capistrano to deploy Rails applications. It was a good overview of a tool that I think we'll want to use on our Rails project.

After lunch was a session on a new way to build query interfaces. It wasn't anything earth shaking, but there were a few interesting ideas on how to give users more feedback as they fill in a search form and how to support refining a search incrementally.

Next was another presentation by Stuart Holloway on Streamlined. This was mentioned in his Ajax on Rails tutorial, but he went into more details in this session. This is brand new software, barely released, but it promises to make building simple database apps with Rails almost trivial.

For something different I went to a session on Xen - open source virtualization software for Linux, similar to VMware.

My last session of the day was on wxPython. I'm not that interested in Python, but I am interested in wxWidgets since it seems like the best bet for a cross platform gui for Suneido.

Tuesday, July 25, 2006

OSCON 2006 Day 2

This morning's tutorial was about Ajax on Rails by Stuart Holloway. He talked about the Prototype and Scriptaculous Ajax libraries and about Rails RJS templates. I haven't actually done any Ajax development - just read about it and used Ajax web sites. It's pretty cool stuff and with these tools it looks like it's getting a lot easier to do. And it didn't hurt that Stuart was an entertaining speaker.

For the afternoon I had signed up for a tutorial on the RT (Request Tracker) ticketing system. I recently bought their book although I haven't read it yet. My company has a homebrew ticketing system that works pretty well, but if nothing else, I figured I might get some ideas. One negative (for me) is that it's written in Perl which I'm not very familiar with. I wasn't as excited by this talk but it was still interesting. One of the things I was curious about was what techniques they used to make RT customizable and extensible but, unfortunately, I didn't get too many good ideas on this front.

There were a number of people speaking in the evening. I skipped most of them but snuck in and out for Kathy Sierra's talk on Passionate Users. I knew of her from the Head First series of books. If you haven't seen these books, they're worth checking out. She gave a talk that was entertaining and inspiring (if you care about users).

Monday, July 24, 2006

OSCON 2006 Day 1

Today was my first day at OSCON. This is the first time I've signed up for the tutorials at a conference. Monday and Tuesday are tutorials and then the actual conference sessions start on Wednesday.

This morning I went to a tutorial on Subversion API's. I had an idea that we might be able to use Subversion for Suneido's version control. Suneido currently has its own version control but it has a few limitations and there would be some advantages to using Subversion. The big hurdle, I realized during the session, was that Subversion assumes the working copy is stored in files in the file system. But Suneido stores the working copy in libraries in the database. One option would be to modify/extend Subversion to access working copies differently, but it doesn't sound like that would be easy. Another option would be to use the repository interface directly and ignore the existing client/working copy stuff. This would probably be the way to go, but still not easy.

In the afternoon I went to the Rails Guidebook tutorial by Dave Thomas (of Pragmatic Programmers fame) and Mike Clark. The big problem in this session was that so many people had their laptops plugged in that the breaker blew several times! Eventually they asked everyone to unplug. Most of the material I was already familiar with, but there was some newer Rails stuff and some explanations and background that were useful. It really got me thinking about implementing some of these things in Suneido. For example, ActiveRecord (or at least a simple version), migrations, scaffolding generation, rxml, etc.
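To give an idea of what a "simple version" of ActiveRecord might look like, here's a toy sketch in Python over sqlite3 (all the names here are mine, and the real ActiveRecord does far more - this just shows the core class-per-table, object-per-row idea):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT)")

class ActiveRecord:
    """Toy ActiveRecord: a class maps to a table, an instance maps to a row."""
    table = None  # subclasses name their table

    def __init__(self, **fields):
        self.fields = fields

    def save(self):
        # Build INSERT from the instance's fields and capture the new row id.
        cols = ", ".join(self.fields)
        marks = ", ".join("?" * len(self.fields))
        cur = db.execute(
            f"INSERT INTO {self.table} ({cols}) VALUES ({marks})",
            tuple(self.fields.values()))
        db.commit()
        self.fields["id"] = cur.lastrowid
        return self

    @classmethod
    def find(cls, id):
        # Fetch one row and rehydrate it as an instance.
        cur = db.execute(f"SELECT * FROM {cls.table} WHERE id = ?", (id,))
        row = cur.fetchone()
        names = [d[0] for d in cur.description]
        return cls(**dict(zip(names, row)))

class Book(ActiveRecord):
    table = "books"

b = Book(title="Godel Escher Bach").save()
print(Book.find(b.fields["id"]).fields["title"])  # -> Godel Escher Bach
```

The appeal is that the mapping is by convention - no per-table SQL to write - which is also what would make it pleasant to have in Suneido's libraries.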

For me, the best part of conferences is getting ideas. If I just wanted to learn some software I'd rather get a book and try it out. Of course, ideas are one thing; actually implementing them is a whole different story. Not that good ideas come easy - a great idea can be worth a lot. But unless you can implement them, they're not worth much!

PS. The other "downside" to my ideas is it tends to make my programmers groan and roll their eyes and say "oh no, not another epiphany!". This is, of course, because one of my ways to get ideas implemented is to get them to do it :-)

Wednesday, July 12, 2006

Burning CD's from Windows XP

Recently I saw a tip on how to burn CD's from Windows XP. I was surprised because I didn't realize Windows XP had this capability - I thought you had to use Nero or some other software.

The other strange thing was that the options they were talking about didn't show up on my desktop machine (although when I checked I could see them on my laptop).

The problem was that in the Properties of my drive, "Enable CD recording" wasn't turned on. Maybe because I installed the drive myself, although you'd think it would be enabled by default. Once I turned this on the options appeared.

http://www.microsoft.com/...

And you can even use Send To (the topic of my previous post).

I also found there's a Power Toy (although it's "unauthorized") to let you burn ISO images, which was actually what I wanted to do. (I was making Ubuntu 6 cd's.)

http://isorecorder.alexfeinman.com/isorecorder.htm

These new options require a lot fewer steps than using Nero, so I'm happy :-)

Adding to Send To menu on Windows

The Send To context menu option can be a handy feature. Occasionally it's nice to add to the choices e.g. a network folder you commonly copy files to.

But I can never remember how to add items. The trick is Start > Run > sendto

http://support.microsoft.com/default.aspx?scid=kb;en-us;310270

Monday, July 10, 2006

O'Reilly Open Source Convention - July, 24-28, 2006 - Portland, OR

I'm going to be in Portland for OSCON from July 19 to 29. If anyone is interested in getting together email me at mckinlay at axonsoft.com

Read more at conferences.oreillynet....

Friday, July 07, 2006

Stardock: ThinkDesk - Multiplicity

Here's a cool utility that lets you use multiple computers (I'm thinking my desktop and my laptop) with a single mouse and keyboard, but without "remote access" software (each computer still needs its own monitor).

Read more at www.stardock.com/produc...

Update: Although it worked well, this seemed to mess up my laptop keyboard - the special "FN" shift key was "stuck" on. Exiting from the software didn't help, nor did uninstalling it. I ended up doing a system restore to the day before and that solved the problem. It's possible it wasn't this program's fault but it's awfully suspicious.

Tuesday, June 20, 2006

Tabblo.beta photo site

Another photo site like Flickr, but with a little different approach. Includes on-line editing tools. So far completely free with no storage or bandwidth limits.

Read more at www.tabblo.com

Sunday, June 18, 2006

Test Live CD's with free VMPlayer

Here is a neat use of the free VMPlayer:

http://www.vmware.com/vmtn/appliances/directory/374


It lets you boot Live CD's (e.g. different versions of Linux) from within Windows - cool.

NOTE: I haven't tried it yet so I can't comment on how well it works.

Saturday, June 17, 2006

Google Tools

I've been gradually using more and more Google tools.

I started with Desktop Search. I find I use this as much to run programs as to search for files. My Start menu has grown so big that it's a lot faster and easier to type a few letters into the search box. I don't use much of the Desktop other than the search feature. The "sidebar" and "gadgets" are nifty but I find them more distracting than useful.

Next I switched my primary email to Gmail. I've talked about that previously. I'm still pretty happy with it. I like having my email available wherever I am. I use the POP access to backup my email to a copy of Thunderbird on my desktop. My wife recently got fed up with spam on her email account and I set her up with Gmail, forwarding her old email to it so she didn't have to change addresses.

I use Picasa for organizing, viewing, and tweaking photos. I have other more "powerful" tools, but I find I use Picasa a lot of the time. It makes posting a picture to my blog quick and easy.

Recently I've started to use Google Calendar and a customized Google homepage.

I'm still using my Palm calendar as well so I'd like to find a way to sync it with Google Calendar.

And of course, I'm using Blogger, another Google tool. And Google Maps and Google Earth and Google Toolbar for Firefox.

With more of my tools in the browser, Google Browser Sync is a very handy Firefox extension. It keeps my bookmarks, passwords, and cookies in sync between all the machines I use.

Wow, until I started writing this entry, I didn't realize just how many Google tools I was using!

I am using a few Yahoo tools as well - del.icio.us for my bookmark collection and I've played a bit with Flickr for photos.

Someday, I'd like to move to Linux and/or Mac as my primary desktop (instead of Windows) so I like using web tools that are available (almost) anywhere there's a browser. (I did have some trouble using Gmail from some older machines when travelling. I learnt to ask for a Windows XP machine at internet cafes.) It's too bad that some of the Google tools are Windows only, but they are porting some stuff to Linux, like Picasa and Earth.

Saturday, February 04, 2006

VMware Player

I've been playing with VMware's new free "player". It was announced on Dec. 12 but I just discovered it recently.

You need the full version to create virtual machines but there are free virtual machines available that other people have created. The two I've tried are the Browser Appliance and Ubuntu 5.10. (The Browser Appliance is actually a stripped down Ubuntu + Firefox.)

I already have my computer set up to dual boot into either Windows or Ubuntu Linux, but rebooting is a hassle. Now I can work on both Windows and Linux at the same time. This is really useful when you want to create software that runs on both platforms.

Warning: These are big downloads - the player is 35 mb, the Browser Appliance is 258 mb, and Ubuntu 5.10 is 520 mb.

Conclusion - good stuff, check it out.

PS. There's also a beta of the free VMware Server software.