Monday, December 31, 2007

There's More to Software Design

There's more to software design than just the "mechanical" aspects.

This article by Mark Hamburg about Lightroom's Goals should give you an idea of what I'm talking about. (And I think they've been fairly successful with this in Lightroom.)

I struggle with this in my company's vertical application, partly because it's hard to get people to see that issues like style, grace, and elegance are relevant to a business application. I think they are. I don't mean it has to be "pretty" or "artsy", but it should look good, flow well, and be smooth rather than awkward. Part of this is definitely the mechanical aspects, but part is more subtle. I'm reminded of "quality" in Zen and the Art of Motorcycle Maintenance.

Saturday, December 29, 2007

Still Learning

Even though I created Suneido and have used it pretty heavily for years, I still find myself learning better ways to apply it. (I think that's part of the reason I like programming.)

Suneido uses the open source Scintilla editor. ScintillaControl is the "wrapper" that interfaces Scintilla to Suneido's user interface framework. I needed to add a new method to it today:

    LineEndExtend()
        { .SendMessage(SCI.LINEENDEXTEND) }

I noticed I had a lot of these methods and I started wondering whether there wasn't a better alternative to adding so many simple repetitious methods. Ruby, and especially Rails, which I've been involved with on another project, make heavy use of "catching" calls to missing methods and implementing them.

I was able to replace all these simple methods with:

    Default(method)
        {
        f = method.Upper()
        if not SCI.Member?(f)
            throw "method not found: " $ method
        return .SendMessage(SCI[f])
        }

"Default" is Suneido's way to "catch" calls to missing methods.

Then I realized I could generalize it to handle methods with arguments:

    Default(@args)
        {
        f = args[0].Upper()
        if not SCI.Member?(f)
            throw "method not found: " $ args[0]
        args[0] = SCI[f]
        return .SendMessage(@args)
        }

"@args" is used to capture all the arguments and then pass them again. args[0] will be the method name.

This allowed me to remove a bunch of methods and I won't have to add any more in the future.

In addition, I noticed I had a lot of calls like .SendMessage(SCI.GETLINECOUNT) within the wrapper code. These could now be simplified to e.g. .GetLineCount()

This would all fall into the category of "refactoring" since I'm improving the code without changing its behavior. (Strictly speaking, the behavior has changed slightly, but not in a way that should affect existing code unless someone is doing something unusual.)

I guess you'd call this refactoring something like: "Replace explicit methods with catching missing method calls."

Tuesday, December 25, 2007

More on Scratch

A few comments on Scratch:

I'd really like to be able to browse the code for the projects on the web site. (Unless there's some way I missed.) You can download the projects and presumably see the code that way but that's a bunch more steps and not very good for exploring. Since there isn't much documentation, it would be helpful to quickly look at other people's code. It doesn't seem like this would be hard to add.

Apart from the convenience, I think this is important for deeper reasons. Programming, and thinking "like a programmer", are as much or more about reading code as about writing it. Seeing other people's results can give you ideas and inspire you, but seeing how they did it is a huge benefit too.

A suggestion for Scratch itself is to get rid of the traditional open/save file management. Alan Cooper in About Face 3 makes a good case for why open/save sucks. I never have to "open" or "save" in Lightroom. Gmail and Blogger save automatically. I don't have to pick/navigate to a directory in Google Docs. In a product for kids especially, you could avoid a bunch of issues by saving automatically to a standard location.

Finally, it's too bad Scratch is so rigid with respect to screen/window sizes. I can understand why they did it that way - it's a lot simpler than trying to use vector or higher resolution images and make things resizable. And for educational purposes maybe it doesn't matter. (Although I notice a number of people wanting to run it at 800x600.) Nonetheless, it was a bit disappointing when I ran my program full screen for the first time and got a jagged grainy image (as a result of simple resizing of the low resolution stage). Maybe I'm just spoiled by things like Mac OS X's resizable icons.

Monday, December 24, 2007

Something Fun for Christmas

I recently discovered Scratch, a programming system for kids, something like Logo.

I decided since it was Christmas I should do something fun and try it out. Here is my first "program":

Sunday, December 23, 2007

Ubuntu Networking Resolved

This really shouldn't have taken so long. It wasn't even that difficult. But when you only spend a few minutes on something and only every few days or weeks, what do you expect! And the issues with Parallels and Leopard didn't help.

As Larry suggested, the "expert" solution was to edit /etc/network/interfaces and change:
#iface eth0 inet dhcp
to:
iface eth0 inet dhcp
i.e. uncomment it.

As he also suggested, there is a way to do this from the GUI. When I went to System > Administration > Network I saw this:


[Notice the title bar says "Network Settings" although the menu option was just "Network". I always give my programmers heck for that kind of inconsistency.]

"Roaming mode" ??? I selected Wired Connection, clicked on properties, and changed it to:


[Yet more inconsistencies - I selected "Wired Connection" but I got "eth0".]

i.e. un-checked roaming mode and picked DHCP.

This has a similar effect to the "expert" method, adding a line to /etc/network/interfaces:
iface eth0 inet dhcp
I can see roaming mode might be a good choice for laptops, but it seems odd that it installed this way. Maybe something in Parallels makes Ubuntu think it doesn't have a regular wired connection. It would be nice if the network icon options at the top of the screen included an option to "save" your choice of wired networking (or just did it automatically).

Now when I reboot I still have a network connection. The tooltip on the network icon now says "Manual network configuration" which doesn't seem quite right to me - DHCP is pretty automatic. But I guess it's more "manual" than "roaming mode" (whatever that is).

I feel a little stupid at not having sorted this out myself right from the start but you can't win 'em all, I guess. Thanks Larry!

Leopard Falters

I spoke too soon about no problems with Leopard. I forgot one major part of my setup - my Epson R1800 wide format photo printer.

I went to print a photo for a Christmas present and found ... no printer. Installing Leopard had silently removed my Epson printer driver. (The CUPS + Gutenprint driver was still there, but I only use it to handle printing from Windows under Parallels.)

I can see a driver not being compatible with a new version of an operating system, but to just silently remove it seems pretty lame. Ideally it would warn you at the start of the install so you had a chance to abort the upgrade if you wanted. At the least it could notify you that it had removed your printer!

Luckily, I had waited long enough to upgrade that Epson had released new drivers. (They were released on Dec. 18 - if I had upgraded a week earlier I'd have been screwed.)

All's well that ends well - my printer is working again and I got my Christmas present done :-)

Thursday, December 20, 2007

A Successful Leap for Leopard

I upgraded my MacBook to Leopard a while ago but I waited to upgrade my main MacMini.

With Leopard updates for my main apps (Parallels and Lightroom) I decided it was time to take the plunge. The upgrade went smoothly, although it seemed pretty slow - several hours. I'm not sure why it takes that long.

As a safety precaution I used SuperDuper to back up each machine before upgrading.

So far I haven't had any major problems. The first time I started Parallels I got the following error:


I found a blog post which said the MacFUSE included in Parallels is old and suggested installing the latest MacFUSE. This seemed to do the trick, but:
  • the error message seems backwards - the operating system was new, MacFUSE was old
  • why didn't the Parallels update for Leopard include the required new version of MacFUSE?
  • why did I have to get the solution from some user instead of from Parallels? even if the user community discovered the solution, wouldn't it make sense for Parallels to post it? (in fairness, maybe they have, but I didn't find it if they did)
Note: This problem doesn't stop Parallels from starting, it just stops it from mounting the Windows C drive in OS X.

Now that Spotlight in Leopard lets you run the top application match by hitting enter, I was able to uninstall Google Desktop. (Nothing against Google Desktop, I just prefer to keep things simple if I can - see my previous post.)

Leopard also seems to have solved the issue of automatically mounting network drives (see my previous post), so I was able to remove the login script I had created to do this. That was nice because the script took a long time to run (why?) and slowed down logging in.

I do have a new complaint about OS X. The Finder doesn't have an option to show hidden files. I can understand hiding them by default - Windows does too - but at least Windows gives you a way to show them. This came up when I went to copy the .svn folder from a backup. It is possible to change this via the command line, but on top of not being user friendly, that also requires restarting the Finder. This seems like an obvious weak point. Is there someone at Apple who refuses to recognize that you might occasionally want to see these files?

I'm still having problems accessing my 4gb USB thumb drive from Parallels. At first I blamed this on the U3 software that came installed on it, but I removed that and reformatted, and I'm still having problems. It works fine on my Windows machine at work. My current guess is that Parallels doesn't quite handle 4gb USB drives. The strange part is that it works fine at first, but after a short time it hangs while copying, and Windows Explorer can no longer access it. My 1gb USB thumb drive continues to work fine.

Ok, back to Ubuntu on Parallels. I copied the virtual machine that I had created on my MacBook over to my MacMini and started it up. No display problem, but the same problem with the Parallels Tools cd image showing garbled file names. Strangely I can't find anybody else with this particular problem. While flailing a bit more I rebooted the VM and lo and behold the Parallels Tools cd image had the right file names. I installed them and restarted X windows. It appears to work! One of the most noticeable features is being able to move the mouse seamlessly between OS X and the VM. I followed the same process on the MacBook and it also worked (although I could have sworn I tried rebooting before). So I appear to be back in business with Ubuntu (albeit starting from scratch with a new VM).

All in all, a successful day!

Saturday, December 15, 2007

CouchDB

I found CouchDB referenced from one of the posts about Amazon's SimpleDB since they are both apparently written in Erlang.

It's interesting that people are exploring some alternatives to relational databases.

It's also interesting that people are implementing "real" products in alternative languages like Erlang.

I recently picked up Programming Erlang but I haven't read it yet.

One thing that caught my eye looking through the CouchDB web site was a brief note that they compact the database while it's running, by copying to a new database. Currently Suneido requires you to occasionally shut down the database in order to compact it. I had always thought about "on line" compaction in terms of doing it "in place", but that gets tricky due to updating indexes to point to new locations. But if you build a new copy you don't have that problem. You could copy the bulk of the database in a single read-only transaction (like the current on-line backup does) and then pause activity briefly to get any updates done during the copy, and then switch over to the new database. Hmmm... actually doesn't sound too bad. (famous last words!)
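Sketched out, the two-phase copy might look something like this (a toy model in C++ - nothing like Suneido's actual structures, just the shape of the idea):

#include <cstddef>
#include <cstdio>
#include <mutex>
#include <vector>

struct Record { bool deleted; int data; };

std::vector<Record> records; // the database "file", append-only
std::mutex write_lock;       // writers hold this briefly to append

std::vector<Record> compact()
{
    std::vector<Record> fresh;
    // phase 1: bulk copy the live records; in the real system this
    // would run inside a read-only transaction so updates can continue
    size_t copied = records.size();
    for (size_t i = 0; i < copied; ++i)
        if (! records[i].deleted)
            fresh.push_back(records[i]);
    // phase 2: pause updates briefly, pick up anything appended
    // during the copy, then switch over to the new file
    std::lock_guard<std::mutex> pause(write_lock);
    for (size_t i = copied; i < records.size(); ++i)
        if (! records[i].deleted)
            fresh.push_back(records[i]);
    return fresh;
}

int main()
{
    records = { { false, 1 }, { true, 2 }, { false, 3 } };
    printf("%d records after compacting\n", (int) compact().size());
}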

Friday, December 14, 2007

Amazon SimpleDB

Amazon has announced a new service - SimpleDB

We are pretty happy with our use of Amazon's S3 (Simple Storage Service)

I've been curious to try Amazon EC2 (Elastic Compute Cloud) but I haven't found a good application yet.

One of the big limitations with EC2 is that it's not well suited to running database servers. I've been waiting for them to improve support for this, but instead (or at least, first) we get SimpleDB.

I wonder if someone will make Rails work with SimpleDB as the database? How would the performance compare?

Thursday, December 13, 2007

Finally!

Finally some good progress on the ACE version of the Suneido server. It's actually working well enough to run a client IDE from it, a major milestone. The last couple of problems were minor mistakes of mine. ACE and the Boehm GC seem to be working together.

Of course, this is just the start, now comes the "fun" part - actually making my code thread safe.

Saturday, December 01, 2007

ACE + GC Progress

I spent a few more frustrating hours thrashing around trying to build and link with ACE statically.

Finally, I decided to start from scratch. Strangely "make clean" didn't clean up (as I discovered when a make after make clean didn't recompile!). (Note: Don't run make clean from the top level ACE_wrappers directory - it takes forever recursing into all the examples and tests.)

I rebuilt and ... it worked! Somehow I had still been getting left over shared library stuff. Another requirement is to #define ACE_AS_STATIC_LIBS before including the ACE headers.

Boy, that shouldn't have been so hard! But I can't really blame anyone but myself :-)

But ... now Suneido crashes right away on startup, which seemed more like a step backwards not forwards!

I created a small test program that used ACE and GC. It crashed the same way. Eventually, after another few hours of flailing I hit on the right combination. The key seems to be to initialize GC first, then ACE. But to achieve that, you have to prevent ACE from redefining "main" to do their startup. Here's my successful test program:

#define ACE_AS_STATIC_LIBS 1
#include "ace/ACE.h" // for ACE::init
#include "ace/Thread_Manager.h"

static ACE_THR_FUNC_RETURN thread_func(void* arg)
{
    // allocate from the garbage collected heap in multiple threads
    // (the loop header was garbled in the original post; 1000 is a placeholder)
    for (int i = 0; i < 1000; ++i)
        operator new(10000);
    return 0;
}

extern "C" { void GC_init(); }
#undef main // stop ACE from redefining main to do its own startup
int main(int argc, char** argv)
{
    GC_init();   // initialize the garbage collector first ...
    ACE::init(); // ... then ACE
    ACE_Thread_Manager::instance()->spawn_n(2, thread_func);
    ACE_Thread_Manager::instance()->wait();
}

At this point I'm quitting for the day. It should be easy to incorporate what I've learned into Suneido, but then I'll just run into the next problem. I'd rather end the day on a positive note!

Friday, November 30, 2007

Two Steps Forward, One Step Back

Or should that be One Step Forward, Two Steps back?

Yesterday I was back to working on the multi-threaded ACE Suneido server. It started off really well. I got the server actually running, could connect and disconnect, and make simple requests and get responses. A very good milestone.

The obvious next step is to try to start up a Suneido client IDE using the new server. This is a big jump because even to start up requires a large number of requests of different kinds. I didn't really expect it to work, but programmers are eternal optimists :-)

I got a variety of weird errors and memory faults - not surprising. But amongst these errors was one that said something like "GC collecting from unknown thread". Oh yeah, I knew this was going to be an issue but I'd pushed it to the back of my mind. The garbage collector needs to know about all the threads in order to take their stacks and registers etc. into account. The way the Boehm GC normally does this is by requiring you to call their create thread function which is a wrapper around the OS one.
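For reference, the usual pattern looks something like this (a sketch based on my reading of the gc headers - details vary between versions):

#include <windows.h>
#include "gc.h"

static DWORD WINAPI worker(LPVOID arg)
{
    // allocations in this thread are safe because the collector
    // was told about the thread when it was created
    GC_malloc(100);
    return 0;
}

int main()
{
    GC_INIT();
    DWORD id;
    // GC_CreateThread takes the same arguments as CreateThread
    HANDLE h = GC_CreateThread(NULL, 0, worker, NULL, 0, &id);
    WaitForSingleObject(h, INFINITE);
    return 0;
}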

The problem is, ACE is creating the threads. I found where ACE calls create thread and thankfully there was a single point I could modify to call the GC version. But, I was using ACE via a DLL which means it can't call functions in the main program (where the GC one is).

The obvious solution is to not use a DLL, to statically link in the ACE code. Sounds easy. I even found the option "static_libs=1" that would build static libraries. But it doesn't work. It builds a static library alright, but when I try to link it into Suneido I get a bunch of "multiple definitions" and "undefined references". Suspiciously, many of the names were "_imp_..." which seems a lot like the way DLL's work. My guess would be that "static_libs=1" isn't working fully, which isn't too surprising given that the "normal" usage is with shared libraries (DLL). In software, "taking the path less traveled" is often a recipe for frustration.

I started digging into the maze of headers, configs, makefiles, and ifdefs but I ran out of time. Presumably it's solvable. You can see why people like to use things like .Net or Java where at least some of these low level issues are handled for you.

At the same time as I was working on this, I downloaded Ubuntu 7.10 and created a new Parallels VM (on my MacBook while I was working on my main machine). I used the alternate cd with the text based installer as recommended by other people. It went quite smoothly (except for crashing OS X the first time I started the install), and no display problems. But when I tried to install the Parallels Tools, the disk image appeared "corrupted" - only a single file and its name was random garbage characters. I tried rebooting the VM, restarting Parallels, and rebooting the MacBook but it didn't help. I searched on the web but didn't find any references to this problem. I have upgraded my MacBook to Leopard (the new version of OS X) so the problem may be related to that. When I get time I'll try running this VM on my Mac Mini which hasn't been upgraded to Leopard yet.

Thursday, November 29, 2007

Still Problems with Ubuntu 7.10 on Parallels on Mac

Larry has been patiently trying to help me with the network problem on my Ubuntu virtual machine (see the comments on Ubuntu on Parallels on Mac). He may even have found the problem.

But when I applied the fix and rebooted I was back to the problems from More Fun With Ubuntu on Parallels - unable to boot because the X display won't start. I did some more thrashing around and some more web searching. Lots of people seem to have this problem and there are various proposed solutions. I wonder if these "solutions" are really solutions - maybe it just happened to start working, as it does for me occasionally. Frustratingly, lots of people also appear not to have this problem - it works fine for them.

This is on a Mac (mini) so you can't blame non-standard hardware. Ubuntu is one of the most common versions of Linux (if not the most common). I haven't loaded up OS X with a lot of junk software. So why am I still faced with these frustrating issues? I don't want to have to be an X Windows expert and mess around with xorg.conf. Running in a virtual machine adds some complexity, but it also reduces complexity, since the virtual machine is more "standard" than real machines.

I don't have a lot installed in my Ubuntu so I am going to try a fresh install. At least with VM's this doesn't mean I have to wipe out my previous one. It would be nice if Parallels had a prebuilt Ubuntu 7.10 VM but they only have 7.04. I could go back to 7.04 but sooner or later I'll want to update.

Some of my recurring frustrations are, no doubt, due to my staying too close to the "bleeding edge". But, (a) I want to keep up with the latest - that's part of my business, and (b) not doing updates has its own problems with incompatibility, security, etc. and leaves you facing even scarier "big" updates, albeit not so often.

Groovy or JRuby

http://martinfowler.com/bliki/GroovyOrJRuby.html

Tuesday, November 27, 2007

Slow Progress on Multi-Threading Suneido

Last time I worked on this I had just reached the point where I could successfully compile and link. I resisted the temptation to try running it because I figured there were bound to be problems.

Sure enough, I run it ... and it crashes with a memory fault. No big surprise. I put in a few prints to see where it's getting to. Hmmm ... it's not even getting to my main!?

I download and install MinGW GDB and use it to find it's crashing in the garbage collector code. I am redefining operator new to use the garbage collector, so presumably something is calling operator new during static initialization. I comment out my redefinition and it works. I make a tiny test program:

#include "include/gc.h"

void* stat = GC_malloc(10);

int main(int argc, char** argv)
{
return 0;
}

It crashes. But this works in the current version of Suneido. It must be thread related. Yup, my test program runs fine without threads. Now what?

I search the web to see if this is a known problem but I don't find anything.

I'm still using gc-6.5 and the notes for the latest gc-7.0 mention improvements to win32 threads.

So I download gc-7.0. The readme says MinGW is not well tested/supported - ugh. For gc-6.5 I had ended up writing my own makefile but I'd prefer to avoid that. The recommended approach seems to be the standard configure & make, so I try this with MinGW MSys.

configure seems to succeed, at least with no obvious errors, but make fails with a bunch of "undefined references". It appears to be trying to make a dll, which I don't want - I want a static library. Eventually I hit on configure --enable-shared=0, which avoids the dll stuff but still gives a bunch of "undefined references". This time they all appear to be from win32_threads.c. For some reason this isn't getting included in the build. I uncomment am__objects_3 = win32_threads.lo in the generated Makefile to fix this. That's probably not the correct solution but it does the trick and I finally get the build to succeed. gctest runs successfully, although it seems slower and in the output the heap is twice as big as with gc-6.5 - not good, but I'll worry about it later!

Thankfully this effort wasn't wasted and my test program runs successfully. And Suneido now manages to get to main! But then it fails with ACE errors saying WSAStartup isn't initialized. This is easily fixed by adding ACE::init, but it's strange because I didn't need it in my previous ACE test program.

After most of a day's work I'm finally back to where I can start debugging my own code! It's great to be able to leverage other people's work (like the Boehm GC and ACE) but it can be extremely frustrating when they don't work and you don't have a clue what's going on. Even the standard configure & make has the same problem. If it works it's great, but if it doesn't you're faced with incomprehensible makefiles.

Sunday, November 25, 2007

Positive Feedback for a Change!

It seems like all I do is complain about my frustrations with computers so I thought I should post a positive comment for a change.

I went to check what version of g++ I had on my Ubuntu on Parallels. Here's what I got:

andrew@MacMini-Ubuntu:~$ g++ --version
The program 'g++' can be found in the following packages:
* g++
* pentium-builder
Try: sudo apt-get install <selected>
bash: g++: command not found

This is a vast improvement over just "command not found". Thumbs up to Ubuntu (or Linux or wherever this originated).

Another Mac Printer Annoyance

Just when I thought I had my printer problems figured out, I have a new one.

If I boot up with the printer turned off, i.e. turn it on after OS X has booted, then it won't work. Not only does it not work, it hangs OS X for several minutes. The first few times I thought OS X had crashed, but if I'm patient enough it comes back. It's especially frustrating when I inadvertently leave Lightroom in the Print module, because then it hangs when I run it. Or if I forget and switch to the Print module.

It might not be so bad if I could put the printer on the power bar with the Mac so it would get turned on at the same time. But you're not supposed to shut off the power to the printer without turning it off first, and I'm sure I'd forget.

It's strange (as usual) because the printer is connected to the Mac with USB - which should (and did before) handle turning on later. Maybe it's related to the workaround I had to do to access the printer from Parallels, since that is network related. (I'm thinking network because that's the only thing I can think of that would hang for several minutes.) I guess I could try disabling or removing the extra printer I have set up for that, but I'm not sure I'm in the mood for it. And I'd probably just break something else.

It seems like a lot of stuff these days assumes you're going to leave it turned on all the time. But that's not great for energy efficiency.

Saturday, November 24, 2007

More on Multi-Threading Suneido

I am continuing to intermittently work on making a multi-threaded Suneido server (that will also run on Linux as well as Windows).

So far, I have stripped out all the Windows dependencies and just about have the socket stuff converted to ACE. It should not take too much more to get this working - but only single threaded.

In a sense, this is the easy part. The hard part is to add the required locking etc. to allow it to run safely without the multiple threads interfering with each other.

There are two sides to this - the database and the language run time (virtual machine).

On the database side, the data itself should not be a problem because it is immutable (never updated). When records are updated the new version is added to the end of the database file. (This is the main reason the database needs to be "compacted" periodically.) The primary reason for this is to support the multi-version optimistic database transactions but it also ends up being nice for multi-threading.
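In miniature, the append-only scheme looks like this (a toy model, not Suneido's actual file format):

#include <cstddef>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

std::vector<std::string> file; // the database "file": only ever appended to
std::map<int, size_t> index;   // maps each key to the offset of its newest version

void update(int key, const std::string& data)
{
    file.push_back(data);         // write the new version at the end
    index[key] = file.size() - 1; // repoint the index; the old version stays behind
}

int main()
{
    update(1, "original");
    update(1, "revised"); // an "update" never overwrites anything
    printf("%s\n", file[index[1]].c_str()); // prints: revised
}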

The indexes are the main issue in the database. The easiest solution is probably locking at a fairly coarse granularity - entire indexes. There are schemes for locking individual index nodes, but this is tricky. It should be easy to use multiple readers OR single writer locking, as in the sketch below. The downside is that if there is a lot of updating it will end up being single-threaded again.
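Something like this, using C++'s std::shared_mutex for brevity (ACE's ACE_RW_Thread_Mutex serves the same purpose); the Index class here is just a stand-in, not Suneido's actual code:

#include <map>
#include <mutex>
#include <shared_mutex>
#include <string>

class Index
{
public:
    long find(const std::string& key) const
    {
        // any number of readers can hold the lock at once
        std::shared_lock<std::shared_mutex> reading(lock);
        std::map<std::string, long>::const_iterator it = tree.find(key);
        return it == tree.end() ? -1 : it->second;
    }
    void insert(const std::string& key, long offset)
    {
        // a writer waits for readers to drain, then excludes everyone
        std::unique_lock<std::shared_mutex> writing(lock);
        tree[key] = offset;
    }
private:
    mutable std::shared_mutex lock;
    std::map<std::string, long> tree; // key -> record offset in the database file
};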

Ideally, I would like to support multiple readers concurrently with updating (but still only one writer at a time) similar to how multi-version optimistic transactions allow multiple readers to operate independently of updates. But I have not figured out any "easy" way to make the indexes similarly multi-version so readers are not blocked by updates.

This may not be critical because I am pretty sure reads are far more common than updates. I should really measure this instead of guessing!

The other side is the language virtual machine. This should not be too bad because there is not much shared mutable (updatable) data. The main shared data structure is the table of global definitions loaded from libraries. The only time this is modified is when a new definition needs to be loaded from a library. At one time I thought I could use the double checked locking pattern (DCLP) to avoid synchronizing every access, but DCLP has been found to be fatally flawed. In theory, with 32 bit values and an idempotent function it is still workable, but given the history it seems risky. Another way to avoid the synchronization overhead would be for each thread to have its own globals table "cache" and to load this from a shared synchronized table.
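The last idea would look roughly like this (a sketch only - load_from_library and the other names are made up for illustration, not Suneido's actual code):

#include <mutex>
#include <string>
#include <unordered_map>

struct Value { /* a compiled global definition */ };

static Value* load_from_library(const std::string& name)
{
    return new Value; // stand-in for actually compiling the definition
}

static std::unordered_map<std::string, Value*> shared_globals;
static std::mutex shared_globals_lock;

Value* global(const std::string& name)
{
    // each thread checks its own cache first - no locking on the fast path;
    // this is safe because definitions are immutable once loaded
    thread_local std::unordered_map<std::string, Value*> cache;
    std::unordered_map<std::string, Value*>::iterator it = cache.find(name);
    if (it != cache.end())
        return it->second;
    // cache miss - fall back to the shared, synchronized table
    std::lock_guard<std::mutex> locking(shared_globals_lock);
    Value*& v = shared_globals[name];
    if (v == nullptr)
        v = load_from_library(name);
    return cache[name] = v;
}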

I am sure there are many other fun issues lurking in this project. I am still very paranoid about synchronization issues that do not show up until after deploying to hundreds of customers. My first line of defense will be to try to keep the locking simple so I have a fair chance of convincing myself logically that it is "correct". (Although judging by DCLP, even very smart people can fail to catch concurrency flaws in even simple code.) The next line of defense will be some serious stress testing, probably on something like a quad core machine to increase the chances of conflicts.

Thursday, November 22, 2007

The Chumby Has Landed

My Chumby finally arrived. I was happy to see natural cloth packaging instead of yet another frustrating bubble pack. Of course, the first thing it wanted to do after I set it up was to download a bunch of updates. [aside: Automatic updates seem like a great idea, and they would be if they were unobtrusive. But every time I try to do anything on my computers something wants to do an update and disrupt me while it does it. It's especially bad on machines that I don't leave turned on and don't necessarily use every day.]

Here's what I have currently playing on my Chumby:



I'd like to make my own widgets but it looks like that requires using Flash which I haven't done before. For example, the Chumby would make a great (although expensive) status monitor for our automated tests.

Monday, November 19, 2007

Amazon's Kindle Released

Amazon has released its Kindle electronic book reader

And once more, it's US only :-( That may be partly because it uses EV-DO cell phone technology to connect, although you'd think it would make sense to support other methods like regular WiFi. It does have a USB connection but I'm not sure if that means you can load books from your computer.

They have monthly fees for some of the services and that's a little scary. But they don't charge you (directly) for the EV-DO so that's nice.

We'll have to wait and see how much Amazon locks the device to their services and how much they open it up. If they were planning to make their money off the service then you'd expect a lower price (like cell phones). If they expect to sell it for full price AND lock you into their service that won't be too attractive.

Sunday, November 18, 2007

Web Services

A few years ago when "web services" were starting to get talked about, I read Web Services Essentials, which covers XML-RPC, SOAP, UDDI, and WSDL. I ended up writing a simple XmlRpc implementation for Suneido and we use it to distribute a particular service for our vertical application.

More recently I just finished reading RESTful Web Services (recommended). I realize now that for our application there was no reason to use XmlRpc - a simple RESTful web service (i.e. just using GET and POST) would have worked just as well and been much simpler. On the positive side, at least I didn't try to implement SOAP, UDDI, and WSDL!

REST stands for Representational State Transfer which doesn't tell you a whole lot. Basically it is a resource oriented style using basic HTTP GET, POST, PUT, and DELETE.

It seems strange that Web Services Essentials didn't even mention the option of a REST style web service. It's only recently that REST has gained popularity, but you'd think they'd at least mention that you can just use GET and POST. Maybe they assumed that you already knew about that option, but it was all new to me at the time so I assumed the book covered the options.

Of course, if you want to communicate with an existing service you have to use whatever they supply (e.g. SOAP). But for our application we controlled both the server and the client, so we were free to use whatever we wanted.

David Heinemeier Hansson, the originator of the Rails framework for Ruby, is a fan of REST and the latest version of Rails has support for this style.

After reading RESTful Web Services I've been working on improving Suneido's HttpServer to make it easier to implement REST services. We need a new service for our application and it seems like a good opportunity to try a different style.

Bug Labs

I recently listened to a podcast with one of the people from Bug Labs. I like gadgets and this looks like a pretty nifty one.

When I was a kid I did a lot of hardware hacking, building computers and other gadgets. But I haven't done any hardware for a long time. This looks like something where you could build a "gadget" without actually getting your hands dirty.

I have an idea for a hand held gadget I'd like to make with a GPS. Most hand held GPS units aren't programmable, so they won't work. Another option would be to use a cellphone with a GPS, but from what I've heard software development for cellphones isn't easy. And I'm not sure I want to be tied to a particular cellphone, especially if I wanted to resell these gadgets.

A "Bug" seems like a great way to develop a prototype. All I'd need for the gadget I'm thinking of would be the base unit plus the GPS module. They even plan to offer a service to convert your Bug "prototype" to a more packaged gadget that they will manufacture.

Unfortunately, it's not available yet, but the web site says 4th quarter of this year so it shouldn't be too long. And they haven't published any prices yet either.

Saturday, November 17, 2007

Chumby's Coming

I found someone in the US to order my Chumby for me and it is in the mail on its way to me (hopefully the post office hasn't lost it - it's taking a while).

Dave Winer got his and is pretty positive.

Thursday, November 15, 2007

More Fun With Ubuntu on Parallels

I started my Ubuntu virtual machine to get at my networking config file, to continue looking into the network not starting automatically.

A notice came up about upgrading from 7.04 to 7.10. Without really thinking about it (stupid) I went ahead and ran the upgrade. It worked fine until it finished and restarted - and then the X display couldn't start. So I did what I should have done before I started the upgrade and googled for problems with 7.10 on Parallels. Sure enough, lots of other people were having the same problem. There were various suggestions of how to work around the problem but there didn't seem to be a consensus. Parallels themselves seem to be avoiding the issue. I followed one of the suggestions and booted in recovery mode, which took me to a terminal window, but I wasn't sure where to go from there. Then I tried booting the older kernel, which seemed to start ok. Then I tried re-installing the Parallels tools (another suggestion). It ended up with the same problem of not being able to restart the display. I stopped and restarted the machine and let it boot normally and it worked!? I have no idea what that means. Is it fixed? Which part of my thrashing around was helpful? Or is the problem intermittent and I just happened to get lucky?

I'd had enough for one day so I just suspended the machine and left it.

By the way, it still has the same problem of not starting the networking automatically. I guess it was too much to hope that problem would go away by itself.

Wednesday, November 14, 2007

Groovy Style "builders" in Suneido

One of the neat features in Groovy is its "builders". For example, using an XML markup builder:
builder.invoices
{
    invoice(number: 1234)
    {
        item(type: 'part')
        { product(name: 'widget', cost: 100) }
    }
}
which produces:
<invoices>
    <invoice number="1234">
        <item type="part">
            <product name="widget" cost="100" />
        </item>
    </invoice>
</invoices>
The builder doesn't actually have methods for "invoice", "part", etc. Instead, dynamic language "tricks" are used to catch "unknown" method calls.

It made me wonder how close I could come to this with Suneido. Here's what I came up with:
builder.invoices()
{
    .invoice(number: 1234)
    {
        .item(type: 'part')
        { .product(name: 'widget', cost: 100) }
    }
}
The main difference is that Suneido requires '.' on method calls. Otherwise it's pretty much identical. One thing the Groovy XML builder doesn't handle is tag contents containing a mixture of text and tags. I handled this in Suneido with a special '_' method. For example:
builder.p()
{
    ._('start ')
    .b() { 'middle' }
    ._(' end')
}
which produces:
<p>start <b>middle</b> end</p>
Here is the entire implementation of a Suneido XmlBuilder:
class
{
New()
    { .s = '' }
Default(@args)
    {
    .s $= '<' $ args[0]
    for m in args.Members(named:)
        if m isnt #block
            .s $= ' ' $ m $ '="' $ args[m] $ '"'
    if args.Member?(#block)
        {
        .s $= '>'
        result = .Eval2(args.block)
        if result.Size() is 1
            .s $= result[0]
        .s $= '</' $ args[0] $ '>'
        }
    else
        .s $= ' />'
    return
    }
_(s)
    { .s $= s; return }
ToString()
    { return .s }
}
One minor problem with the Suneido version is that certain methods are "built in" (e.g. Size) and therefore can't be used in the builder. [This is a result of trying to make class instances behave the same as generic containers. I'm starting to think this was a mistake, but I'm not sure how to go about changing something so fundamental.]

I had to make a slight fix to Suneido to make this work. The approach I used was to use instance.Eval(function) to evaluate the blocks in the "context" of the builder. But I found that Eval didn't work with blocks (only functions). Luckily it was easy to fix. (Actually, I'm using Eval2, which returns the result inside an object so you can determine whether there was a return value or not.)

Tuesday, November 13, 2007

PyPy, LLVM, and Parrot

I recently came across some references to PyPy - a Python version/compiler (and more) implemented in Python. Interesting stuff, but a little hard to follow.

A reference from there led me to LLVM - Low Level Virtual Machine, which is actually a compiler (including JIT) as well as a virtual machine. Check out the tutorial on implementing a language with LLVM - very slick. They discuss garbage collection (including using the Boehm collector that Suneido uses) but this area appears to still be a work in progress.

Another project along these lines is Parrot - the new Perl virtual machine.

Monday, October 29, 2007

Groovy First Impressions

I recently picked up a copy of Groovy in Action. Groovy is a dynamic language based on Java infrastructure - it compiles to Java byte codes, runs on Java virtual machines, and has full access to Java libraries. You can call Java from Groovy and vice versa.

I just started reading the book, but the Groovy language looks interesting. There are some close similarities with Suneido, and of course differences. For example, this could be either Suneido or Groovy:

list = [1, 2, 3]
map = [name: "Fred", age: 24]


although Suneido would also allow this but Groovy wouldn't:

listmap = [1, 2, 3, name: "Fred", age: 24]


This is because Suneido uses a combined list/map data type whereas Groovy has separate list and map types.

BTW I like this map notation better than Ruby's, which uses a lot more punctuation:

map = { :name => "fred", :age => 24 }

Groovy's closures are also quite similar to Suneido's blocks:

Groovy: { arg -> ... }
Suneido: { |arg| ... }

I borrowed Suneido's block syntax from Smalltalk. The extra '|' before the parameters makes it easier to parse.

One feature I like is that you can write:

list.each { key, value -> ... }

whereas in Suneido you'd need parentheses after the "each" function call:

list.each() { |key, value| ... }

This might not be too hard to add to Suneido since it's currently a syntax error (not ambiguous). In both Groovy and Suneido this is equivalent to list.each({ ... }).

In addition to =~ for regular expression matching (the same as Suneido), Groovy also has ==~ which must match the entire string, i.e. the same as "^(...)$". I'm tempted to add this to Suneido because it's a common mistake to omit the '^' and '$' and/or the parentheses.

That's about as far as I've gotten. I'm not sure if I'll ever use Groovy but it's always interesting to look at other languages. And since the Java platform is quite ubiquitous, that makes Groovy more widely applicable. Groovy also has a web framework called Grails that is similar to Ruby on Rails.

Friday, October 26, 2007

Freebase and Cinespin

I recently listened to a podcast with one of the developers of Freebase. I've been meaning to have a look at Freebase for a while. One of the people behind it is Danny Hillis of Thinking Machines.

For an interesting application based on Freebase data, have a look at Cinespin.

Grazr - Reading List Management and Tools

An interesting tool I hadn't run into before:

Grazr - Reading List Management and Tools

Thursday, October 25, 2007

Amazing Open Source

The variety and quality of open source software even in specialized areas is pretty amazing.

Art Gallery - Art of Illusion

Monday, October 22, 2007

Use at Least Two Compilers

I recently made some minor changes to Suneido. I compiled with MinGW and it worked fine.

A bit later I compiled with Visual C++ 7 (2003) since that produces the smallest, fastest code at the moment. It wouldn't run at all - crashed immediately on startup!?

I recompiled the MinGW version - it still worked fine.

I reverted my changes and the VC7 version worked again so it definitely appeared to be my changes.

I checked my changes several times but they looked fine.

Finally I found the problem. While making my changes I had done some minor refactoring. (I know, you shouldn't mix the two, but it seemed minor.) I had found something like:
int fn1()
{
    static const int list = symnum("list");
    ...
}

int fn2()
{
    static const int list = symnum("list");
    ...
}
So I eliminated the duplication by moving the constant outside the functions:
static const int list = symnum("list");

int fn1()
{
    ...
}

int fn2()
{
    ...
}
The problem is that this changes when symnum is called. The old way, it wasn't called until the first time the functions were used. The new way, it is called during startup, probably before something else that it needs is initialized. The order of static initialization across translation units is undefined in C++ and obviously varies between compilers. Sure enough, putting this back fixed the problem.
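In hindsight, a third option would have kept both the de-duplication and the lazy initialization: wrap the static in a function (a sketch, assuming symnum returns an int):

int symnum(const char* s); // Suneido's existing function, defined elsewhere

// construct-on-first-use: the local static isn't initialized until the
// first call, so symnum still runs lazily instead of at startup
// (though pre-C++11 compilers don't make this initialization thread safe)
static int list_symnum()
{
    static const int list = symnum("list");
    return list;
}

fn1 and fn2 would then call list_symnum() instead of using the static directly.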

Lessons:
  • build with multiple compilers to catch this kind of dependence on undefined behavior
  • don't mix changes and refactoring, do them one at a time, test in between
If I hadn't caught this now it could have suddenly and mysteriously broken at some time in the future (when something affected the order of initialization) and taken me ten times as long to find because I would have had no idea it was related to this particular change.

Saturday, October 20, 2007

From Ruby on Rails to PHP

7 reasons I switched back to PHP after 2 years on Rails - O'Reilly Ruby

A lot of the points he makes I find familiar from Suneido. Most of the time you can reproduce the needed parts of some other whiz-bang system in less time than it takes to learn the other system. It won't do everything that other system does, but if it does what you need, who cares?

We've run into some of the same frustrations he mentions with our Rails project (eTrux). We didn't have the problem of trying to deal with an existing database (we were starting from scratch), but it always seems like there's something you want to do that isn't on the Rails easy path. But despite the frustrations, I think Rails was a good choice for this project.

My Chumby! Or Maybe Not

I finally received my invitation to get one of the "insider" early release Chumby's.

I go to order, pick the color, get to the checkout, and find that it's United States only. Argh!

I realize Free Trade doesn't really mean free trade, but I find the US export regulations pretty frustrating and ridiculous. This thing is all open source software and I can't imagine the hardware is anything special - what are they protecting? I probably shouldn't put all the blame on the US either. Canada has its own safety code and approval process that devices have to pass. To an ignorant consumer it seems like the US and Canada are similar enough that they could agree on a standard approval process, but I'm sure that's unlikely.

I'm not blaming the companies, I'm sure it's a hassle for them too. And US companies probably (rightly) see Canada as a minor market that's not worth much effort, at least initially.

Sigh. Maybe I can find someone in the US to order my Chumby for me.

(Don't get the wrong idea, it's not like I'm "desperate" for a Chumby. I'm sure I could live without it :-) But us geeks like our toys, especially if it's a new toy that not everyone has!)

[I see from my earlier post that I already knew it was US only. In the excitement of my invitation I obviously forgot that. Either that, or I'm just getting old and forgetful!]

Friday, October 19, 2007

Techdirt: Is It Copyright Infringement To Skip Commercials?

Techdirt: Is It Copyright Infringement To Skip Commercials?

Using S3 for Customer Backups

For some time we've been using S3 to back up our customers' systems.

Originally we set them up to FTP the data back to our office. This was handy for us because if we wanted to look at their data we had it in-house. But as the number of clients and the size of their data grew we ran out of bandwidth. (Thankfully we did these transfers at night so it didn't slow down our daytime internet access.)

The other problem was the sheer size of the data and the issue of how to guard against disk failure. Although we don't promote this to our clients as a "backup" service, many of them still end up falling back on us when they discover their own backups weren't done or were no good. And people often don't discover problems till days or weeks later, so they need older copies, not just the most recent.

We decided to give S3 a try and so far it has worked out well. We're up to about 8 gb of data transfer per night and we currently have about 240 gb of storage. This is costing us about $50 per month - a bargain as far as I'm concerned.

We are currently keeping the last 8 days, last 5 weeks, last 13 months, and every year, or something like 25 - 30 copies per customer. On top of this redundancy, Amazon says they store multiple copies of each file.

A minor downside is that when we want to look at someone's data we have to download it, but that's not a big deal. And it's still a lot easier to download from S3 than to download directly from the customer.

There is a potential concern with storing data with a third party, but we encrypt the files and Amazon has decent security on top of that, so it seems ok. It doesn't seem any worse than other hosting situations.

Overall, we're pretty happy with this setup.

G.ho.st - Global Hosted Operating System

G.ho.st – Home Page

Apparently this is implemented using Amazon S3 and EC2. Interesting idea, but pretty annoying how they maximize my browser window and hide the address and tool bars.

Spiceworks - Free Network Monitoring Software for Network Management

Spiceworks - Free Network Monitoring Software for Network Management

Thursday, October 11, 2007

Telnet now optional in Vista

Today I went to use telnet to test some server code I was working on. Except there's no telnet!?

A quick web search tells me it's not "turned on" by default, you have to go to:

Control Panel > Programs and Features > Turn Windows features on or off

Microsoft's justification for this is to decrease the "footprint" of Windows and to increase security. Neither really makes much sense. The telnet client is a tiny program and having it in a directory on the path versus some other "turned off" directory doesn't seem to change the "footprint" much. The code to turn it on and off is probably bigger than telnet itself. And I'm not sure how a telnet client is much of a security risk. Any attack that gets far enough into your system to use your telnet client is probably not going to be stopped by it being "turned off", especially since it could easily "turn it on" from the command line. (I can see leaving the telnet service turned off, but that's not what I'm talking about.)

I'm really tempted to do some Microsoft bashing at this point, but I'll restrain myself.

Wednesday, October 10, 2007

Ask 37signals: Is it really the number of features that matter? - (37signals)

Ask 37signals: Is it really the number of features that matter?

The idea of "editing" is interesting. It's a role I play with our application software, both in deciding which features to add, and in reviewing what we've done to try to ensure it's usable.

Of course, I still struggle with trying to pursue simplicity. Sales people, customers, customer support - they all think more features is the answer.

Monday, October 08, 2007

Chumby Coming Soon, But Not Here

Soon you'll be able to buy a Chumby, but only in the United States, not in Canada :-(

I think one of the things that helped the internet spread so fast was that it wasn't restricted like this. Although when I went looking on the internet for a tv show that I missed I found Fox limits access to shows to just the US. And Amazon's MP3 sales are US only. So the internet is not necessarily free of restrictions either.

Sunday, October 07, 2007

Ubuntu on Parallels on Mac

I just installed Ubuntu on my Mac under Parallels using these instructions:

How to install Ubuntu 7.04 in OS X using Parallels Desktop 3.0

It seemed to go smoothly. Of course, Ubuntu immediately wanted to download a pile of updates, but that worked ok.

The only problem is that you seem to have to manually turn on the networking within Ubuntu every time you boot it up. The instructions mentioned this, but I thought it was just during installation. There may be a fix, I haven't looked yet. I don't think it's a big deal because normally I just suspend the virtual machines rather than shutdown and restart them. Hmmm... that's assuming the network stays turned on after being suspended, I haven't actually tested that.

Parallels has a pre-installed Ubuntu virtual machine, but the downloads were very slow so I gave up. But I should be able to take the virtual machine I've now created and copy it from my Mac Mini to my Mac Book.

Aside from curiosity, one of my motivations for installing Ubuntu is to get back to working on porting Suneido to Linux. I should be able to do a lot of that natively under OS X but I assume I'll still want to work under Linux as well.

Friday, October 05, 2007

MinGW GCC Version 4

I see MinGW (Minimalist GNU for Windows) now has a preview of GCC 4. (It looks like it came out in August, but I didn't hear about it.) I've been waiting for this for a while.

I haven't looked at it yet, but I keep hoping that MinGW GCC will close the gap with commercial compilers so I can use it for Suneido.

Thursday, October 04, 2007

The Big Rewrite

ChadFowler.com The Big Rewrite

I've been involved in a number of "big rewrites" over the years. The Suneido project could be called a rewrite of a previous in-house project called C4, which in turn was a "rewrite" of a framework built on top of a personal information manager called Lucid. Our accounting applications are also the third rewrite. So it is possible to pull it off. And I would bet some of Chad's projects have succeeded in the end. But his points about the dangers and problems of big rewrites are definitely valid.

Joel Spolsky calls the big rewrite "the single worst strategic mistake that any software company can make"

In the end it comes down to Software is Hard

Tuesday, October 02, 2007

Digital Music Not Quite Here Again

When Apple announced DRM-free music on the iTunes store I thought I'd finally be able to buy music without the roundabout system of buying a physical cd and ripping it to get the digital version, which is all I use these days. But unfortunately, their selection of DRM-free music is small and doesn't seem to be growing very fast. It seems like every time I go to look for something it isn't available.

So when Amazon announced DRM-free MP3's I was hopeful but knew the real test would be selection. I checked for a few things and although not everything was available it seemed better than iTunes (and cheaper, at least for some things).

BUT when I actually went to buy something I found out it's United States only. Argh!

I found someone on the web saying they had managed to purchase from Canada by giving a fake address but that didn't work for me. Maybe because I was logged in and therefore it knew where I was.

Note: My reasons for wanting DRM-free music have nothing to do with piracy. I just don't want the headaches of DRM, especially when I use multiple computers and music players. It seems crazy how much resistance the music companies are putting up against DRM-free sales, when they've been selling DRM music (i.e. cd's) all along.

Patience, grasshopper.

Problems Embedding Google Maps in Blogger

I decided to add embedded Google Maps showing routes to some of my recent Sustainable Adventure posts but I had some problems.

First, it's quite hard to get the zoom/scale to come up properly on the embedded map. It would either be zoomed out too far or zoomed in too far. Presumably the zoom in the embedded map is related to the zoom when you ask for the link/embed code, but there doesn't seem to be a simple correlation. With trial and error, zooming in and out and resizing the window, I could usually get it more or less right.

In some cases it seemed to help to copy the link address, open a new tab/window and go to that address. I'm not sure why that would help. Maybe it was just coincidence that it happened to work after doing that.

One of my maps refused to show the route. The map itself would display (although missing one tile). Eventually, after playing with it and making a minor change to the route it started working. (I gave up on trying to get the zoom right on this one, I was happy enough to just get it to work.)

It's sad how much the success of an "expert user" comes down to trial and error and randomly poking at things. The equivalent of kicking the machine. Isn't software supposed to be consistent and predictable? The problem is that if the software gets complex enough (as most software is these days) then it becomes impossible to control/know all the inputs and state, so it ends up seeming random, unpredictable, and inconsistent. Yuck!

I searched for other people having this problem but I didn't find much. Is it something I'm doing different? Issues related to Blogger? But lots of people use Blogger. Maybe I just didn't hit on the right search terms.

Thursday, September 27, 2007

More Mac & Parallels Printing Problems

This is a continuation of the saga I've posted about before.

Recap: I had moved my Epson R1800 printer to Firewire to free up the USB port. But then I found out Parallels doesn't virtualize Firewire.

So I moved the printer back to USB. Things seem ok. Some time later I go to print from Lightroom and I find that I'm back to the CUPS+Gutenprint printer driver which doesn't support all the printer features. I'm not sure how that happened since before the USB->Firewire switch it was using the Epson driver.

More time passes and I go to print from Windows under Parallels. The print job goes into the queue and gets stuck. I can't even delete the print job or the printer. I try disconnecting and reconnecting the USB port to the Parallels VM. I try rebooting Windows. I try rebooting the Mac. At some point during this thrashing my print job comes out and the printer deletes. I recreate the printer, thinking that it's now going to work, but my test gets stuck in the queue as before. I thrash some more, but can't seem to hit on whatever combination it was that released that first print job.

I search on the web and find various discussions of various problems more or less related to mine. One person says it takes 5 minutes for his print jobs to emerge. Maybe this was what happened with me - not anything I did while thrashing, just the amount of time I thrashed. It seems bizarre that it would take 5 minutes. What is it doing all that time? And why does it work after whatever it is doing?

I had gotten an error message about the Epson Status Monitor 3 (why 3?) and some of the discussions mentioned this. I tried disabling it and killing it etc. but it didn't seem to make much difference.

The strange thing is that I could swear I had the printer working from Windows via USB before I switched to Firewire. So why doesn't it work now? Changes to Parallels? Who knows.

Eventually I found Finally got Epson Photo R800 printer working in XP VM on the Parallels forums (an R800 is the narrow carriage version of my R1800). It gave instructions on how to share the printer from OS X and then access it using Bonjour on Windows. I had run across references to this before, but they all talked about using a generic postscript driver on Windows, which is not what I wanted - I needed to use the Epson driver to access the printer's features. But these instructions used the Epson driver.

The instructions were for the printer connected by USB, but they mentioned they would likely work with Firewire. Since that's what I really wanted, I thought I'd try it. Nope, couldn't get it to work. (No URI for the Firewire device?) Back to USB. This time it worked.

The only place where I had problems with the instructions was with choosing the printer type. Bonjour only showed the generic postscript option. I had to choose Have Disk and then find and select my Epson driver.

Several hours later, I think I have the correct, functioning printer setup in both OS X and Windows. And even better, I think I should now be able to use the same method to access the printer from other networked Windows machines.

A few days ago someone suggested buying a MacBook for someone else. I said it probably wasn't a good idea because they'd want to run Windows programs. They said "I thought you could do that now?". Yeah, well, you can, but ... it can get ugly.

Monday, September 10, 2007

School

Paul Graham's latest post, News from the Front, was irrationally comforting.

I've often wondered if I should have taken a high school biology teacher's advice and applied to somewhere like MIT. My life obviously would have been very different, but also, obviously, it wouldn't have made me any smarter. Heck, I didn't even finish university, and I don't think that has hurt me any either. (Aside from my father's disappointment, and even he came around.)

On the other hand, I might have met different people by being somewhere more "active" than the middle of nowhere in the Saskatchewan prairies.

I don't have any regrets, but it's interesting to wonder about.

Monday, August 27, 2007

Holding a Program in One's Head

Another good essay by Paul Graham:

Holding a Program in One's Head

I'm not sure I totally agree with this part:
Don't have multiple people editing the same piece of code. You never understand other people's code as well as your own. No matter how thoroughly you've read it, you've only read it, not written it. So if a piece of code is written by multiple authors, none of them understand it as well as a single author would.
...
If you want to put several people to work on a project, divide it into components and give each to one person.
I agree you can never understand someone else's code as well as your own. Initially, this is probably good advice, although, even initially, pair programming can be worthwhile. But in the long run, multiple people are going to need to work on the same code. Of course, by this stage it's not in anyone's head anymore anyway.

Monday, August 20, 2007

Web Mail

I'm surprised that so many people put up with the annoying ads inserted into their email by Yahoo Mail and Microsoft Hotmail. When I started using Gmail I never really thought about this issue, but now I'm very thankful that Google doesn't do this kind of advertising.

I guess if you're just using your email for unimportant stuff it's not a big deal. But when you're using it to apply for jobs, or for your work, do you really want stuff like this on your messages?
Moody friends. Drama queens. Your life? Nope! - their life, your story. Play Sims Stories at Yahoo! Games.
or:
Windows Live Hotmail. Even hotter than before. Get a better look now.
I would guess one reason people don't object is that they don't see it when they're sending a message.

I realize this kind of "viral" marketing can be effective, and when Hotmail first did it, it was a clever thing to do. But nowadays I just find it offensive. Of course, I find TV commercials offensive too, and that won't stop them from doing it.

In any case, I'm glad Google doesn't do it. They do some "viral" marketing by suggesting you invite people to join Gmail, but it's fairly unobtrusive. And the ads they show beside Gmail are easy to ignore as well. (And fairly easy to block if you really want to.) And these things are only shown to me, the person getting the free service, rather than to people I'm sending messages to, who shouldn't have to "pay" for my service by viewing ads!

Interestingly, Google, who made a big splash by offering 1gb of Gmail storage when most people were offering measly amounts like 10mb, is now offering the least storage - "only" 2.8gb, compared to unlimited for Yahoo and AOL, and 5gb for Hotmail.

Tuesday, August 14, 2007

Jing

This looks interesting:

Jing Project: Visual conversation starts here. Mac or Windows.

Watch the video tour for a quick introduction.

It's from Techsmith who make Snagit and Camtasia - products we use.

Wednesday, August 08, 2007

Parallels Annoyance

I went to print from Windows running under Parallels on my Mac and it didn't work, although it had in the past. I tried the usual highly intelligent :-) problem solving technique of rebooting (in this case both Windows and the Mac) but it didn't help.

Eventually I remembered that I had switched the printer from USB to Firewire, since the printer (Epson R1800) had come with a Firewire cable and it freed up a USB port.

But ... Parallels doesn't virtualize Firewire, only USB. There are several workarounds but none of them are great since they don't let me use the specific features of the printer.

Now something else made sense. I had wondered why I had an HP postscript printer set up in Windows. I hadn't set it up myself so I assumed some application I had installed had done it, although that didn't make much sense. Now I realize Parallels must have created it as a way to print to Mac printers.

Another workaround is to print to PDF on Windows and then print the PDF from OS X - a bit tedious, but it works.

I may end up moving the printer back to USB, but that'll mean messing with the setup on OS X which I'd rather not have to do.

The reason I wanted to print from Windows was because I was using Canvas, a program that combines many of the capabilities of Photoshop and Illustrator (i.e. both vector and bitmap at the same time). I really like Canvas but it's been somewhat abandoned. The product was bought by ACD who haven't done much with it. There was/is a Mac version but not for Intel, so I'm still using the Windows version. There are rumors of a new Windows version but not Mac. Too bad.

Windows Live Writer

I'm writing this using the Windows Live Writer beta. Tim Bray has mentioned it a few times in his ongoing blog.

I haven't really felt the need for a separate tool to write blog posts - doing it in the browser seems to work well enough, especially with the real-time spell check in Firefox. But I can't resist trying out new free software :-)

I added this screen shot to test inserting images. That's one area where Blogger can be a bit awkward.

[screenshot removed to allow publishing]

It also looks like you can add tables which I don't think Blogger allows (unless you edit the HTML directly).

Humorously, Live Writer shows "Firefox" as a spelling mistake. (but "Blogger" is ok)

So much for images. When I tried to publish it said "the weblog does not support image publishing". Blogger certainly allows images, so I would interpret that to mean that Live Writer's interface to Blogger doesn't handle images. It did offer to let me configure an FTP site but I wonder how "user friendly" that is for most people. I assume if you use Microsoft Live Spaces for your blog then you can publish images.

Friday, August 03, 2007

Alarming Development : Beautiful Code

Sadly, I would have to agree with this:

Alarming Development : Beautiful Code

But that won't stop me from reading the book, we can still dream of beautiful code :-)

Thursday, July 12, 2007

Tagging

I find tagging really attractive as an alternative to strictly hierarchical categories. My main uses of tagging are in del.icio.us, LibraryThing, and Lightroom. Picasa, which I used before Lightroom, also has a form of tagging, but it never seemed very "natural" to use there, probably because it's not very well implemented. Flickr is another big user of tagging, but for some reason I've never gotten into Flickr much.

Tagging, especially in sites like del.icio.us and Flickr, is often promoted as a "social" tool. I've never really used it that way, maybe because I'm not much of a "social" person :-) Of course, I've benefited to some extent from other people's tagging, but that's never been my main benefit. I primarily use del.icio.us to maintain my personal bookmarks.

As far as implementation goes, del.icio.us has a nice system, especially for web-based software. It shows you all your tags, auto-suggests as you type, and shows recommended and popular tags. I have wondered why they "hide" the popular tags (ones other people have used) down at the bottom of the page. I guess this is to avoid people just applying the same tags as everyone else, which would somewhat defeat the purpose of having a wide variety of people applying tags.

LibraryThing's tagging could definitely be improved. You often have to type tags just from memory, with no auto-completion or suggestions. A system more like del.icio.us would be a lot nicer.

Lightroom has auto-completion and lets you drag and drop tags, but its unique feature (at least, I haven't seen it anywhere else) is its "implied tags". A common issue with tagging is what level of detail to tag at, and how many tags to apply. For example, do I tag with "Saskatoon", or "Saskatchewan", or "Canada", or "North America", or several of these? Similarly, do I tag with "pelican", "birds", or "animals"? Ideally (for searching) you'd apply all the relevant tags, but that would make tagging tedious. Lightroom lets you create tags as children of other tags, so "Saskatoon" is a child of "Saskatchewan", which is a child of "Canada". You manually apply the most specific tag, e.g. "Saskatoon", and the parent tags are automatically "implied". So if I search for "Saskatchewan", I'll automatically get anything I tagged with "Saskatoon". Very nice. I would like the same feature in del.icio.us and LibraryThing.
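
To make the mechanism concrete, here's a rough sketch in Python (how Lightroom actually implements it I don't know; the data and names are made up):

    # hypothetical data: each tag maps to its parent (None at the root)
    parents = {"Saskatoon": "Saskatchewan",
               "Saskatchewan": "Canada",
               "Canada": None}

    def implied(tag):
        # a tag implies itself plus all of its ancestors
        tags = set()
        while tag is not None:
            tags.add(tag)
            tag = parents.get(tag)
        return tags

    # implied("Saskatoon") == {"Saskatoon", "Saskatchewan", "Canada"}
    # so a search for "Saskatchewan" matches anything tagged "Saskatoon"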

On this same topic, I listened to a Long Now talk by Clay Shirky on my run this morning. He was talking about the problems of hierarchical categorization, how it is mainly a result of having to organize physical materials, and that tagging is better suited for digital information.

Also on basically the same topic, I am currently reading Everything is Miscellaneous by David Weinberger (who also co-wrote The Cluetrain Manifesto). It makes many of the same points.

So it seems a little incongruous that a hierarchical system can make tagging better. Of course, in a way it's just a shortcut for entering tags; it doesn't really alter the tagging itself. And I wonder if the hierarchical part is really essential. When I first saw "implied tags" in Lightroom, I wasn't thinking of a hierarchy; I just thought that one tag could imply others. That could be tricky to handle, since it would allow loops, and if not used "properly" could end up with a mess. But it would be interesting to try.
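
Mechanically, "one tag implies others" is just a graph traversal, and a visited set keeps a loop from running forever. A hypothetical Python sketch:

    # hypothetical: tags imply arbitrary sets of tags, possibly with loops
    implies = {"pelican": {"birds"},
               "birds": {"animals"},
               "animals": {"wildlife"},
               "wildlife": {"animals"}}   # an accidental loop

    def implied(tag):
        seen = set()
        todo = [tag]
        while todo:
            t = todo.pop()
            if t not in seen:             # the visited set breaks the loop
                seen.add(t)
                todo.extend(implies.get(t, ()))
        return seen

    # implied("pelican") == {"pelican", "birds", "animals", "wildlife"}

The real trickiness is the "mess" part - implications that drag in tags you didn't intend - and that's a data problem more than a code problem.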

I keep thinking there should be some way to apply tagging to our business software, but so far I haven't come up with anything really appealing. We could allow tagging things like equipment, but I'm not sure whether that would be a big benefit. Maybe we should try it and see whether people like it. Maybe we could replace things like "Type" and "Role" with more general tagging.

Wednesday, July 11, 2007

Font Rendering!

Here's more than you probably wanted to know about font rendering and why Windows, Mac, Linux, and Adobe Reader are different.

Texts Rasterization Exposures

I found this via Joel Spolsky in connection with his earlier post:

Font smoothing, anti-aliasing, and sub-pixel rendering

I'm afraid most of the time I don't even notice these fine details. But I have struggled with text sizes and layout issues in Windows. And really bad rendering can be quite annoying - Open Office (on Windows) used to be pretty bad.

Tuesday, July 10, 2007

New MacBook


My new MacBook arrived today. I tried to buy one locally, but no one had the model I wanted (a black 13" with 2gb ram and 160gb drive). I decided I might as well order it from Apple myself rather than get a store to order it. I ordered Thursday afternoon and it arrived Tuesday morning. Pretty fast. I also ordered a spare battery and it arrived at the same time.

I've been thinking about getting a MacBook for quite a while. Working on my old Windows laptop the other day and finding it quite slow tipped me over the edge to order the Mac.

As I would expect from Apple, the experience of opening the box and setting it up was a delight. I really like the little touches like the charge indicator on the batteries and the magnetic power connector.

The only (minor) problem I had was that I couldn't enter the key for my wireless connection from the setup process. I had to choose "no network" and then configure it later. No big deal.

I still have some setup to do to get things the way I want (e.g. installing Parallels) but so far, so good.

Thursday, July 05, 2007

New Network Hard Drive

With both Mac and Windows machines on my home network I decided I wanted some shared storage that they could all access equally. I can access shared directories on the Windows machine from the Mac (haven't tried the reverse). But that means I have to turn on the Windows machine. And it doesn't give the Windows machine any backup storage.

I looked at various possibilities, but looking for something that explicitly supported Mac narrowed the choices. I ended up with a 320gb LaCie ethernet disk mini, ordered through Frontier PC. I hadn't dealt with them before, but they looked like a reasonable choice for a Canadian supplier. There were no problems dealing with them, and shortly I had my drive.

I plugged it into my wireless router and it "just worked". It even came with a network cable.

I had no problem accessing it from Windows and it wasn't much harder from the Mac. The problem with the Mac was getting it to connect at start-up. On Windows this is a simple matter of a checkbox, and from the web I understand Mac OS 9 was the same. But not anymore. I found various suggested solutions, none of them simple or easy. One problem was how to supply the user and password for accessing the drive. In the end I used Automator. The funny part was that for all its fancy drag-and-drop graphical workflow, it all came down to getting the right cryptic url. In other words, Automator ended up being window dressing on a command line.

In case anyone is tackling a similar problem, what I ended up with was a "Get Specified Servers" action followed by a "Connect to Servers" action. The url for the "Get Specified Servers" needed to be "afp://user:password@ipaddress/sharename" with the appropriate user, password, and numeric ip address.
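
For the command-line inclined: as far as I can tell, mount_afp is what this boils down to under the covers, so something like the following Python sketch should be roughly equivalent to the Automator workflow. (The user, password, address, and share name are placeholders for your own setup.)

    import subprocess

    # placeholders - substitute your own values
    user, password = "me", "secret"
    server, share = "192.168.1.50", "backup"
    mountpoint = "/Volumes/backup"

    url = "afp://%s:%s@%s/%s" % (user, password, server, share)
    subprocess.check_call(["mkdir", "-p", mountpoint])
    # mount_afp ships with OS X; this appears to be what Automator's
    # "Connect to Servers" action does with the same url
    subprocess.check_call(["/sbin/mount_afp", url, mountpoint])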

The other trick is actually getting it to run when you log in. Again, Windows seems simpler (but maybe just because it's more familiar): you just put stuff in the Startup folder. On OS X you do it through System Preferences > Accounts > Login Items - not exactly easy to discover.

Originally I had thought I would keep the master copies of my music and photo libraries on the shared drive so I could access them from any of my computers. But this didn't work too well. For example, if the network drive was disconnected for any reason (as it often was when I was first struggling to get the connect-at-login to work), then Adobe Lightroom would recognize (rightly) that the files were not available. The problem was that when the network drive was reconnected, Lightroom would start checking the availability of the files - roughly 10,000 of them, one at a time - not a speedy process!

So now I'm mainly using the network drive as a backup. My next plan is to come up with a way to sync my music and photos between the Mac and the network drive, possibly using ChronoSync which I saw recommended.

One feature that I don't have on the Lacie drive is the ability to run the SlimServer for my SqueezeBox. Currently I am running the server software on my Windows machine - but that means it has to be turned on if I want to play music. Theoretically it should be possible since the Lacie runs Linux and there is a Linux version of SlimServer, but I couldn't find anything on the web about how to do it. It does support UPnP A/V but unfortunately the SqueezeBox won't accept that. It's the usual trials and tribulations of trying to get things to work together.

Friday, June 08, 2007

Slow Code Again

Although I agree with Larry's view, at the same time I continue to be horrified by some of the code I come across. There's no question premature optimization is bad, but I have to think code that blatantly disregards performance is bad too.

For some time I have been meaning to look at the start up process for our application. There was no particular reason for this - no one had been complaining. But speed of start up is one of the first things people encounter and first impressions can be important. And we had had some complaints and found some problems with opening the help being slow, and some of the code is similar.

The first thing I looked at was database queries, since they often dominate speed issues. I turned on query tracing during startup and, to my horror, ended up with 1800 lines of query trace! At first I thought that was 1800 queries, but there are multiple lines per query, so it was actually only about 600 queries. It's a testament to the speed of computers and networks these days that this doesn't even result in a noticeable slowdown. Although I think we did have one customer complain that after restarting their server, when all the users tried to log back in at once, it was slow. I can understand why! Of course, we discounted it, thinking that of course it would be slow.

I have various ideas for reducing these startup queries, and I was able to cut them in half within a few minutes. Most of the queries come from reading the menu tree for the application. At the same time, the code checks for any menu options that are "hidden" (e.g. not purchased by that customer). To do this, it was calling a general-purpose permissions function which did queries on the permission table. But in this case all we needed to know was whether an option was hidden, which depends only on some configuration data in the code. We had a more specific function that only checked the configuration, so switching to call it (I didn't even need to change the arguments!) instantly eliminated almost half (about 300) of the queries.
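
In outline, the change looked something like this - a Python sketch with hypothetical names (the real code is Suneido); the shape of the fix is the point:

    hidden_options_config = {"dispatch", "payroll"}   # made-up config data

    def permission_table_query(option):
        ...   # stands in for a real database query

    def hidden_before(option):
        # general-purpose check: one database query per menu option,
        # i.e. ~300 queries just to build the menu tree
        return permission_table_query(option) is None

    def hidden_after(option):
        # the answer depends only on configuration data in the code,
        # so no queries at all
        return option in hidden_options_config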

I don't give this as an example of someone "screwing up". We all make mistakes or overlook things. The lesson that I take from it is that it's not enough to write some code and have it appear to work. You have to assume that there are always hidden problems and that you need to constantly be on the lookout for them. And constantly fighting entropy by fixing and refactoring.

See also my previous post on Slow Code.

Bad Documentation

The easiest way to write software documentation is:
Date Field
Enter the date.

Rate Field
Enter the rate.
Needless to say, this is useless. Actually, it's worse than useless, because any useful information is buried in masses of this garbage. No one would actually write this kind of stuff, would they? Yes, they would and do. And in an amazing example of how we can justify anything to ourselves, writers of this kind of documentation actually appear to believe they have produced something useful. (Of course, if someone else had written it, that same person would have no problem recognizing it as useless.)

Of course, it's often thinly disguised. Here are some examples (from real documentation I was given yesterday):
In the Type field, choose the Type this record applies to.

Tab to the Fleet field. Choose the correct Fleet.

Tab to the Abbreviation field and enter the Abbreviation.
I'm sure you get the idea.

The problem is that writing useful documentation is orders of magnitude harder. It requires you to think. You need to decide who the documentation is for, what you hope to achieve with it, what things people need to be told, and, just as importantly, what things they do not need to be told.

I think another factor is that people hate to leave "gaps". Rather than just describe the few fields that you actually have something worthwhile to say about, they feel they have to say something about every single field. And this is not just obsessive-compulsive people - almost everyone seems to feel uncomfortable about "leaving out" some fields.

Google Desktop versus Mac Spotlight versus Vista

One thing that frustrated me when I started using Google Desktop Search was that I would type my search, it would correctly identify and highlight the top result, but when I hit Enter it would open my browser and show me the search results there (instead of just running/opening the top result). Eventually I discovered you could change this in the Preferences: "Launch Programs by Default" (instead of "Search by Default").

I guess I should mention that I use desktop search as much to launch programs as to find files, if not more. My Windows Start menu has so many programs on it that it's a hassle to use.

Now I have the same hassle on my Mac. Spotlight finds the right result, but when I hit Enter it brings up a search window instead of running the top result. Unfortunately, so far I have not found a setting to change this. It hasn't been too annoying yet because I don't have as much software installed on my Mac so I don't need to use it as much.

As much as I like to dislike Microsoft, they appear to have got this right in Vista. The new search box defaults to running the top result when you hit Enter.

I wonder how Beagle on Linux works in this respect?

PS. I normally have my Google Desktop set to show on my Windows task bar, but today it was missing. I thought it must have crashed or something so I rebooted. But still no search box. I checked whether it was still set to run on startup but that looked ok. When I tried running it manually from the Start menu it displayed in "pop up" mode, but came up so fast it must have already been running. When I checked my preferences I found it was set to not show up. I'm pretty sure I never changed this, so I'm not sure what happened. Maybe some automatic update turned this off?

Tuesday, June 05, 2007

Chapters web site annoyance

Lately I've been ordering from Chapters more often than Amazon because they seem to have more books in stock and deliver them more quickly. (This is in Canada i.e. amazon.ca)

But one part of ordering from Chapters really bugs me. Their web site does not understand that the billing address belongs to the credit card. I order books both for work and personally. The billing address for my work credit card is different from the billing address for my personal credit card. Amazon handles this automatically. Chapters not only doesn't handle it automatically, it makes me type in the address every time I switch between business and personal. They store multiple shipping addresses and multiple credit cards and let me pick from them rather than type them in, but apparently they only store a single billing address.
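
The difference between the data model they apparently have and the one they need is small. A hypothetical sketch:

    # what Chapters apparently has: one billing address per account
    class Account:
        def __init__(self):
            self.shipping_addresses = []   # many, selectable
            self.credit_cards = []         # many, selectable
            self.billing_address = None    # just one - the annoyance

    # what it should be: the billing address belongs to the card
    class CreditCard:
        def __init__(self, number, billing_address):
            self.number = number
            self.billing_address = billing_address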

I guess I could set up a second account but I've got enough different accounts to keep track of!

It scares me to wonder how many similar annoyances our applications have that we're either unaware of or are ignoring.

Stevey's Blog Rants: The Next Big Language

An interesting (and cynically humorous) article about programming languages:

Stevey's Blog Rants: The Next Big Language

Sunday, June 03, 2007

Adobe Lightroom

I continue to be impressed with Adobe Lightroom. Like Tim Bray, although I prefer open source software, I can't help but like Lightroom.

I was also impressed by The Adobe Photoshop Lightroom Book by Martin Evening. Usually I'm not that impressed with books on how to use software - they're either reference manuals or beginner's guides. This book is neither. It gives helpful advice, covers both basics and more advanced features, and is full of examples and annotated screen-shots. Lightroom was designed to be "simple to use", but it still has a ton of features, so "simple to use" translates into lots of non-obvious features - like things you can click on or drag. The book helped me discover these a lot faster than I would have on my own.

One of the reasons I like Lightroom is that it's full of innovative user interface features. It's not "eye candy" - the UI is very subdued black and grey (after all, what's important is your pictures, not the software). Even if you're not interested in digital photography, I'd recommend spending some time with the trial version just to see how the UI works, although you may not discover all the features in a short test. The UI has already inspired some improvements in Suneido.

Thursday, May 31, 2007

Windows GetSaveFileName Frustration

GetSaveFileName is a Windows API call that displays a standard save file dialog. It has an option to specify a default file name extension that will be added if the user doesn't type one. (lpstrDefExt in the OPENFILENAME struct)

For some reason, this wasn't working. I wasted a bunch of time checking all my code and searching for known problems on the internet with no success.

Then I tried it again and suddenly it worked! But I hadn't changed anything!

Aha! I had changed something - I had typed a different file name. The problem is that if the name you type (with no extension) exists, then it doesn't add the extension. (e.g. I was typing "test" and I had an existing file called "test" with no extension.) I guess it assumes you are "picking" the existing file, even though I was typing the name, not picking it from a list.
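
Here's the behavior as I now understand it, sketched in Python (my reverse-engineered reading of it, not anything documented on MSDN):

    import os

    def effective_name(typed, defext):
        # an extension you typed is used as-is
        root, ext = os.path.splitext(typed)
        if ext:
            return typed
        # the gotcha: if a file with the extensionless name already
        # exists, no extension is added - it assumes you're "picking"
        # that existing file
        if os.path.exists(typed):
            return typed
        # otherwise lpstrDefExt is appended
        return typed + "." + defext

    # effective_name("test", "txt") -> "test" if a file named "test"
    # exists, otherwise "test.txt"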

MSDN contains a huge amount of documentation, but all it takes is one little missing detail and you're in trouble.

Monday, May 28, 2007

LINA portable applications

It's not released yet, but LINA looks like a really interesting project that enables you to run the same binaries on Linux, Windows, and Mac with native look and feel.

Sunday, May 13, 2007

Inside the Machine

I've been reading Inside the Machine by Jon Stokes - An Illustrated Introduction to Microprocessors and Computer Architecture. It looks at Intel, IBM PowerPC, and Motorola chips (i.e. the ones used in PCs and Macs), including the latest Core 2 Duo chips.

I'm not much of a hardware person these days, but it was pretty interesting to read about the techniques used to push performance. Apart from bigger caches, I had wondered how modern CPUs use the hundreds of millions of transistors they now contain. (A Core 2 Duo has 291 million transistors, compared to the original Pentium's 3 million.)

Considering the subject, the book is well written and easy to read. If you're at all interested in this area, I'd recommend it.

Google Sketchup

Google Sketchup is a free 3D design tool (there's also a Pro version that costs money). I don't have a particular use for it, but I thought I'd check it out. Here are the results of my first 10 minutes with it:
You can put your Sketchup models onto Google Earth.