Monday, December 29, 2014

Just a Minute

I bought a new iMac Retina 5K. (amazing display!) So Shelley gets my previous four-year-old iMac. (Replacing her even more ancient iMac.) Personally, I prefer to set up new machines from scratch rather than migrate potential junk and problems from the old machine. But for Shelley I knew that would be a big hassle, so I used Apple's Migration Assistant.

Shelley was out for the afternoon, so I started the migration. It's simple to use: you start it up on both machines and indicate which is the source and which is the destination.

The estimated time remaining went up and down, but was around 4 hours. That seemed relatively accurate since after about 4 hours it said it had a minute left. That was good timing since Shelley had just arrived home.

But an hour later it still said it had a minute left. Crap! I started searching on the web and found lots of other people with the same problem. It's been an issue for years, but I didn't find any official response from Apple. For some people it seemed that if they left it long enough it would eventually finish. But other people waited e.g. 24 hours and it still didn't finish. I could abort it at any time and leave Shelley with the old computer, but she was ok with waiting overnight.

I could tell it was still doing something because our internal network was slow. In fact, the first clue that it might have finished was that the network suddenly got a lot faster. It ended up taking about another 4 hours for that "last minute". It reminded me of the 90-90 rule in software development that "The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time."

I understand that estimating completion times is difficult, and progress indicators are infamous for stalling at the end. But "a minute" is several orders of magnitude different from 4 hours. Surely Apple could do better, maybe obsess over the migration experience as well as the un-boxing experience.

If they really can't improve the time estimation, then give some visibility to the process. For example, show a list of "things" to be copied and check them off. Sticking at one minute remaining looks like it's hung, and I suspect a lot of people cause additional problems because they kill the process and then try to recover from a half-copied machine.

Other than this hiccup the migration seems to have been successful. But instead of being the hero for giving Shelley a newer, bigger, faster computer, I ended up being the indirect cause of "breaking" her Microsoft Office. It needed the product key to reactivate it on the new computer and that seems to be long gone. The key would have been on the physical package, which probably got thrown out sometime over the years. And worse, Microsoft now wants you to pay a monthly fee to use Office, rather than just a one-time purchase. On top of which, they haven't updated Office for Mac since 2011. Sigh. Home tech support can be a thankless job!

PS. With Migration Assistant you have a choice of copying from the old machine, or copying from a Time Machine backup. I chose to copy from the old machine just in case the backup didn't include everything. Some of what I found on the web seems to indicate that copying from a Time Machine backup doesn't have the same problem.

Tuesday, May 27, 2014

Java 8 Performance

I was just looking at some stats on the average time our customers' servers take to run our application test suite.

I noticed on a particular day the times dropped from an average of 240 seconds to an average of 200 seconds. (These are averages from about 240 customer sites.) The numbers are generally quite stable so I was curious what changed.

I discovered that was the day we updated everyone to the Java 8 JRE, so it looks like that's the reason for the improvement. Assuming that's the correct explanation, that's a pretty nice upgrade!

It made sense that it was Java-related since customers running the older cSuneido did not show any improvement that day.

Note: jSuneido is still compiled for Java 7; this improvement would just be from the runtime.

Monday, May 19, 2014

Portable Tests

With two implementations of Suneido (the original C++ cSuneido and the newer Java jSuneido) I've ended up with three sets of overlapping tests - in each of the implementations plus in the Suneido standard library. And as I play with implementing Suneido in Go I find myself creating yet another set of tests.

Obviously this is not ideal. Apart from the duplication, each version of the tests has better or worse coverage of different areas depending on where I had issues with the implementation. Ideally, I'd like to run the same complete set of tests everywhere, and if I added a test case it would be included everywhere, not just in one of the versions.

One option would be to use something like Fit or FitNesse. But that would still require writing code (for fixtures and "slim" interfaces), and it would mean accepting a third-party dependency which in turn depends on Java.

I figured the simplest thing would be to have the test cases in text files and to write a test runner for each of the versions.

But what format should I use for the test cases? I realized that I could use a format that was easy to read with the Suneido lexical scanner. Any implementation of Suneido has to have this, and it's generally one of the first things I implement. Using the scanner made it easy to handle quoted strings and to ignore comments and whitespace.

I implemented a test runner in Suneido code first, and designed the format to keep the parsing simple. Here is an example:

1, 1, 2 // i.e. assert 1 + 1 == 2

"abc" "b"
"abc", "x", false // this should not match

"^bar" // ^ should match after a newline

An '@' followed by a name precedes a list of test cases for the named test "fixture". Each version has to implement each of the fixtures, but these are simple and I already have equivalent code in the existing tests.

Normally each line of values is one test case. Commas between values are optional, and a newline after a comma is ignored, allowing a test case to be split over several lines.

After the Suneido code version it was straightforward to implement a version in Go. Java and C++ should also be simple.

I still want to run these tests as part of the existing automated testing, but that's easy to do by writing a test (e.g. in JUnit) that calls the test runner for the portable tests.

A remaining question is where to put the test files. Expecting them to be in the current directory or a subdirectory is easiest, but then each version will have its own copy and I'd have to keep them in sync. It makes more sense to have a single copy somewhere, but then I need some way to point each version at that central location. One option would be an environment variable but that can be problematic. Instead I decided I'd put a text file at the root of each project that would contain the directory path to the tests. (And if that doesn't fit, each implementation is free to handle this differently.)

My main concern with this approach is tests with a lot of different cases, where you'd use data-driven tests (like regular expressions). In other areas, what I probably should have is more of a BDD (behavior-driven development) style of tests that would form a kind of specification for Suneido. To keep that portable, it would make sense to use the JBehave style that separates the specification from the implementation.

Wednesday, May 07, 2014

Go: When nil isn't nil

Go is a fairly simple language. But it's still complex enough to have some oddities. I recently ran into one of them.

Let's say you have a nil pointer and you assign it to an interface:

var p *T = nil
var intfc interface{} = p

I expected intfc to now be nil, but it's not.

The explanation (it's even in the FAQ) is in terms of the implementation. An interface contains a pointer and a type. When we assign a nil pointer the interface type is set. And an interface is only nil if both the pointer and the type are "empty".
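A complete program demonstrating the surprise:

```go
package main

import "fmt"

type T struct{}

func get() interface{} {
	var p *T = nil
	return p // the interface now holds (type *T, value nil)
}

func main() {
	fmt.Println(get() == nil) // prints false, not true
}
```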

I understand the explanation and it's not hard to work around once you're aware of it. If you want to return an interface that compares equal to nil you just have to make sure you don't assign a nil pointer to it.

But I don't find the explanation very satisfying. 

First, justifying some external behavior by explaining how you happened to implement it seems wrong. (Although to be fair, maybe the explanation is intended to be in terms of what an interface means, not really how it's implemented.)

Second, the explanation doesn't explain why they chose to make it work this way. Granted, it's simple because the nil check is just for a zeroed value. But it doesn't seem like it would be much harder for the compiler to just check for a zeroed pointer and ignore the type. It seems like this would avoid the unexpected behavior with no real loss of functionality.

I did some searching, but the only justification I could find is that a nil pointer in an interface can still satisfy the interface and you can still call methods on it. Which is fine, but what does that have to do with whether the interface value compares equal to nil? I guess it would introduce a new oddity in that you'd have two kinds of nil interfaces only one of which you could call methods on.

The other issue is that (afaik) you can't determine if an interface holds a nil pointer without using reflection.
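The reflection check is a common workaround (this helper is my own, not part of the standard library):

```go
import "reflect"

// isNil reports whether i is nil or holds a nil pointer-like value.
func isNil(i interface{}) bool {
	if i == nil {
		return true
	}
	v := reflect.ValueOf(i)
	switch v.Kind() {
	case reflect.Ptr, reflect.Map, reflect.Slice, reflect.Chan, reflect.Func:
		return v.IsNil()
	}
	return false
}
```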

It's not a big deal, and I'm not hung up on it, it just seems odd. As far as languages are concerned, I would say Go has relatively few sharp corners to get caught on.

Sunday, May 04, 2014

Go Suneido Spike

In software, a spike is some programming to investigate options or answer questions or to test design ideas. The term came from XP (extreme programming) and is used in agile and Scrum.

As a way to learn Go I've been investigating what it would be like to use it to implement Suneido. I've implemented various bits and pieces like lexical scanning, memory mapped file access, decimal floating point numbers, string concatenation, and hash tables.

In the Java implementation of Suneido I was able to leverage the Java virtual machine. In Go (as in the original C++ version of Suneido) I would need to write my own bytecode interpreter. To investigate this, I did an end to end spike from lexing to parsing to code generation to byte code interpreter. I even implemented a basic REPL (Read, Eval, Print Loop). All it currently handles are simple expressions like 100 + 50 - 25 or "hello" $ "world". But it's implemented with the right structure to flesh out into the full language. (The code is on Github if you're interested.)

I've written about 4000 lines of Go code so far. Not enough to be an expert by any means, but enough that I don't feel like a newbie anymore. It's been mostly low level code, I haven't done anything with goroutines and channels yet.

It's been remarkably painless. I think it helps to have a C/C++ background. Of all the languages I've dabbled in in recent years, Go has been the easiest to pick up and be productive in. That's partly due to the small, simple language, and partly due to the good default tools. The Java language wasn't hard to pick up, but the libraries and tools are definitely more complex.

The fast compiles are a key feature. After working in Java I think it would be hard to give this up. Even though Go uses the classic compile and link approach, running Go code seems just as fast as running Java code.

I almost find Go's simplicity a little disappointing. I love reading about complex languages like C++ and Scala and all the wild and wonderful things you can do with them. What software geek wouldn't love Turing-complete templates and type systems! Go doesn't have those kinds of things, and therefore doesn't have intricate books about them.

But as far as something I actually want to use, and not just read about - in that respect Go is great.

Saturday, May 03, 2014

A Go Hash Map

I was a little disappointed to discover that the built-in Go map doesn't allow interfaces as keys. Considering that interfaces are the way to do anything dynamic in Go, and that dynamic code often uses maps, it seems a little odd.

To act as a hash table key, you need equals and hash code methods. But it's easy to define an interface for that and require that keys implement it, similar to how Sort requires a container to implement Len, Less, and Swap.
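Such an interface might look like this (the names Key and StringKey, and the simple multiplicative hash, are my invention by analogy with sort.Interface):

```go
// Key is what a hash table key must implement.
type Key interface {
	Hash() uint32
	Equals(other Key) bool
}

// StringKey shows a simple implementation.
type StringKey string

func (s StringKey) Hash() uint32 {
	var h uint32
	for i := 0; i < len(s); i++ {
		h = h*31 + uint32(s[i]) // simple multiplicative hash, for illustration
	}
	return h
}

func (s StringKey) Equals(other Key) bool {
	o, ok := other.(StringKey)
	return ok && s == o
}
```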

I looked around to see what's out there and found a few options, but none really excited me. And I'm still learning Go, so it made more sense to implement it myself.

My first thought was to port my hash map code from cSuneido. But first I wondered how Go's own map was implemented. That code is straightforward, but it uses an interesting approach that I haven't encountered before: a variant of separate chaining, with each slot in the hash table being a bucket that can hold a small number of entries (e.g. 8). Additional overflow buckets can be chained together. In many hash table designs, collisions are a nuisance to be tolerated; this design almost embraces them, since making the table 8 times smaller assumes collisions.

Buckets holding a number of entries are also better for cache locality than a linked list.

Another interesting feature is that the buckets have an additional byte per entry that holds the high byte of the hash code of the key. This helps in searching because if this piece of the hash code doesn't match then you can avoid comparing keys (which is cache unfriendly and also slow if keys are large or complex).

This design also works well for small tables since you can use a single bucket, which basically reduces it to a small array with linear searching, which is what you want for a few entries.
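A sketch of the bucket layout (the field names and lookup are my guess at the idea, not the actual Go runtime code):

```go
const bucketSize = 8

type bucket struct {
	tophash  [bucketSize]uint8       // high byte of each entry's hash, for cheap rejection
	keys     [bucketSize]interface{} // nil marks an empty slot
	vals     [bucketSize]interface{}
	overflow *bucket // chained when more than bucketSize entries collide
}

// get scans the bucket chain, comparing keys only when the hash byte matches.
func (b *bucket) get(hash uint32, key interface{}) (interface{}, bool) {
	top := uint8(hash >> 24)
	for ; b != nil; b = b.overflow {
		for i := 0; i < bucketSize; i++ {
			if b.tophash[i] == top && b.keys[i] == key {
				return b.vals[i], true
			}
		}
	}
	return nil, false
}
```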

So I implemented this design in Go, following the C code fairly closely, except that I didn't implement the incremental resizing. It might be worthwhile in some situations, but it makes the code more complex (especially iteration) and probably makes the resizing slightly slower in total, albeit amortized. The lack of incremental resizing hasn't been a noticeable issue in cSuneido.

Have a look, it's about 200 lines of Go.

The next issue was what to use for hashing strings. Go has standard packages for hashing but they require converting to a byte array which requires allocation and copying. (Go 1.3 has an optimization for this, but only for the built in map.) So again, I wrote my own version, following the approach in hash/fnv.
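A version of FNV-1a that walks the string directly, along the lines of hash/fnv but without the []byte conversion (this sketch is mine, not the standard library's):

```go
const (
	offset32 = 2166136261 // FNV-1a 32-bit offset basis
	prime32  = 16777619   // FNV-1a 32-bit prime
)

// hashString computes the 32-bit FNV-1a hash of s without
// converting it to a byte slice (so no allocation or copy).
func hashString(s string) uint32 {
	h := uint32(offset32)
	for i := 0; i < len(s); i++ {
		h ^= uint32(s[i])
		h *= prime32
	}
	return h
}
```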

It seems reasonable. The main drawback comes from Go's lack of generics: it has to work in terms of interface{} (the equivalent of Java's Object or C/C++'s void*), so you have to cast everything that comes out of it, reminiscent of Java prior to generics. Another minor awkwardness is that you can't use the tbl[key] syntax of the built-in maps.

Another hash table approach which would be a natural fit with Go would be to use a growable slice for each entry in the table (rather than a list of buckets). This would avoid the space overhead of chain links and partially full buckets, at the cost of the slice itself (a pointer and two ints), plus more individual allocations.


Saturday, April 19, 2014

CharMatcher in Go

The Guava library for Java has a CharMatcher that provides a way of composing character matching predicates plus functions that use those predicates.

For example, AnyOf("\r\n").Negate() creates a CharMatcher that matches any character except return or newline. You can then do things like cm.IndexIn(str) or cm.CountIn(str).

Some of this you can do directly with Go libraries. The unicode package provides some standard "matchers" like IsDigit and IsLetter. And the strings package has functions like IndexFunc and TrimFunc that take predicates.

But they don't do everything that CharMatcher does, so as an exercise I thought I'd try implementing something like CharMatcher in Go.

My first approach was basically an object-oriented style like I'd use in Java with CharMatch as an interface.

But when I started adding more matchers it seemed excessive to have to define three pieces for each - a struct, a match method for the struct, and a function to construct the struct.

My next thought was to get rid of the interface and have a generic struct containing a matching function as a member. This uses closures to store the matcher parameters rather than structs.

I was stuck on the idea of a struct so that I could define methods like Negate and IndexIn on it. Then I realized that in Go I could make CharMatch just a function, and still define methods on it. That led to this version:
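A minimal sketch of that function-type approach (the exact method set and bodies here are my reconstruction, not the original embed):

```go
import "strings"

// CharMatch is just a function type, yet it can still have methods.
type CharMatch func(c rune) bool

func InRange(lo, hi rune) CharMatch {
	return func(c rune) bool { return lo <= c && c <= hi }
}

func AnyOf(chars string) CharMatch {
	return func(c rune) bool { return strings.ContainsRune(chars, c) }
}

func (cm CharMatch) Negate() CharMatch {
	return func(c rune) bool { return !cm(c) }
}

func (cm CharMatch) Or(other CharMatch) CharMatch {
	return func(c rune) bool { return cm(c) || other(c) }
}

// IndexIn just wraps a strings package function...
func (cm CharMatch) IndexIn(s string) int {
	return strings.IndexFunc(s, cm)
}

// ...whereas CountIn has no strings equivalent.
func (cm CharMatch) CountIn(s string) (n int) {
	for _, c := range s {
		if cm(c) {
			n++
		}
	}
	return n
}

var DIGIT = InRange('0', '9')
var SPACE = AnyOf(" \t\r\n")
```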

I used InRange for DIGIT and AnyOf for SPACE as examples, these could also use the unicode package equivalents.

IndexIn is an example of a method that just wraps a strings package function, whereas CountIn has no strings equivalent.

The tests give some examples of how it's used.

One potential drawback of this approach is that the matcher parameters are "buried" in closures. This makes it impossible to do any processing or optimization (like the Guava CharMatcher precomputed method). For example, Is('a').Or(Is('b')) could be folded into AnyOf("ab"). If you wanted to do this, I think you'd have to go back to using structs (like my first approach).

Friday, April 18, 2014

More Concatenation

I did some quick benchmarks in Go. (I really like how that ability is part of the standard tools.) Here are some results. As with any benchmark, don't take them as exact. Changing the parameters of the benchmarks gives varying numbers but with the same overall result.

Buffer     37322 ns/op    51104 B/op    10 allocs/op
Array      49456 ns/op    53632 B/op    17 allocs/op
Linked    122558 ns/op    54047 B/op  1010 allocs/op
Merge     311005 ns/op   323552 B/op  1998 allocs/op
Naive    2225408 ns/op  5371680 B/op   999 allocs/op

An "op" in this case was appending 10 characters, 1000 times.
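The naive and buffer cases can be sketched like this (my reconstruction of the benchmark shape, not the original code):

```go
import (
	"bytes"
	"testing"
)

// One "op": append 10 characters, 1000 times.
func naive() string {
	s := ""
	for j := 0; j < 1000; j++ {
		s += "0123456789" // each += allocates and copies the whole string
	}
	return s
}

func buffered() string {
	var buf bytes.Buffer // grows by doubling, so few allocations
	for j := 0; j < 1000; j++ {
		buf.WriteString("0123456789")
	}
	return buf.String()
}

func BenchmarkNaive(b *testing.B) {
	for i := 0; i < b.N; i++ {
		naive()
	}
}

func BenchmarkBuffer(b *testing.B) {
	for i := 0; i < b.N; i++ {
		buffered()
	}
}
```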

Some observations:
  • Naive concatenation is indeed bad, both in speed and memory
  • Almost anything is much better than the naive approach
  • As expected, a buffer is the best in both speed and memory
  • An array of substrings (without any merging) does surprisingly well
  • For an immutable option, a linked list isn't too bad
  • A merge tree was not such a good idea

Thursday, April 17, 2014

Overlooking the Simple Solution

For no particular reason I've been thinking about concatenating strings. (I know, get a life, but this is at least part of my life.) It was partly prompted by thinking about Go and how it might work to implement different facets of Suneido. 

Most languages warn you about the poor performance of building a large string by repeated concatenation. They recommend using a StringBuilder (Java) or its equivalent. 

But Suneido explicitly optimizes repeated concatenation internally so the programmer doesn't have to worry about when to concatenate and when to switch to "building". 

In cSuneido (the original C++ implementation) concatenation just creates a linked list of substrings, deferring the actual allocation and copying. 

Originally, I ported that approach to jSuneido (the Java version). But I cut a few corners that I thought were safe to cut, and that came back to haunt me. (There are some issues when the linked list gets too big.) Rather than fix the problems, I looked for a better solution. I considered some kind of merge tree but decided that was more complex than necessary.

Instead of a linked list I used an array of pieces which was expanded as needed. If the array got big it would merge small pieces. That has been working fine. 

Analyzing the problem from a more theoretical basis I figured the worst approach is repeated naive concatenation and the best (?) is something like StringBuilder that uses a buffer that expands e.g. by doubling in size. My array approach is somewhere in between, as is a merge tree. 

At that point it struck me that I could just use a StringBuilder rather than my array approach. Duh!

That eliminated about a hundred lines of code and ran about 20% faster. 

I feel stupid for not thinking of this sooner. It seems so obvious (now!). But I was stuck on the idea of deferring concatenation.

And now I feel even more stupid because I just searched my own blog and found that I did consider a buffer approach but decided it required too much copying. (String Building Internals) Using a StringBuilder does mean copying into it and then eventually copying the result back out. But considering that Java compiles string concatenation into using StringBuilder, the overhead can't be too big. (Go, on the other hand, compiles string concatenation into calls to a runtime function that allocates a new string and copies directly into it, without any intermediate buffer.)

One advantage of my original linked list approach is that everything is immutable and therefore threadsafe without locking. That's attractive. Both the array and the StringBuilder are mutable and require locking. That's not a big deal in Java since the locks will almost always be uncontested and therefore very fast. And the locking is at the leaves of the call tree, so it should be safe from issues like deadlock.

But in Go, locking is less acceptable and an immutable solution would be nice. I have an idea for an immutable merge tree approach - stay tuned :-)

Saturday, April 12, 2014

Java 8, Eclipse Kepler, and Infinitest

When I updated my Mac to Java 8, Infinitest quit working. I've been hoping it would update and start working but it didn't happen.

I went looking and I found that the Eclipse Marketplace version of Infinitest is several years old. Had the project been abandoned?

I found an Infinitest web site which linked to the Github page. The readme there gave an update site of and when I installed/updated from that site Infinitest started working again.

The web page gives an update site of which appears to point to the same version (5.1.110). I'm not sure which is the "correct" choice.

Now I'm just waiting for Proguard to be updated for Java 8.

Wednesday, April 09, 2014

Lexing in Go

One of my standard exercises when I'm looking at a new language is to implement the lexical scanner for Suneido. I've done this in quite a variety of languages - C++, Java, C#, D, and now Go. The scanner is simple, and generally the implementations are similar.

The Go code is longer (in lines) than most of the other implementations for a couple of reasons. One is that Go doesn't have ?: and you have to use if-else. Another is that gofmt puts enum type constants one per line.

Go only supports simple numeric constants to implement enums. That works ok, but it's awkward for debugging because if you print them, you just get a number.

One interesting thing about the Go implementation is that it handles Unicode without really worrying about it too much.

I debated over whether to return the results as a struct or just as multiple return values. But that really depends on which is easier for the calling code.

There is a good talk by Rob Pike about Lexical Scanning in Go. If you're not interested enough to watch the video, you can skim the slides. I didn't need to use his fancier concurrent state machine design but it's an interesting example of using Go. You can see a full implementation in the Go template package.

Here's the code: (or view it on GitHub)

Saturday, April 05, 2014

Hamcrest Style Matchers in Go

I've grown quite accustomed to using Hamcrest-style matchers in Java, so I looked for something similar in Go. I found one, but the last activity was three years ago, and when I tried to use it I got errors (Go has changed). I also found Gomega, but for some reason it didn't attract me.

I started to write my own, then stopped myself from getting side tracked from what I was doing. But I kept thinking about it, and ended up writing something very simple.

What I came up with allows you to write assertions like:

Assert(t).That(..., Equals(expected))

and to add more information to the error messages with:

Assert(t).That(..., Equals(expected).Comment("..."))

Where t is the *testing.T that Go's testing framework supplies.

Equals returns a tester function (a closure capturing the expected value). A tester returns "" on success, or else an error message.

That is a method that takes a value of any type (i.e. interface{}) and a tester function, and calls the tester with the value. If the tester returns an error message it calls t.Error.

Comment is a method on a tester (taking advantage of Go's ability to define methods on any type, not just on classes). It returns a new tester (a closure capturing the message) that passes success ("") through unchanged, but appends its message to any error message.

Taking advantage of Go interfaces, I didn't make the code depend on Go's testing.T type. Instead I defined my own interface with a single Error method (matching the one in testing.T) and made Assert wrap that. So it will work with anything that has a suitable Error method. I didn't have any particular usage in mind for that, but it's a nice way to avoid dependencies.

Initially I wrote Equals using "==". That worked for simple values but not for things like slices. I ended up using reflect.DeepEqual which seems to work. I'm not sure if this is the best approach. Obviously it won't work for things like less than or greater than.
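Putting those pieces together, a minimal sketch (my reconstruction from the description above, not the actual code):

```go
import (
	"fmt"
	"reflect"
)

// errorer is the single-method interface satisfied by *testing.T.
type errorer interface {
	Error(args ...interface{})
}

// A tester returns "" on success, or else an error message.
type tester func(actual interface{}) string

func Equals(expected interface{}) tester {
	return func(actual interface{}) string {
		if reflect.DeepEqual(actual, expected) {
			return ""
		}
		return fmt.Sprintf("expected %v but got %v", expected, actual)
	}
}

// Comment wraps a tester, appending its message to any error.
func (tst tester) Comment(msg string) tester {
	return func(actual interface{}) string {
		if err := tst(actual); err != "" {
			return err + " (" + msg + ")"
		}
		return ""
	}
}

type asserter struct{ t errorer }

func Assert(t errorer) asserter {
	return asserter{t}
}

func (a asserter) That(actual interface{}, tst tester) {
	if err := tst(actual); err != "" {
		a.t.Error(err)
	}
}
```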

One of the problems I had was that errors would be reported as always occurring on the same line of my hamcrest.go file where I called Error rather than the relevant line in my test. This is a more general problem whenever tests use any kind of helper function that ends up calling Error. Maybe that's not the normal style, but I tend to do it a lot. I found the code where it does this in the decorate method in testing.go but there doesn't appear to be any way to override it. It would be easy enough to modify testing.go, but I'm not sure how I'd get "go test" to use it, short of building a custom version of Go which doesn't seem like a good solution. Ideally Go test would report the portion of the call stack within my code.

I ended up just adding my own reporting. Rather than hard coding how far back in the call stack to go (as testing.go does), I looked for the first call after the testing framework, i.e. the actual top level test function. So errors have the correct location on the end:

hamcrest.go:38: expected 5790 but got 579 {dbldisp_test.go:16}

Obviously, this is not a complete implementation of Hamcrest-style matchers. But it was a good exercise to explore some of Go's features like interfaces, function values, and closures. I've been using it to write tests, but I'm not sure if I'll do more on it and use it longer term, or find something else to use.

UPDATE: Something I forgot to mention is that when I'm using this I'm doing:

import . "hamcrest"

The dot allows you to use the exported names from hamcrest (e.g. Assert and Equals) without requiring the package name prefix. This is discouraged, but in this case it seems preferable to writing:

hamcrest.Assert(t).That(..., hamcrest.Equals(expected))

Here's the code on GitHub:

Thursday, April 03, 2014

Decimal Floating Point Arithmetic

If you're doing things like financial calculations, you have to be careful about using conventional binary floating point because it can't represent decimal fractions exactly.

One approach is to use "scaled" numbers, e.g. representing your dollar amount in cents or hundredths of cents so you are always working in integers. But this requires big integers: 32 bits is only about 9 decimal digits, and the 52 bits of a double float is about 15. You really need 64-bit integers, which give about 19 digits. (10 bits ~ 3 decimal digits) And even that still doesn't give you the ability to deal with general purpose floating point.

So Suneido has always had a decimal floating point numeric type. (Internally, for performance, it also uses plain integers when possible.) Another advantage of a decimal type is that it is simple and quick to convert to and from string form.

Back when I first wrote Suneido (~ 15 years ago) there were no 64 bit integers in C++ compilers ("long" was 32 bits) and 32 bits wasn't sufficient precision. So I had to use multiple values to hold the coefficient. Since I had to use multiple integers anyway, to simplify overflow (by using 32 bit ints for intermediate results) I used four 16 bit ints, each one holding four decimal digits for an overall precision of 16 decimal digits. (To simplify "shifting" the exponent is in terms of the 16 bit ints, i.e. it jumps 4 decimals at a time. This "granularity" causes problems with precision. Depending on the exponent, in the worst case you get as few as 10 decimal digits of precision.)

Of course, having to use multiple integers and trying to get decent performance complicated the code, especially division. I won't claim it's the greatest code, but nevertheless it's worked reasonably well for a long time.

When I implemented jSuneido, I used Java's BigDecimal. Because of the different implementation there were a few minor differences, but they were mostly edge cases that didn't matter in practical usage. (Unfortunately I had made the external dump format for numbers mirror cSuneido's internal representation so it's a little awkward converting to and from BigDecimals.)

Recently, we've started to run into issues with APIs that deal with 64 bit integers, because we don't have enough precision to store them. In jSuneido it would be easy to bump up the BigDecimal precision to 20 digits. In theory I could do the same with cSuneido, but unfortunately the code is fairly specific to the current precision, e.g. loops are unrolled. The thought of making this change is not pleasant :-(

The other problem is that some of the code assumes that you can convert to and from 64 bit integers losslessly. But 20 decimal digits won't always fit in a 64 bit integer.

Now that we have 64 bit integer types, the obvious answer seems to be to use a 64 bit integer for the coefficient. This will be faster and simpler than using multiple small integers, and probably faster than BigDecimal since it handles arbitrary precision. And if I used the same approach in both cSuneido and jSuneido this would ensure consistent results.

Since I'm in the middle of playing with Go, I figured I'd try writing a Go version first. It should be relatively easy to port to C++ and Java if I decide to.

It took me a couple of days to write it. One of the challenges is detecting overflow when calculating with 64 bit integers, since you don't have a larger type to use for intermediate calculations. Hacker's Delight provided a few tips for this. Another useful reference was General Decimal Arithmetic.

It's about 500 lines for add, subtract, multiply, divide, and conversion to and from strings. (That's about half the size of the cSuneido C++ code.) Since I'm new to Go, it may not be the most idiomatic code. And I have only done basic testing and refactoring. "float10" isn't the greatest name. Maybe "decimal" or even "dec"? (in keeping with Go's predilection for short names) I'm open to suggestions...

I chose to pass and return by value rather than by pointer. I'm not sure if this is the best choice for a 10 byte struct. Would it be faster to pass by pointer? Returning by pointer forces heap allocation for intermediate results which isn't ideal. Pass and return by value is a good fit for immutable values which are my preference.

Go makes it really easy to benchmark so I checked the speed of division (the slowest operation). Micro-benchmarks are always dubious, but it gave me a rough idea. It showed about 400 ns per divide (on my iMac). I don't have comparable benchmarks for cSuneido or jSuneido, but that seems pretty good. I'm pretty sure it's better than cSuneido. (Of course, it's nowhere near as fast as native binary floating point done in hardware. The same benchmark with float64 gives about 7 ns per divide, although this is so small that it's even less likely to be accurate.)

As far as evaluating Go, so far I like it. Of course, it's well suited to low level code like this. Sublime Text + GoSublime works well. (The only issues have been with learning Sublime since I haven't used it much.) I might have broken out the debugger a couple of times if I'd been working in Java or C++, but I get the impression the debugger story for Go isn't that great. I managed easily enough with old school prints :-) I plan to give Eclipse + GoClipse a try at some point since I'm already familiar with Eclipse.

The code is embedded below but it's probably easier to read (or download) on GitHub.

Wednesday, April 02, 2014

TortoiseSVN + TortoiseHg Problem

I use Subversion (SVN) for cSuneido (for historical reasons) and Mercurial (Hg) for jSuneido, both on SourceForge (again for historical reasons).

On Windows I use TortoiseSVN (1.8.5) and TortoiseHg (2.11.2) with Pageant (part of PuTTY, but supplied with TortoiseHg) so I don't have to type a password all the time. This combination has worked well for a long time.

I came into work this morning and TortoiseSVN kept popping up a Plink dialog asking for my password. That's what Pageant is supposed to avoid, especially since SourceForge needs an SSH key, not a password.

TortoiseHg was working fine, which meant Pageant was ok.

I used TortoiseSVN a few days ago. As far as I can recall I didn't change anything since then. But possibly I updated it. There are so many updates going by these days that it's hard to remember.

I searched the web but didn't find anything that seemed to be related.

I tried rebooting. I tried changing my path to put TortoiseHg and TortoiseSVN in the opposite order. Didn't help.

After some digging I found TortoiseHg was using older versions of TortoisePlink and Pageant (both from 2012) whereas TortoiseSVN had a newer TortoisePlink (from 2014). I wasn't sure it was a good idea, but I tried replacing the new TortoisePlink with the old one, thinking that maybe it needed to match the version of Pageant.

That worked! Or at least appears to work. (I even rebooted to make sure the problem wouldn't come back.) It's probably going to break next time I update TortoiseSVN, and I'll probably forget the fix, but at least I'll have this blog post to jog my memory :-) And hopefully in the long run this will get sorted out. I can't be the only person running both. I'm not sure why TortoiseHg has such old versions. There seem to have been similar version issues a few years ago.

Monday, March 31, 2014

The Go Programming Language

Go has been around for a while. I looked at it when it first came out but didn't get too excited, partly because there weren't any good books about it, and that's how I like to investigate a language.

Recently I've been looking at it again. I've read two books - Programming in Go by Mark Summerfield and The Go Programming Language Phrasebook by David Chisnall. An Introduction to Programming in Go by Caleb Doxsey is available for free. These are decent, but so far I haven't found a Go book that I'd call great. There's a lot of material on the web which is useful, but I still prefer a good book. Also, most of the material is introductory - I haven't found much expert level material.

Here are my thoughts on Go (1.2), in no particular order. These are personal opinions and biased by thinking in terms of implementing Suneido, because that's the best basis for comparison that I have.

+ safe
Pointers, but no pointer arithmetic. Array bounds are checked. In theory, crash proof. (Like Java, unlike C/C++) But if you need it, there's the unsafe package.

+ garbage collection
Not as mature or high performance as Java, but steadily improving. And because not everything has to be on the heap (as in Java) there is less pressure on GC.

+ available on main platforms (Linux, Mac, Windows)

+ optional semicolons
Minor, but it's one less thing to type and to clutter up the code, and definitely my preference. I was always disappointed that D chose to keep semicolons.

+ capitalization for public / private
I'm biased since I use a similar approach in Suneido. The only (minor) thing I don't like about it is that you can't use capitalization to differentiate type names as is normal in C++ or Java.

+ goroutines and channels
An attractive alternative to threads and locks.

- no generics
Not so critical for application code, but this makes it really hard for library writers to provide things like alternate data structures. There's a built-in map that is generic, but if you want an ordered map, or a concurrent map or an immutable map you can't make it generic. Obviously, this is not an easy thing to add and it adds a lot of complexity to the language, but to me it's a drawback.

? built-in maps (hash tables)
Maps are obviously a good thing to have, but making them built into the language seems like an admission that the language itself is not sufficient to write such things (due to the lack of generics and operator overloading).

+ fast compiles
This was one of the explicit goals of Go.

+ simple, standard build system
No complicated make or ant files.

+ native executables (no runtime required like Java or .NET)
I don't mind having a VM like Java or .NET, but it is an extra hassle. Even with .NET, which is part of Windows, you can run into versioning issues. And having to distribute a 90mb JRE is a pain.

+ variable types can be inferred from initialization
e.g. var x = 123 or just x := 123
This avoids having to duplicate long types, e.g. m := make(map[long_type]another_long_type)

+ multiple return values, multiple assignment
e.g. a,b = fn() or x,y = y,x
This means you don't have to create special data structures (and in Java, allocate them on the heap) just to return multiple values.

- no ? : ternary operator
In conjunction with mandatory braces, this turns things like "return x < y ? x : y" into 5 lines of code. Supposedly ?: is hard to understand, but personally I've never found it a problem. If you don't want the extra operators you can make if-then-else an expression like Scala and write "return if (x < y) x else y". However you write it, it's still a branch you have to understand.
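
For the record, here's what that expression expands to in Go:

```go
package main

import "fmt"

// With no ?: operator and mandatory braces,
// "return x < y ? x : y" becomes:
func min(x, y int) int {
	if x < y {
		return x
	}
	return y
}

func main() {
	fmt.Println(min(3, 5)) // 3
}
```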

+ standalone functions
Not just methods as in Java. (Java 8 now has lambdas, but still doesn't allow top-level functions.)

+ functions are first class values
You can pass around references to functions, put them in variables or data structures, etc.

+ closures
Function literals are automatically closures that can reference variables in their environment, even if the function outlasts the environment.
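
A small example of a closure outliving its environment:

```go
package main

import "fmt"

// counter returns a function literal that closes over n;
// n outlives the call to counter because the closure still references it.
func counter() func() int {
	n := 0
	return func() int {
		n++
		return n
	}
}

func main() {
	next := counter()
	fmt.Println(next(), next(), next()) // 1 2 3
}
```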

+ slices
A slice is a safe reference to a portion of an array. Unlike C/C++ pointers, a slice includes length and capacity. This is a big improvement over bare pointers. (D has similar slices.) See Arrays, slices (and strings)
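
A quick demonstration of length, capacity, and the fact that a slice aliases its underlying array:

```go
package main

import "fmt"

func main() {
	a := [5]int{10, 20, 30, 40, 50} // a real array
	s := a[1:3]                     // slice: pointer into a, length 2, capacity 4
	fmt.Println(s, len(s), cap(s))  // [20 30] 2 4

	s[0] = 99         // writes through to the underlying array
	fmt.Println(a[1]) // 99

	s = append(s, 7) // still within capacity, so this overwrites a[3]
	fmt.Println(a)   // [10 99 30 7 50]
}
```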

- no immutable or read-only or const (other than const scalars)
I'm not surprised at this, but it's too bad. Immutability is very useful, especially with concurrency. Interestingly, Go strings are immutable (so at some level people see the benefit), but nothing else. One unfortunate result of this is that conversions between strings and arrays of bytes require copying, even though the underlying data is identical. This is another issue, like generics, that can add considerable complexity to the language and type system. And even where it is attempted, like C++ const, it often isn't ideal. The D language does have immutable and pure. But I would love to see immutable data and pure functions. (You can write your own immutable data structures, but again the lack of generics and operator overloading makes them clumsy.)
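
The copying is easy to observe: converting in either direction allocates a fresh copy, because []byte is mutable and the string must stay immutable.

```go
package main

import "fmt"

func main() {
	s := "hello"
	b := []byte(s) // allocates and copies, because []byte is mutable
	b[0] = 'H'
	fmt.Println(s, string(b)) // hello Hello - the original string is unchanged
}
```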

- no concurrent containers
I realize the Go way is to use channels and goroutines for concurrency, but I think there are still going to be times when a concurrent container would be useful and give better performance. And without generics, it's hard for libraries to provide this.

+ not everything on the heap (as opposed to Java)
You can embed one struct inside another without a pointer and a separate heap allocation. And you can pass or return a struct or put it in a variable by value, again meaning it doesn't have to be on the heap.
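
For example, a struct containing other structs inline, passed by value with no heap allocation required:

```go
package main

import "fmt"

type Point struct{ X, Y int }

// Rect contains two Points directly - no pointers and no separate
// heap allocations; the Points are laid out inline in Rect.
type Rect struct {
	Min, Max Point
}

// Passed by value: the whole struct is copied, not heap-allocated.
func area(r Rect) int {
	return (r.Max.X - r.Min.X) * (r.Max.Y - r.Min.Y)
}

func main() {
	r := Rect{Point{0, 0}, Point{3, 4}}
	fmt.Println(area(r)) // 12
}
```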

? no classes or inheritance
This is one of the more unconventional aspects of Go. Not having written much Go code it's hard to judge this. My feeling is that Go provides good alternatives. The only drawback may be porting existing code that uses classes and inheritance.

+ no separate primitives and boxed versions
This is a pain point in Java and a definite performance issue when implementing something like Suneido.

+ type declarations
Similar to a C/C++ typedef except that a Go type declaration introduces a new type. This is useful to create a short name for a complex type, or to prevent mixing incompatible units e.g. celsius and fahrenheit.

+ can define methods on scalars (e.g. int or string), not just on classes
For example, you could declare Celsius as int, and then define a ToFahrenheit method on it. (This is a bit like the Common Lisp Object System)
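
Putting the last two points together:

```go
package main

import "fmt"

// Celsius and Fahrenheit are new named types based on int -
// not aliases, so the two units can't be mixed accidentally.
type Celsius int
type Fahrenheit int

// Methods can be defined on any named type, not just structs.
func (c Celsius) ToFahrenheit() Fahrenheit {
	return Fahrenheit(c*9/5 + 32)
}

func main() {
	fmt.Println(Celsius(100).ToFahrenheit()) // 212
}
```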

+ "duck" typed interfaces
Types satisfy interfaces implicitly (by having the required methods), they do not have to explicitly declare what interfaces they satisfy. Very nice.
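
For example, a type satisfies an interface just by having the right method:

```go
package main

import "fmt"

// Stringish is satisfied by any type with a String() method -
// no "implements" declaration needed.
type Stringish interface {
	String() string
}

type Point struct{ X, Y int }

// Point never mentions Stringish, but satisfies it implicitly.
func (p Point) String() string {
	return fmt.Sprintf("(%d,%d)", p.X, p.Y)
}

func describe(s Stringish) string { return "value: " + s.String() }

func main() {
	fmt.Println(describe(Point{1, 2})) // value: (1,2)
}
```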

? no function overloading
Minor. It means more name variations, but keeps the language simpler.

? no operator overloading
Mostly not an issue. Makes it more awkward to use library provided data types.

+ flexible switch statement
Not limited to integer constants like C.
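
A switch can even omit the expression entirely and act as a cleaner if/else chain:

```go
package main

import "fmt"

func size(n int) string {
	// switch with no expression acts like a chain of if/else,
	// and cases aren't limited to integer constants.
	switch {
	case n < 10:
		return "small"
	case n < 100:
		return "medium"
	default:
		return "large"
	}
}

func main() {
	fmt.Println(size(5), size(50), size(500)) // small medium large
}
```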

? no assert
This is explained in the FAQ but I'm not sure I agree. But it's minor because it's easy enough to write your own.

? error return values instead of exceptions, no try-catch-finally
This is different than what I'm used to but I haven't written enough Go code to really evaluate it. Go's panic and recover are sufficient to implement Suneido's exceptions. See the FAQ, Error handling and Go, and Defer, Panic, and Recover
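
A rough sketch of the panic/recover pattern (hypothetical throw/catch names, not jSuneido or cSuneido code):

```go
package main

import "fmt"

// throw stands in for raising an exception.
func throw(msg string) {
	panic(msg)
}

// catch runs f and returns the "exception" message, if any.
// The deferred recover intercepts the panic and converts it
// back into an ordinary return value.
func catch(f func()) (err string) {
	defer func() {
		if e := recover(); e != nil {
			err = fmt.Sprint(e)
		}
	}()
	f()
	return ""
}

func main() {
	fmt.Println(catch(func() { throw("boom") })) // boom
}
```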

+ can integrate with C

+ strings are arrays of bytes (generally but not necessarily UTF8), not wide characters
The D language also takes this approach. It makes sense to me, and also fits with how Suneido works. One of the advantages is that it reduces conversions when reading UTF8 files. It also reduces the size in memory when dealing with mostly ASCII. See Strings, bytes, runes and characters in Go

+ decent standard libraries and an increasing number of third party ones

+ standard testing framework
Basic but reasonable.

+ good standard tools
Nice to have standard tools for building, formatting, embedded documentation, profiling, race detection, etc.

- limited IDE and refactoring support
Although there are Eclipse and IntelliJ plugins available, they are fairly basic and don't include much refactoring support. I've been using the Sublime Text plugin. It sounds like the primary Go developers aren't IDE users so this area has lagged a bit. There is some limited refactoring ability in the go fmt tool. This area will likely improve over time.

+ standard formatting
Minor, but it's nice to sidestep any formatting debates. I like that they chose tabs for indenting, since that's always been my preference. But I'm sure it bugs people who prefer spaces.

Wednesday, March 26, 2014

B-tree Range Estimation

One of the trickier issues to deal with in Suneido is when the database query optimizer picks what seems like a poor strategy. One of the difficulties is that these issues are usually data dependent - so you need to do the debugging on a large database.

Suneido's query optimizer is fairly straightforward, but even on relatively simple queries there are a lot of possible strategies and it can be quite hard to understand the end results.

We had an issue recently with jSuneido where the sort could be satisfied by an index, but the strategy it chose was reading by a different index and then sorting (with a temporary index).

It wasn't hard to discover that it thought reading by the other index was enough faster to justify the temporary index. The problem was it was wrong.

It came down to the B-tree range estimation. It was estimating the same key range quite differently for the two indexes, when the estimates should have been exactly the same.

One of the important choices in query optimization is which indexes to use. To help with this, Suneido uses the index B-trees to estimate how "selective" a given range of index keys is, i.e. what fraction of the records does a range include. Because the query optimizer looks at lots of possible strategies (combinatorial explosion) you want this estimation to be fast. You don't want to read the entire index.

Strangely, I used the identical approach on both cSuneido and jSuneido, but they ended up choosing different strategies for the same data.

The code was doing a lookup on the "from" and "to" keys, estimating the position of the key from the path down the tree. Because a single path down the tree only looks at a few nodes out of a potentially large number, it has to assume that the rest of the tree is balanced and node sizes are "average".

It didn't take long to discover a rather blatant bug in both jSuneido and cSuneido. I was averaging the node sizes of the "from" search and the "to" search in order to get a better idea of the average node size on each level. But I wasn't adjusting the position within the node. For example, if the "from" search went through position 5 of a node of size 10, and the "to" search went through position 25 of a node of size 30, when I averaged the node sizes it now thought the "to" search went through position 25 of a node of size 20 - obviously wrong. On the positive side, if the node sizes don't vary too much then it doesn't have a big effect.
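
The fix amounts to rescaling each position to the averaged node size. A hypothetical sketch of the idea (names invented, not Suneido's actual estimation code):

```go
package main

import "fmt"

// averagedPos averages the node sizes seen by the "from" and "to"
// searches at one tree level, rescaling each position proportionally
// so that e.g. position 25 of a node of size 30 stays at 5/6 of the
// way through the averaged node, rather than becoming position 25
// of a node of size 20.
func averagedPos(fromPos, fromSize, toPos, toSize int) (fp, tp, avgSize float64) {
	avgSize = float64(fromSize+toSize) / 2
	fp = float64(fromPos) / float64(fromSize) * avgSize
	tp = float64(toPos) / float64(toSize) * avgSize
	return
}

func main() {
	fp, tp, avg := averagedPos(5, 10, 25, 30)
	fmt.Printf("avg size %v: from at %v, to at %.2f\n", avg, fp, tp)
	// avg size 20: from at 10, to at 16.67
}
```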

One reason why this bug hadn't surfaced yet is that the exact estimates aren't that important since the optimizer is just comparing whether different indexes are better or worse, not what the absolute numbers are.

That bug was easy enough to fix, and I came up with a much simpler way to write the code, but I still wasn't getting very accurate results. I started running the numbers through a spreadsheet to try to see what was wrong with my calculations.

After banging my head against the wall for a long time I finally realized that the problem was that the trees were actually not very balanced. A B-tree only guarantees that the tree is balanced in terms of height i.e. all the branches are the same length. It does not guarantee that each branch of the tree leads to the same number of keys. Most B-trees allow nodes to vary between half empty and full. When a node gets full it is split into two half full nodes.

However, if you add keys in order, this leads to most nodes only being half full. So Suneido uses a common optimization to split unevenly if a node gets full by adding on the end. In the "worst" case (in terms of balance) this leads to the root of the tree having a left branch that is full, and a right branch that has a single key. That is exactly what my data looked like in the "bad" case, but I hadn't realized the significance.

Side note - this optimization applies more to B-trees that use fixed node sizes, like cSuneido. jSuneido's append-only database uses variable sized nodes, so space wastage isn't an issue. However, you still want to keep the branching high to minimize the height of the tree.

My solution was for the estimation code to look at the sizes of all the nodes on the first and second level of the tree (root and one level below), rather than assume they were all the same average size. This doesn't handle the tree being unbalanced below there, but the lower levels of the tree have less of an effect on the final result. Finally I got some results that were at least in the right ballpark, and good enough so that jSuneido now chooses the "correct" strategy.

Maybe I've been spoiled by Java, but the unsafe nature of C++ scares me. I made a stupid bug in the cSuneido changes that wrote one past the end of an array. This passed all the unit tests, but gave some strange random errors later on. In this case it wasn't hard to find the problem, but it makes me wonder.

As usual the code is in version control on SourceForge.

Thursday, March 20, 2014

A Curious Error

I got this curious error from Evernote on the Mac: a dialog complaining that my note exceeded the 100mb limit.

I'm pretty sure that small amount of text doesn't exceed 100mb :-)

Maybe it thought I was trying to insert something huge, if so, it wasn't anything I did intentionally. (I think I hit the TAB key, but when I tried that again it didn't give any errors.)

Tuesday, March 18, 2014

Eclipse Crashing

I got back from six weeks travel and fired up Eclipse (4.3 Kepler on Mac OS X). Not surprisingly, there were updates so I said ok to install them and restarted when prompted. Except instead of restarting it crashed :-(  I tried restarting several more times but it kept crashing.

Rather than waste a lot of time figuring out what went wrong it was quicker to rename my old copy of Eclipse, download a new copy, and then import my plugins from the old install. (One of the nice things about Eclipse is that it's just a folder, and you can trivially have multiple installs.)

The only other thing I had to do was check which Java Eclipse was using (Eclipse > Preferences > Java > Installed JREs). I found it only listed an old version so I used the Search button to find the new one, made it the default, quit from Eclipse, and deleted the old version of Java to ensure nothing would use it.

You'd think that leaving a computer turned off would ensure that everything would work when you got back (barring hardware failures). But interrupting the modern day firehose of updates unfortunately often leads to problems. (Although in theory it shouldn't.)

Sunday, January 26, 2014

Java 8

I just finished reading Java SE 8 for the Really Impatient by Cay S. Horstmann. I'd recommend it as an introduction to Java 8 features. I've also read Cay's Scala for the Impatient, and used his Core Java books for reference. No doubt there will be other Java 8 books arriving soon. Pragmatic Programmers has Functional Programming in Java: Harnessing the Power of Java 8 Lambda Expressions in beta. And Manning has Java 8 Lambdas in Action in Early Access. But unless I'm in a rush to learn about something, I don't usually go for beta or early access versions because I'd rather wait and read the final product.

Java 8 has seemed so far in the future that I haven't been thinking about it much. But I see it's supposed to be released in March, which isn't really that far off.

If I was starting a JVM project from scratch, I'd probably lean towards Scala. But for maintaining and improving the jSuneido Java code, I'm looking forward to Java 8, especially lambdas.

The current release version of Eclipse doesn't support Java 8, but there are early access releases available. Presumably support will be included in the next version of Eclipse. Intellij also has early support for Java 8.

Netbeans may be the best bet right now. Version 7.4 has Java 8 support, and the beta for Netbeans 8 is available.

Thursday, January 23, 2014

jSuneido Network Bug


Our installations run a scheduler as a client. (On jSuneido this could probably just be a thread on the server, but cSuneido is single threaded.)

On Windows, when we shut down the server, the scheduler will exit. On Linux the scheduler would hang.

I narrowed it down to a simple test case (on OS X, which seemed to behave like Linux):
  1. start the server
  2. start a client REPL
  3. from the client, execute: ServerEval("Exit")
  4. server exits
  5. client hangs (on Linux but not on Windows)
Strangely, if you killed the server (with Ctrl+C) then the client would get an exception instead of hanging.

I assumed that the client was blocking when it tried to read the response from the ServerEval (which never came because the server had terminated). And that on Linux, for some reason, it didn't recognize the socket was closed when it was blocked reading, although that didn't make a lot of sense.

I ran the client in the debugger and when it was hung, I paused it to see where it was. Sure enough it was in the socket read.

I searched the web trying to find anything related. There wasn't much, which was surprising. Most problems like this are documented by someone. 

But I did notice some of the code examples were checking the return value from read, and I wasn't. The documentation said read returns -1 when "the channel has reached end-of-stream". That didn't sound like channel closed to me, but it seemed like I should be checking it anyway.

And that was the problem. It wasn't actually blocking on the read, the read was returning -1, but I was looping until I read all the data, and that's what was hanging it.

To verify, I restored the code, made it hang, and checked the CPU usage - sure enough the client Java process was at 100% (because it was in a tight loop calling read over and over).

I'm still not sure why killing the server behaves differently from exiting normally. I guess the socket gets closed differently. (i.e. gracefully or not)

In hindsight it seems like an obvious bug in my code (not checking the return value). I think what threw me off was that it worked fine in Windows. Java is usually pretty good at hiding operating system differences, but not in this case.

Tuesday, January 21, 2014

Building cSuneido with Visual Studio 2013

It's not that long ago since I switched to building cSuneido with VS 2012, but after listening to some Channel 9 podcasts about enhancements to the C++ compiler I figured I should give the new version a try.

Note: Confusingly, Visual Studio 2013 = Visual C++ version 12 - off by one error :-)

The version I'm using is the free Visual Studio Express 2013 for Windows Desktop.

I started a new solution and projects rather than convert / update the existing ones so I wouldn't bring over any undesired garbage. Of course, starting from scratch meant running into some of the same errors as other times, but at least it's a little fresher in my mind this time.

One advantage of VS 2013 is that it came with support for building XP compatible applications. Originally they didn't have this in VS 2012 and it was added later (probably after the outcry from developers). I'd prefer to drop support for XP but we still have a lot of customers running it. We're working on getting them to upgrade.

I fixed a few more warnings in the code that the new compiler found, but other than that it went pretty smoothly.

I haven't measured the speed of the resulting executable, but from running the tests etc. it doesn't seem like there's any significant difference.

Assuming we don't find any problems we'll switch to using this version for our customers.

As usual, the code changes and the Visual Studio solution and projects are in version control on SourceForge. If you try building it, let me know how it goes.

Wednesday, January 08, 2014

A User Interface Detail

I recently read Microinteractions. (recommended) It gives lots of small examples of user interface/experience, many from Little Big Details. Which made me think of one detail from Suneido.

In Suneido's IDE, the LibraryView code editor has tabs, like a lot of editors and IDE's and other software like browsers. Suneido uses the Windows tab control - pretty standard.

The Windows tab control lets you put an icon on the tab, again, nothing new.

We also have a right-click context menu on the tab with the usual option for closing the tab. But that's awkward if you want to close multiple tabs. I wanted a "close" button on the tabs, like you see in a lot of places, e.g. Chrome.

But the Windows tab control doesn't have an easy way to do that (AFAIK), so I "cheated" and just switched the icon when you moused over the tab. (Note: You still have to click on the actual close button, clicking anywhere else on the tab just selects it.)

Although I did it that way for expediency, it turned out to have a few nice benefits. One is that the close button doesn't take up space on every tab. Eclipse only shows the close button on the currently selected tab, but it still reserves the space for it on all the other tabs.

But the benefit that I really like is that you can close a series of contiguous tabs by repeatedly clicking the close button of the leftmost one without moving the mouse. Whereas when the close button is on the right hand side of variable length tabs, you have to move the mouse to a new position after closing each tab. Chrome has mostly fixed length tabs, but they shrink/expand when required which still throws off the positioning. Admittedly, in some cases you could use the right-click context menu to close all the tabs, or close all except the current one. But this is simpler and also works to close N contiguous tabs by simply clicking N times in the same spot.

The downside of not always showing the close button is that it's not as discoverable. But in this case I don't think that's a big deal.

Obviously this is a pretty minor issue, but it surprises me that other tab controls (AFAIK) haven't used this approach.