Wednesday, February 15, 2023

Go Telemetry

There has been a big debate recently over the proposal to add telemetry to Go. 

It started with Russ Cox's multi-part Transparent Telemetry posts.

I read the proposal and it seemed well thought out. I could understand the need to get more information about how Go was actually being used. Collecting a few anonymous counters seemed relatively benign compared to the "big" (i.e. invasive) data being collected by seemingly everyone these days.

Naively, I didn't foresee the big pushback in the telemetry in the Go toolchain discussion, which was eventually locked after 506 comments (518 thumbs down to 118 thumbs up).

I must admit I have a few qualms myself because it's Google. Go has its own team, and I would say they have a good track record, but it's still Google paying their salaries and running their servers.

One point I missed until reading the discussion was that they would "temporarily" collect traffic logs with IP addresses. Supposedly this data would just be discarded, but how long until someone at Google decides they could "use" this data?

I think part of the knee-jerk reaction was because it's a compiler. That seems wrong somehow. It's a bit reminiscent of the Ken Thompson hack. We may not like it, but these days we accept that Facebook and Apple etc. are going to track us. VS Code is one of the most popular editors, and it sends large amounts of telemetry. (I keep meaning to switch to VSCodium.) I used to always opt in to sending telemetry because I wanted to help the developers. Nowadays I opt out of everything I can because it seems that most of it is just spying.

I don't have a lot to add to the debate. But I do have an idea/proposal that might help. How about if the telemetry was collected and published by a third party, someone with a vested interest in not abusing it? Perhaps someone like the Electronic Frontier Foundation. The proposal already said the data would be public. The Go team could access it from the public source just like anyone else. The Go team would still control the actual telemetry code, but since they wouldn't be collecting the data, it would be pointless to "sneak in" extra information.

It's a bit sad that it's almost impossible to collect legitimate data because so many big companies have abused data collection.

Monday, February 13, 2023

A Go Generics Technique

Say you want to make a generic hash map in Go. One of the questions is whether hash and equals should be methods on the key type, or whether they should be functions supplied when creating an instance. In general Go recommends passing functions. One of the advantages of this approach is that the functions can be closures which then have access to context.

In my case I had a mix of uses. In several cases I already had hash and equals methods (from a previous incarnation). In several other cases I needed context, so closures would be better.

After a certain amount of head scratching I came up with a way to handle both.

Normally a generic hash map would be parameterized by key and value types. I added a third "helper" type. This type supplies the hash and equals functions. I created two helpers - one that calls methods on the key, and one that stores references to supplied hash and equals functions.

To use the helper type you need an instance. A neat Go trick is that the helper that calls the methods can be struct{} - a valid type that is zero size, so there's no space overhead.

Getting the type constraints right took some experimentation. The key and value types do not have any constraints (any). The helper is parameterized by the key type. The helper that calls methods is obviously constrained by an interface with those methods. It confused me at first that the constraints that would normally be on the key type get moved to the helper, but I guess that makes sense because it is the specific helpers that have requirements.
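
Here's a rough sketch of the shape of it. The names are mine, not the actual gSuneido code:

// Helper supplies hash and equals for the key type K
type Helper[K any] interface {
    Hash(k K) uint32
    Equal(x, y K) bool
}

// HashMap is parameterized by key, value, and helper types
type HashMap[K any, V any, H Helper[K]] struct {
    helper H
    // buckets of key/value slots omitted
}

// Hashable is the constraint for keys that supply their own methods
type Hashable[K any] interface {
    Hash() uint32
    Equal(K) bool
}

// MethodHelper calls the methods on the key - struct{} so zero size
type MethodHelper[K Hashable[K]] struct{}

func (MethodHelper[K]) Hash(k K) uint32   { return k.Hash() }
func (MethodHelper[K]) Equal(x, y K) bool { return x.Equal(y) }

// FuncHelper stores references to supplied hash and equals functions,
// which can be closures with access to context
type FuncHelper[K any] struct {
    hash  func(K) uint32
    equal func(K, K) bool
}

func (h FuncHelper[K]) Hash(k K) uint32   { return h.hash(k) }
func (h FuncHelper[K]) Equal(x, y K) bool { return h.equal(x, y) }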

PS. Go has a built-in map type that is "generic" (but predates generics in the language). The problem is that it only works with types that have built-in hash and equals. If you need to write your own hash and equals, you can't use it.

Wednesday, February 01, 2023

Fuzz Testing Database Queries

The gSuneido database was ready for production, as far as I could tell. All our tests passed.

But there was a steady trickle of bugs showing up. Sure, they were obscure cases, but they were showing up in actual application code, so they weren't that obscure.

Every time I'd think it was ready to deploy further, another bug would show up.

I finally decided I had to do something to flush out these obscure bugs. Waiting for them to show up in production was not the way to go.

I had thought about trying to fuzz test database queries before, but it always seemed too hard. Even if I could figure out a way to generate random but legal queries, how would I check the results? Then I realized I could compare jSuneido and gSuneido. It wasn't a perfect test since they were roughly the same design and therefore could have matching bugs. But I had made enough changes and improvements to gSuneido that they were now significantly different. And even if it wasn't a perfect test, it was a heck of a lot better than nothing.

I puzzled over how to generate valid random queries. And what kind of data to use. I started with a simple set of four tables with obvious joins between them. The joins seemed to be the most constrained element, so I started with simply picking randomly from a fixed list of joins.

e.g. cus join ivc

I represented the query as a tree of nested objects and wrote a function to randomly pick a sub-expression branch.

Unions are added by randomly picking a branch and replacing it with: branch union branch

e.g. (cus join ivc) union (cus join ivc)

Leftjoins are added by randomly replacing some of the joins.

Then where, rename, extend, and remove (project) are added at random spots.

(((cus where ck = 12) join ivc) union ((cus leftjoin ivc) extend x1)) remove c1

Finally it optionally adds a sort.

The resulting queries aren't necessarily realistic, but they seemed to cover most of the variations.
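
For flavor, here's a much simplified, string-based sketch of that kind of generation. The real version builds a tree of nested objects and mutates random branches; the table and column names are the ones from the examples above:

import (
    "math/rand"
    "strconv"
)

func randomQuery(rng *rand.Rand) string {
    q := "cus join ivc"
    if rng.Intn(2) == 0 { // union: replace a branch with "branch union branch"
        q = "(" + q + ") union (" + q + ")"
    }
    if rng.Intn(2) == 0 { // sometimes a where on a random key value
        q = "(" + q + ") where ck = " + strconv.Itoa(rng.Intn(1000))
    }
    if rng.Intn(2) == 0 { // sometimes an extend and a remove (project)
        q = "(" + q + " extend x1) remove c1"
    }
    if rng.Intn(2) == 0 { // optionally add a sort
        q += " sort ck"
    }
    return q
}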

It's a little more ad-hoc than I originally hoped. You could generate random queries from the grammar, and they would be valid syntax, but queries also have to have valid semantics, and that isn't represented in the grammar.

First I just parsed and optimized the queries, no execution. This soon triggered some assertion failures which uncovered a few bugs.

The next step was to compare the actual results between gSuneido and jSuneido. I decided the simplest approach was to calculate a checksum of the result and output queries and their result checksums to a text file. Then I could re-run those queries on jSuneido and compare the checksums.
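
Roughly, something like this, where the names and the row representation are stand-ins, not the actual code:

import (
    "fmt"
    "hash/crc32"
    "io"
)

// run a query, checksum its rows, and log "query <tab> checksum" so the
// same file can be replayed against the other implementation
func logQueryChecksum(w io.Writer, query string, rows []string) {
    crc := crc32.NewIEEE()
    for _, row := range rows {
        crc.Write([]byte(row))
    }
    fmt.Fprintf(w, "%s\t%08x\n", query, crc.Sum32())
}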

In the end I found about a dozen bugs. Several of them were in jSuneido, which is a little surprising since it has been in production with thousands of users for about 10 years.

The problem with finding a bug through fuzzing is that the inputs it finds are often messy. Some fuzzing systems will try to simplify the inputs for you. I just did it manually, starting the debugging of each failure by simplifying the query as much as possible while still retaining the failure. Fuzzing can make finding bugs easier, but it doesn't really help with fixing them. And the more you fuzz, the more obscure the bugs get. On the other hand, by the end I got pretty good at tracking them down, inserting prints or assertions or using the debugger.

I added all the failed queries to a "corpus" that gets run every time, along with new random queries as a defense against regressions. In theory they would get generated randomly again, but that could potentially take a long time.

I called it success when I ran about 500,000 queries (roughly two hours of processing) with no differences. The plan is to add this to our continuous test system and fuzz for an hour or so per night when we have spare compute power. That should prevent regressions and possibly find even more obscure bugs.

I'm pretty happy with how this turned out. It only took me a few days work to write the fuzzing system (not counting the time to actually fix the bugs), and it flushed out a bunch of bugs that I'm sure would have haunted me for a long time otherwise.

Of course, in hindsight I should have done this 20 years ago, or at least 10 years ago when I wrote a second implementation (jSuneido). Sometimes I'm a little slow! But better now than never.

Maybe the counterpart to Eric Raymond's "given enough eyeballs, all bugs are shallow" is "given enough fuzzing, all bugs are visible". (Of course, it's never "all", but you get the idea.)

Thursday, January 26, 2023

AI

I used to think AI (when it finally "arrived") would help compensate for the many irrationalities of Homo sapiens. What can I say, I grew up on a steady diet of science fiction.

And now AI is arriving in the form of ChatGPT. And it successfully duplicates the failings of human brains. I could already get all kinds of biases, misconceptions, confabulations, and outright lies from humans, amplified by the internet. Now I can get it from AI too.

Even in the specialized area of programming assistance, I'm somewhat skeptical. Tabnine is a good, helpful tool. It claims it wrote 15% of my code in the last month. But when I review the highlights, they're one or two short lines of code. Not much 'I' in that AI. To me, the challenge in coding is writing coherent, well organized code. Copying and pasting snippets will never achieve that. It seems to me it's just going to encourage even more boilerplate. Why not, when it's generated for you? Think how many lines a day you can "write". Most code is crap (including mine), and that's what these models are trained on. Even if you wanted to train it on "good" code, where do you find that? Who is going to judge it?

Perhaps this is just a temporary situation and AI will solve these problems. I'm not optimistic for the near future because it seems inherent in the current approach.

Although, there is perhaps some cause for hope: https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/

Meanwhile, after billions of dollars worth of research, self driving cars are currently a flop. I suspect that's a temporary setback.

Interesting times.

Wednesday, January 04, 2023

Three Weeks for Three Tweaks

It started with reports of slow database queries. At first I didn't pay too much attention, after all, some queries are slow. We're dealing with quite large data (in a small business sense, not in Google terms) and customers are choosing what to select on and how to sort.

It progressed to the same query being slow sometimes and fast other times. That seemed a little more suspicious, but there were still a lot of factors that might have explained it.

Finally someone came up with a simple example where the same query, run on the same data, was 1000 times slower when you ran it slightly differently. That was bad news for me, since it definitely meant there was a problem with the query optimization. Why was it picking such a slow strategy sometimes, when there was an obviously better one? Especially when it was such an extreme difference. The optimization shouldn't have to be very accurate when there is a 1000 times difference!

It didn't turn out to be a bug, the code was working as designed. It was just a particular scenario that wasn't handled well by the current design.

After 20 years, the easy improvements have been made and I'm into the realm of diminishing returns. I ended up needing to "tweak" three aspects of the code in order to fix the issue. And all three were required before I could see if it was going to work. TL;DR - it did solve the problem.

One way of looking at the issue was that the optimization was getting caught by a local minimum. It wasn't smart enough to use a sub-optimal strategy in one spot in order to allow a better strategy overall. (It doesn't explore every possible strategy; for a complex query that would be far too slow.)

None of the changes were difficult, but they were all somewhat invasive, requiring changes to all the query operations.

Background

In case you actually want to try to follow what I'm talking about, here's a little background on how Suneido implements queries.

A query like:

(table1 join table2) union table3

gets parsed into a tree like:
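
            union
           /     \
        join      table3
       /    \
  table1    table2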

where the leaves are database tables and the other nodes are operations. Each node has one or more strategies for execution, plus some parameters like choice of index.

Optimization starts with calling the optimize method on the root operation (union in this case) which then calls optimize on each of its sub-queries.

Execution works similarly by calling the get method on the root operation, which then "pulls" data from its sub-queries.

Tweak #1

Let's say we have a query like:

table1 join table2 where id=1

During optimization, query operations can ask their children for their estimated number of rows. In this case join would get the table size from table1 e.g. 1000 and one (1) from the where since it's selecting on a unique id. But that information isn't sufficient to estimate the fan-out of the join.

So the first change I made was to return the "population" count in addition to the row count. The where could return 1 for the result count and e.g. 10,000 for the population i.e. table2 size. This allows the join to estimate a more realistic fan-out of 1:10.
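
In outline, with invented names, each operation now reports both numbers:

// each query operation estimates both its result size and the size of
// the underlying population it is selecting from
type Query interface {
    Nrows() (count, population int)
}

// e.g. "table2 where id=1" returns count = 1 (unique id) and
// population = 10,000 (the table2 size), so the join above it can
// estimate a 1:10 fan-out against table1's 1,000 rows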

Tweak #2

The next problem is when a temporary index is required, there are two costs - the "fixed" cost of building the index, and the "variable" cost of reading from it. But until now these had been combined into a single "cost". Other operations usually have only a variable cost.

Continuing the example from Tweak #1: now that the join can estimate the fan-out is 1:10, it can estimate it's only going to read 10 records from table2. So the variable cost will be low, but we'll still incur the entire fixed cost. So in this case we want to minimize the fixed cost. But to do that, I needed to separate the cost into fixed and variable parts everywhere.

Tweak #3

The remaining problem was that join didn't have any way to tell the sub-query that only a small part of the result would be read. To address this, I added a "fraction" argument to allow queries to tell their sub-queries how much of the result they estimated they would read.

In the running example, 10 rows from 10,000 would be a fraction of .001. Where before it would choose a strategy with e.g. a fixed cost of 1000 and a variable cost of 1000 (total 2000) over one with a fixed cost of 0 and a variable cost of 5000 (total 5000), now the 5000 would be multiplied by .001, giving 5, which is obviously preferable to 2000.
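
Putting tweaks #2 and #3 together, the cost model works roughly like this (a sketch, not the actual code):

type Cost struct {
    Fixed    float64 // e.g. building a temporary index, paid in full regardless
    Variable float64 // reading the results, scaled by how much is actually read
}

// effective cost when the parent estimates it will only read `fraction` of the result
func effective(c Cost, fraction float64) float64 {
    return c.Fixed + c.Variable*fraction
}

// with fraction = .001:
//   fixed 1000, variable 1000  ->  1000 + 1000*.001 = 1001
//   fixed 0,    variable 5000  ->     0 + 5000*.001 = 5
// so the strategy with no fixed cost now wins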

This allowed the optimization to avoid the local minimum.

Conclusion

I'm not sure how common the problem scenario is. It was only because we ran into such an extreme case (1000 times slower) that I got motivated to investigate. Perhaps less extreme cases also occur and will be improved by these changes. As far as I can reason, the changes should not make any cases worse.

Saturday, December 17, 2022

Go Tip

Let's say you have a function:

func ShouldNotReachHere() {
    panic("should not reach here")
}

And you try to use it like:

func Fn(x int) int {
    switch x {
    case 1:
        ... return ...
    case 2:
        ... return ...
    default:
        ShouldNotReachHere()
    }
}

Unfortunately, that won't work. You'll get a "missing return" error when you try to compile. That's because the compiler doesn't know that ShouldNotReachHere never returns. (Actually, it does know that when it's compiling ShouldNotReachHere, but it doesn't keep track of that information.) C++ has [[noreturn]] to specify that but currently Go doesn't have an equivalent.

You could add a dummy return statement but I find that misleading since it will never actually return anything.

What I tend to do is add a dummy return type to ShouldNotReachHere (it's not critical what type since you're not actually using it).

func ShouldNotReachHere() int {
    panic("should not reach here")
}

and then use it like:

panic(ShouldNotReachHere())

If that was the only way you used it, then ShouldNotReachHere could just be a string constant. But defining it this way means you're not forced to use panic. (And if it was a function returning the string, then you could forget to use panic.)
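
For example (the case bodies here are just placeholders), the earlier switch now compiles without a dummy return, because panic is a terminating statement:

func Fn(x int) int {
    switch x {
    case 1:
        return 10 // placeholder
    case 2:
        return 20 // placeholder
    default:
        panic(ShouldNotReachHere())
    }
}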

Sunday, December 04, 2022

Blogging, Suneido, and Programming Languages

Another blogger I follow has announced they are stopping blogging. It's sad (to me) how the world has shifted away from blogging. Of course, there are still active blogs, but far fewer than there were. I thought maybe the pandemic would give people time to do more blogging but it didn't seem to work that way. Of course, it always surprised me that smart productive busy people would take the time to write good blog posts.

I'm no exception. I post less and less often, other than photographs on my non-work blog. But even that is a shift from my past longer text posts. Readers have dwindled too. (Not that I ever had a big audience.)

But looking back through some of my blog posts, I'm glad I wrote them even if hardly anyone read them. It's a bit like a sporadic diary, but more thoughtfully written than a personal diary would be. I started blogging in 2005. It would have been interesting to go back farther but even in 2005 the tech world was a different place than today.

Thinking about this led me back here. I was writing a document for my staff, and some of it seemed of slightly more general interest so I thought I'd write about it.

Here's roughly when the versions of Suneido went into "production" (used by customers)

  • 2000 cSuneido (C++)
  • 2010 jSuneido (Java)
  • 2020 gSuneido client (Go)
  • 2022 gSuneido server
  • 2022 suneido.js (browser client)

Two data points don't mean much, but it's interesting that it was roughly 10 years between the major versions. The concurrent release of suneido.js is because another programmer did the development. (I started it, but developing both gSuneido and suneido.js at the same time was too much.)

All three implementation languages (C++, Java, Go) were in their early stages when I started using them. I was lucky that all of them went on to become mainstream. Some of the other languages I considered, such as C#/.NET and Kotlin, have also been successful. Scala and D not as much, although they're still around.

C++ is a fascinating language. But it's too complex and it's unsafe. Life is too short. (gSuneido does have some unsafe code to interface with Windows and it has been the source of the hardest bugs.)

Java has a world class runtime with a great garbage collector and JIT compiler. But the language itself is not really a good fit for low level systems programming. gSuneido, with a simple byte code interpreter and no JIT still manages to outperform jSuneido. If Project Valhalla ever gets released that will help. But it's been "coming soon" since 2014.

Go felt like a bit of a gamble at the time. The garbage collector wasn't very good and neither was performance. But the language was a good fit for implementing Suneido. And it seemed like Google was committed to it. Although I wish it was a little less of a one-company language.

One of the things that bugged me about Go was its lack of generics. But now that it has them, I don't really use them that much. The people that resisted generics would say "I told you so". But it did simplify certain kinds of code and I'm still glad to have it when I need it. 

I wish Go had better support for immutability. C++ and Java aren't much better but at least they have a little bit. Immutability (and pure functions) are one of the best tools to handle concurrency, and concurrency is supposed to be Go's strong point. I think there's potential for a conventional mainstream language with strong immutability. I realize there are functional languages with strong immutability, but so far they're not really mainstream.

The big thing these days is Rust, but Suneido is garbage collected, and unless you want to write your own garbage collector (no thanks, been there, done that) the implementation language also needs to be garbage collected.

As far as the Suneido language itself, I think it has stood the test of time reasonably well. It was never intended to be especially novel or original, although at the time object-oriented programming and dynamic languages were leading edge. There have been a few minor adjustments to the language, a bit of syntactic sugar here and there, but for the most part code written 20 years ago would still run today. There are a few things I'd change but overall, when I program in Suneido, there's not a lot I wish was different.

Wednesday, November 02, 2022

The Value of Fuzz Testing

Coincidentally after my Go Fuzz post yesterday, today I see:

Why Did the OpenSSL Punycode Vulnerability Happen

Buffer overruns aren't a problem with Go (unless you're using unsafe) but the lesson still applies. I should spend more time writing fuzz tests. Especially when the Go tools make it easy.

Tuesday, November 01, 2022

Go Fuzz

Go 1.18 included a new fuzz testing facility (overshadowed by generics). I haven't used it a lot but it has been helpful in a few cases. For example, I used it to test gSuneido's regular expression code and found a number of obscure bugs.

A few days ago, one of our customers got a runtime bounds error from parsing a date. This code has been running in production for years without seeing this error so presumably it was something obscure. The date parsing code is a little complicated because dates can be written many different ways. The error log didn't have the bad input, but it did have a call stack, so it probably wouldn't have been too hard to find the issue manually. But I was curious whether fuzzing would find it. And if there was one bug, maybe there were more.

To keep it simple, I didn't check for correct results, so the test was just looking for panics.
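
The test itself was tiny; something along these lines, where ParseDate and the seed values are stand-ins:

import "testing"

func FuzzParseDate(f *testing.F) {
    f.Add("2022-11-01") // seed corpus
    f.Add("Nov 1, 2022")
    f.Fuzz(func(t *testing.T, s string) {
        ParseDate(s) // only looking for panics, not checking results
    })
}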

One of the things that should be simple, but I found confusing, was how to actually run fuzz tests. I'm not sure if it's the simplest, but this seems to work:

go test -fuzz=FuzzParseDate -run=FuzzParseDate

Within a few seconds the fuzz test found an input that would cause a panic.

"0A0A0A0A0A0A0A0A0A00000000"

That's probably not what the customer entered, but it was the same bug, and easily fixed once I could recreate it.

I let it run until it was no longer finding "interesting" inputs which took a few minutes. It didn't find any more bugs.

Ideally the test would be checking for correct output, not just lack of panics. But that requires a second implementation, which I don't have. Sometimes you can round-trip e.g. encode/decode but that wouldn't work in this case.

Monday, April 18, 2022

Git + Windows + Parallels + make + msys sh + Go

Git recently made a security fix. Unfortunately, it broke things for a lot of people, including me. Just in case anyone else has a similar obscure setup (and for my own notes), here's how I solved my issue.

My configuration is a bit off the beaten path.

  • I am working on a Mac
  • I have my git repos in Dropbox
  • I use Parallels to build and test on Windows
  • I use make to run my go build
  • make is using msys sh as its shell

As of 1.18, Go includes VCS info in builds. So when I ran a Go build on Windows, it would fail with:

# cd x:\gsuneido; git status --porcelain
fatal: unsafe repository ('//Mac/Dropbox/gsuneido' is owned by someone else)
To add an exception for this directory, call:
        git config --global --add safe.directory '%(prefix)///Mac/Dropbox/gsuneido'
error obtaining VCS status: exit status 128
        Use -buildvcs=false to disable VCS stamping.

Adding -buildvcs=false did get the builds working, but it seemed ugly to disable a Go feature to work around a Git issue. It also didn't help if I wanted to do other Git commands.

I struggled with adding the safe.directory. I wasn't sure if %(prefix) was literal or a variable. I also wasn't sure whether it should be forward slashes or back slashes, and how many (a perpetual issue on Windows). And I wasn't sure about the quoting. Eventually, I just edited my .gitconfig with a text editor.

Here's what worked:

[safe]
	directory = %(prefix)///Mac/Dropbox/gsuneido

Now I could do git commands.

But my build still failed with the same error!?

make is using msys sh as its shell. And sure enough, from within sh, I was back to the same error. git config --global --list didn't show my safe directory. That turned out to be because my home directory from inside sh was different. If I ran git config --global --add safe.directory from within sh, it created another .gitconfig in /home/andrew. Now I could run git commands from sh, and my build works.

I'm a little nervous about having multiple .gitconfig files, one on Mac, one for Windows, and one for sh on Windows but I don't have anything critical in there, so hopefully it'll be ok.

I'm all for security fixes, and I try to keep up to date, but it's definitely frustrating when it breaks stuff.

Tuesday, February 01, 2022

A Better LRU Cache for gSuneido

When I was working on gSuneido performance I discovered that the LRU cache in the standard library was one of the hot spots.

It was a simple design - a hash map of keys to values, plus an array of keys to track which was least recently used. When I wrote it, many years ago, it was just a quick addition. I had no idea it would be a heavily used bottleneck. Usage increased a lot when we added the easy ability to memoize a function.

I actually had to work backwards to find that LruCache was a bottleneck. What I first discovered (with the Go memory profiler) was that deepEqual was doing the majority of allocation. It was being called by SuObject Equal. The allocation was to track which comparisons were in progress, to handle self-referential data. I was able to tweak deepEqual to greatly reduce the allocation. That removed the majority of the bottleneck.

But why were there so many object comparisons? I added debugging to print the Suneido call stack every 100 comparisons. It was coming from object.Remove, which was being called by LruCache. To maintain its LRU list, it would move items to the most-recently-used end of the list by removing them and re-adding them. Since this was just an array, it did a linear search to find the item to remove, i.e. a lot of comparisons. Originally, caches had mostly been small, as you could tell from the default size of 10. But now the most common size was 200. 10 is reasonable for linear search; 200 is not so good.

In addition, originally keys were simple values like strings. But keys often ended up being several values in an object. The combination of linear searches through long arrays of composite values is what led to the bottleneck.

I decided to make LruCache built-in, i.e. written in Go. But I also needed a new design that would avoid the linear searches. The normal way to track LRU is with a doubly linked list. I started in that direction, but linked lists are not a good fit for modern CPUs. Because the entries are individually allocated, they are scattered in memory, which leads to CPU cache misses. Contiguous arrays are much better because memories are optimized for sequential access. You could store the linked list entries in a contiguous block, but you still have the overhead of pointers (or indexes) and you still don't have locality or sequential access.

I ended up with the following design.

  • an unordered array of key, value pairs (entries)
  • a hash map of keys to entry indexes (hmap)
  • an array of entry indexes in LRU order

A successful lookup becomes slightly more complex - look in the hash table for the entry index, and then get the value from the entries array. Finally, we have to find that entry index in the lru array (a linear search but of small integers) and move it to the most recently used end.

The hash table lookup will still require comparing multiple argument objects, but far fewer than a linear search.

To replace the oldest item, we take the entry index from the least recently used end of the lru array. From that entry we get the key to remove it from the hash table. And we reuse that slot in the entries array to store the new key and value.
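
A stripped-down sketch of the design - the real version hashes arbitrary Suneido values rather than relying on comparable keys, and handles updating existing keys:

type entry[K comparable, V any] struct {
    key K
    val V
}

type LruCache[K comparable, V any] struct {
    entries []entry[K, V] // unordered key/value pairs
    hmap    map[K]int     // key -> index into entries
    lru     []int         // entry indexes, least recently used first
    size    int
}

func NewLruCache[K comparable, V any](size int) *LruCache[K, V] {
    return &LruCache[K, V]{hmap: make(map[K]int), size: size}
}

func (c *LruCache[K, V]) Get(key K) (V, bool) {
    ei, ok := c.hmap[key]
    if !ok {
        var zero V
        return zero, false
    }
    // find ei in the lru array (linear search, but over small integers,
    // not composite keys) and move it to the most recently used end
    for i, e := range c.lru {
        if e == ei {
            c.lru = append(append(c.lru[:i], c.lru[i+1:]...), ei)
            break
        }
    }
    return c.entries[ei].val, true
}

// Put assumes the key is not already present
func (c *LruCache[K, V]) Put(key K, val V) {
    if len(c.entries) < c.size {
        c.entries = append(c.entries, entry[K, V]{key, val})
        ei := len(c.entries) - 1
        c.hmap[key] = ei
        c.lru = append(c.lru, ei)
        return
    }
    // evict the least recently used entry and reuse its slot
    old := c.lru[0]
    delete(c.hmap, c.entries[old].key)
    c.entries[old] = entry[K, V]{key, val}
    c.hmap[key] = old
    c.lru = append(c.lru[1:], old)
}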

The new built-in LruCache was about 100 times faster than the old one. Overall, it improved the speed of our application test suite by about 5%. That’s pretty good for a couple days work.

ASIDE: There are two main aspects to a cache. The focus is usually on which items to evict i.e. least recently used or least frequently used. The other aspect is which items to add. The normal assumption is that you add everything. But actually, you really only want to add items you’re going to be asked for again. Otherwise you’re evicting potentially useful items. We can’t predict the future, but we can assume that items we see twice are more likely to be seen again. One approach is to use a bloom filter to track which keys you’ve seen and only add to the cache ones you’ve seen before.

Saturday, January 29, 2022

Checking Immutable Data

The database implementation in the Go version of Suneido relies a lot on immutable data that is shared between threads without locking.

The problem is that Go has no support for immutability. It doesn’t have const like C++ or even final like Java. (Go has const but only for simple numbers and strings.)

I was seeing some sporadic crashes and corruption. One possible cause was something accidentally modifying the theoretically immutable data. The immutable database state is a large complex set of linked structures processed by a lot of complex code. As with a lot of immutable persistent data structures you don’t copy the entire structure to make a new version, you basically path copy from each spot you want to change up the tree to the root. It's easy to miss copying something and end up mistakenly modifying the original.

I had recently been using checksums (hashes) to verify that the data read from the database file was identical to the data before it was written. (To verify the reading and writing code, not to check for hardware errors.)

I realized I could use the same checksums to detect erroneous changes to the immutable state. I ran a background goroutine that looped forever: fetch the state (atomically), compute its checksum, wait a short time (e.g. 100 ms), and then compute the hash again (on the same state). If something had incorrectly mutated the state, the checksums would be different.
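
In outline it looks something like this, where DbState and the hash function stand in for the real state and checksum code:

import "time"

type DbState struct{ /* the immutable database state */ }

// run in a background goroutine: hash the same state snapshot twice with a
// pause in between; any difference means something mutated "immutable" data
func checkState(getState func() *DbState, hash func(*DbState) uint64) {
    for {
        state := getState() // atomic fetch of the current state
        h := hash(state)
        time.Sleep(100 * time.Millisecond)
        if hash(state) != h {
            panic("immutable database state was modified")
        }
    }
}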

And I actually found a few bugs that way :-)

Although unfortunately they were not the ones that were causing the crashes :-(

Saturday, May 15, 2021

Blogger Issues

Yesterday afternoon I wrote a blog post about my gSuneido progress. In the evening I got an email saying "Your post has been deleted" because "Your content has violated our Malware and Viruses policy."

The post was just some text and a couple of screenshots. It's hard to see how it could contain malware or viruses. Of course, it was gone, so I couldn't prove that. And of course, there was no human to contact.

It was funny because it actually made me a bit upset. I think that was partly from the feeling of helplessness against the faceless Google behemoth. A bit like dealing with the government. And it's free, so what can you say?

This morning I got another email saying "We have re-evaluated the post. Upon review, the post has been reinstated." Who exactly is "we"? Somehow I doubt that was a human. Now our software gets to use the royal "we"? I suspect it would have been more honest to say "sorry, we screwed up"

It was still not showing up, but then I found they had put it back as a draft and I had to publish it again.

A quick search found someone else reporting a similar issue and Google responding with "we're aware of the problem".

It was a good reminder to back up my content. Not that there's anything too important, but it's of nostalgic interest to me, if nothing else. (You can download from the blog Settings.)

Friday, May 14, 2021

Another gSuneido Milestone

This screenshot probably doesn't look too significant - just the Suneido IDE. The only noticeable difference is down in the bottom left corner. Normally it would show something like:

Instead it shows:

That means gSuneido is running "standalone", i.e. using its own database instead of connecting as a client to a jSuneido database. While the surface difference is tiny, internally this is a huge jump.

I've been working away on the gSuneido database over the last year at the same time that we've been debugging and then rolling out the gSuneido client.

If I had just ported the jSuneido database implementation it would have been much easier, but what would be the fun in that. I kept the query implementation but redesigned the storage engine and transaction handling. I'd call it second system effect, but it's more like third system since I also redesigned this for jSuneido.

I still have lots to do. Although the IDE starts up, it's quite shaky and easily crashed. Many of the tests fail. But even to get to this point a huge number of pieces have to work correctly together. It's a bit like building a mechanical clock and reaching the point where it first starts to tick.

Sunday, March 14, 2021

Twenty Years of cSuneido

Last week was a milestone. We (my company) finished converting all our customers from cSuneido (the original C++ implementation) to gSuneido (the most recent implementation, in Go).

That means I no longer have to maintain cSuneido and I no longer have to deal with C++. (sigh of relief)

cSuneido has had a good run. We deployed it to our first customer in 2000, so it's been in continuous production use for over 20 years. It's served us well.

When I first started developing Suneido, in the late 1990's, C++ was relatively new on PC's. I started with Walter Bright's Zortech C++ which became Symantec C++ (and later Digital Mars C++). Later I moved to Microsoft C++ and MinGW.

Suneido, like most dynamic languages, is garbage collected. But C++ is not. I implemented a series of my own increasingly sophisticated conservative garbage collectors. But eventually I admitted my time was better spent on other things and I switched to using the Boehm-Demers-Weiser conservative garbage collector. Under normal use, conservative garbage collection works well. But there are cases where memory is not recycled and eventually you run out. That was somewhat tolerable on the client side, but it wasn't so good on the server side. (That was one of the factors that prompted the development of jSuneido, the Java version that we use on the server side. Another factor was the lack of concurrency support in C++ at that time.) It seemed for a while that the pendulum was swinging towards garbage collection. But Rust has given new life to manual memory management.

Honestly, I won't be sorry to leave C++ behind. It has grown to be extremely complex, and while you can avoid much of that complexity, it's hard to not be affected by it. I've also had my fill of unsafe languages. Even after 20 years of fixing bugs, there are very likely still things like potential buffer overflows in cSuneido. (Ironically, one of the things that added a lot of complexity to C++ was template generics. Meanwhile I am anxiously waiting for the upcoming addition of generics in Go. However, Go's generics will be much simpler than C++'s Turing complete template programming.)

While it might seem crazy to re-implement the same program (Suneido) three times, it's been an interesting exercise. I've learned things each time, and made improvements each time. It's been extra work to maintain multiple versions, but it's also caught bugs that would have been missed if I'd only had a single implementation. Doing it in three quite different languages - C++, Java, and Go, has also been enlightening. And having the hard constraint of needing to flawlessly run a large existing code base (about a million lines of Suneido code) means I've avoided most of the dangers of "second system" effects.

So far I've only implemented the client side of gSuneido. We are still using jSuneido (the Java version) for the server side. I'm currently working on implementing the database/server for gSuneido (in Go). Once that's finished I intend to retire jSuneido as well and be back to a single implementation to maintain, like the good old days :-) And given where I'm at in my career gSuneido will almost certainly be the last implementation I do. I wonder if it will last as long as cSuneido did?

Wednesday, February 10, 2021

Tools: Joplin Notes App

I recently (Nov.) started using Joplin, "An open source note taking and to-do application with synchronization capabilities" and I'm quite happy with it.

I've been a long time Evernote user (over 10 years). Although it was a bit rough at first (see A Plea to Evernote) it has worked well for me. Like Joplin, it meets my requirement for something that runs on Windows, Mac, and phone/tablet, and works off-line. (Joplin also runs on Linux.)

I'm pretty sure Evernote is an example of Conway's Law at work. Their versions have the same overall features, but there are enough small differences to be quite annoying when you're switching back and forth. You'd think someone at a high level would push for consistency. It's stupid stuff like one version puts a button at the right and another at the left. Then they came out with a new Mac version that was missing a bunch of features, and made yet another set of differences.

I've looked for alternatives in the past, but haven't found anything that matched my needs. I can't remember where I came across Joplin. I hadn't found it when I looked in the past. I wasn't specifically looking for an open source solution, but it's nice that Joplin is open source. It seems to have an active and growing community and user base. 

One of the things I like about Joplin is that it's primarily Markdown. Most of my notes are plain text (see Taking Notes) but it's nice to have a little formatting at times. There is a new WYSIWYG editor, but previously it was the standard split screen edit & preview, or toggle back and forth. I mostly stay in the raw markdown mode.

One of the things I feared about switching was having to leave all my old notes behind in another program. But Joplin can import Evernote's export format. I haven't moved everything yet (there's a lot) but I transferred my Suneido notebook which is roughly 2000 notes. It took a little while to sync/upload and then sync/download on other devices but it worked well. There are a few formatting glitches, but that isn't surprising.

Interestingly, Joplin doesn't (yet) run its own sync servers. Instead you can use Nextcloud, Dropbox, OneDrive, WebDAV, or the file system. I already use Dropbox so that was the easiest for me. They are working on their own sync server software.

Since I got Joplin I have hardly touched Evernote. I use Joplin every day to keep notes on my work. If you're looking for a notes app it's worth checking out.

Monday, January 04, 2021

Unix: A History and a Memoir

I just finished reading Unix: A History and a Memoir by Brian Kernighan. I'm not much of a history buff, but I enjoyed it, mostly because it brought back memories.

By the time I was in high school I was obsessed with computers. My father somehow arranged for me to get a Unix account in the university computer science department. I'm not sure of the details - my father worked on campus, but wasn't part of the university, he was an entomologist (studied insects) with Canada Agriculture.

My father also arranged permission for me to sit in on some computer science classes. But my high school principal refused to let me take a few hours a week for it. I can't recall why, something about disrupting my school work. Which is totally ridiculous given that I was at the top of my classes. You wonder what goes through the minds of petty bureaucrats.

I remember being quite lost at first. I was entirely self-taught, which left plenty of gaps in my knowledge. I was intimidated by the university and too shy to ask anyone for help. But I muddled along, teaching myself Unix and C. This would have been in 1977 or 1978, so the early days of Unix. Of course, it was in the universities first.

I recall being baffled that C had no way to input numbers. For some reason I either didn't discover scanf or didn't realize it was what I was looking for. It wasn't really a problem, I just figured out how to write my own string to integer conversions. When the C Programming Language book (by Kernighan and Ritchie) came out in 1978, that helped a lot.

Software Tools (by Kernighan and Plauger) was probably the book I studied the most. I implemented a kind of Ratfor on top of TRS-80 Basic, and implemented most of the tools multiple times over the years. I still have my original copy of the book - dog eared, coffee stained, and falling apart. A few years later, I reverse engineered and implemented my own Lex and then Yacc. Yacc was a challenge because I didn't know the compiler-compiler theory it was based on. Nowadays there are open source versions of all this stuff, but not back then.

I read many more Kernighan books over the years, The Elements of Programming Style (with Plauger), The Unix Programming Environment (with Pike), The Practice of Programming (with Pike), and more recently, The Go Programming Language (with Donovan).

Nowadays, with the internet and Stack Overflow, it's hard to remember how much harder it was to access information in those days (especially if you didn't talk to other people). I had one meeting with someone from the computer science department.  (Again, arranged by my father, probably because I was so full of questions.) Looking back it was likely a grad student (he was young but had an office). I questioned him about binding time. He didn't know what I was talking about. I don't remember why I was fixated on that question. I must have seen some mention of it in a book. Me and unsolved questions/problems, like a dog and a bone.

Presumably the Unix man pages were my main resource. But if you didn't know where to start it was tough. I remember someone gave me a 20 or 30 page photocopied introduction to Unix and C. That was my main resource when I started out. Nowadays, I'd be hard pressed to make it through a day of programming without the internet.

Monday, December 14, 2020

Coverage for Suneido

Since the early days of Suneido I've thought that it would be good to have some kind of coverage tool. But for some reason, I never got around to implementing it.

Code coverage is usually associated with testing, as in "test coverage". Lately I've been seeing it in connection to coverage based fuzz testing.

But coverage can also be a useful debugging or code comprehension tool. When you're trying to figure out how some code works, it's often helpful to see which parts are executed for given inputs. You can also determine that by stepping through the code in a debugger, but if the code is large or complex, that can be tedious, and it doesn't leave a record that you can study.

For some reason I started thinking about it again and wondering how hard it would be to implement in gSuneido.

One of my concerns was performance. If coverage is too slow, it won't be used. And obviously, I didn't want to slow down normal, non-coverage execution.

While simple coverage is good, I find statement execution counts more useful. Statement counts verge on profiling, although profiling is generally more concerned with measuring time (or memory).

That got me wondering about approaches to counters. One interesting technique I came across is Morris approximate counters. That would allow an effectively unlimited count in a single byte. But I decided that the nearest power of two is a little too crude. Although it's usually not critical that large counts are exact, it's often helpful to see that some code is being executed N times or N+1 times relative to other code or to inputs.

16 bit counts (up to 65,535) are probably sufficient but I didn't want wrap-around overflow. I knew arithmetic that doesn't overflow was a standard thing but I couldn't remember the name. Eventually I found it's called saturation arithmetic. Sometimes people talk about "clamping" values to maximums or minimums (from electronics).
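
So each statement count is a 16 bit counter that clamps at its maximum instead of wrapping; a sketch:

import "math"

// increment a statement's execution counter, saturating at 65,535
func incCount(counts []uint16, i int) {
    if counts[i] < math.MaxUint16 {
        counts[i]++
    }
}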

Often, to minimize coverage work, tracking is done on basic blocks. Normally that's part of the compiler, but I didn't really want to mess with the compiler. It's complex already and I didn't want to obscure its primary function. I realized that instead, I could get equivalent functionality based on the branches in the byte code. Obviously if you branch, that's the end of a basic block. And where you branch to is the start of a basic block. So I only needed to instrument the branch instructions in the byte code interpreter. Basically, the interpreter marks the start of executed blocks (branch destinations) and the disassembler identifies the end of blocks (branch origins).

I decided this would be a fun weekend project and a break from working on the database code. That didn't work out so well. At first I made rapid progress and I had something working quite quickly. Then things went downhill.

If I was tracking at the byte code level, I needed to connect that back to the source level. I had a disassembler that could output mixed byte code and source code, so that seemed like the obvious place to do it. Then I found that the disassembler didn't actually work that well. I'd only ever used it as a tool to debug the compiler. I spent a bunch of time trying to make the disassembler cover all the cases.

Meanwhile, I was starting to get depressed that it was turning into such a mess. The more I worked on it, the uglier it got. I don't usually mind when I take a wrong turn or decide to throw out code. That's just the way it goes sometimes. But when I can't find a good solution, and things keep getting worse, then I'm not a happy camper.

In the end, I had something that mostly worked. I checked it into version control so I'd have a record of it, and then I deleted it. My idea of using branches to identify basic blocks was valid, but the actual implementation (that I came up with) was far from elegant.

I maybe should have given up at this point. The weekend was over, I had more important things to work on. But I still thought it would be a worthwhile feature. And if I came back to it later I'd just have to figure it out all over again.

Once I abandoned the ugly code I felt much better. I decided to take a simpler approach. I'd add an option to the code generation to insert "cover" instructions (instrumentation) at the beginning of each statement. That was just a few lines of code. Then I just needed to implement that instruction in the byte code interpreter - a handful more lines of code. The overhead was relatively small, somewhere in the neighborhood of 5 to 10 percent.

And that was the core of it. A bit more code to turn it on and off, and get the results. Way easier, simpler, and cleaner than my first "clever" approach. Hopefully it will prove to be a useful feature.

Tuesday, December 01, 2020

Leaving HEY

TL;DR - I used HEY for five months, gave it a fair trial I think, had no huge complaints, but I've gone back to Gmail. For the details, read on.

I was excited when Basecamp (formerly 37 Signals) announced their HEY email. I asked for and eventually received an invitation. I’ve been a long time Gmail user, from back when you needed an invitation. Back in the days when Google’s motto was “do no evil”. They’ve since replaced that motto with “get all the money”. Which doesn’t make me comfortable giving them all my email.

Basecamp has a good record of supporting old products. (I still hold a grudge over Google Reader.) And they are more privacy minded than Google. I didn't have a problem paying for the service. Paying customers are often treated better than the users of "free" services.

I like a lot of the HEY features - screening emails, spy blocking, reply later, set aside, paper trail, feed, etc.

One thing that bothered me was that it was a closed walled garden. Traditionally, email has been built on standards (like SMTP, POP, and IMAP). You can use Thunderbird to access your Gmail, use Apple Mail to access your Hotmail, etc. HEY lets you forward mail in or out, but that's as open as it gets. You can't access your HEY email from another client, and you can't use the HEY client to access other email accounts. Their explanation for this is that their special features aren't interoperable. I'm not sure I totally buy that. It seems like a more believable reason is that it simplifies what is undoubtedly a large challenge. And of course, the HEY software itself is not open source. I prefer open solutions, but I use other walled gardens, like the Apple ecosystem.

It was easy to redirect my Gmail and start using HEY. It was easy to learn and use. I was quite happy with it, and I must admit, a bit excited to be in on the beginning of something. Like when I first started using Gmail. But it wasn't long before I started running into annoyances.

On the positive side, it was nice being able to report an issue and actually get a response from what appeared to be a human. (Hard to tell with some of the bots these days.) Good luck with trying to report a problem to Gmail.

One of my big frustrations was not being able to view attachments. You could argue that's not really the responsibility of an email client. But I was accustomed to being able to view pdf's (e.g. resumes on job applications) with a single click. That single click in HEY just took me to a file save dialog. So I could download it (cluttering up my download folder and taking up storage), then find the downloaded file, and then open it in a separate application. No more quick glance at the contents of an attachment. That was using the standalone Mac app. If I accessed HEY from my browser it was a little better (if I could convince Firefox I didn't want to download it). The funny part was that HEY displays a thumbnail, and on iOS you can zoom in and read it. So obviously they were already interpreting the content, they weren't just treating attachments as opaque blobs. I told myself this was a minor issue but it continued to bug me.

There were quite a lot of bugs at first. In some ways that's not surprising for a new ambitious project. But I have to admit I was initially a little disappointed. I guess I've drunk a little too much of the Basecamp/David/Jason kool-aid and had high expectations. I told myself they would fix them, give them time. And some did get fixed. But others didn't. For example, the touted Feed uses infinite scrolling, except when it needs to load more content there's a noticeable pause and afterwards the view is messed up. You lose your scroll position and all the items are doubled. Not exactly a great experience. I can imagine most of the testing happened without enough data to hit that. They even mentioned it in a job posting as the kind of thing you might work on.

Then I listened to a podcast with David where he talked about how hard they'd worked to fix bugs after the release. But that they'd had to put that aside to work on their promised HEY for Work. Great, too busy adding features to fix bugs. Heard that story before. Then he went on to talk about how bugs are overrated, they're not really a big deal. You shouldn't feel bad about your bugs, they're "normal". They should have been playing "Don't worry, be happy". I'm exaggerating, I understand where he's coming from. And I agree there's a big difference between cosmetic bugs and functional bugs. And bugs that affect a minority of users versus ones that affect the majority. But it's a slippery slope. Where does that Feed issue fit? Is that a "cosmetic" issue to be shelved? To me it was a major problem, but I realize that's a judgement call.

To me, telling programmers not to worry about bugs is just asking for a bug filled product. And once you have a lot of bugs, it starts to feel pointless to fix them. Personally, I'm ok with feeling a little bad about my bugs. Not to the point of flagellation, but enough to make me e.g. write more tests next time.

I also found I'd (not surprisingly) grown accustomed to, if not dependent on, a whole assortment of Gmail features. I screwed up and sent an email to the wrong recipient, which I would have caught with Gmail's undo feature. I was used to typing a few characters of a contact and having Gmail suggest the right person, whereas HEY constantly suggested contacts I never used. The Feed is a nice idea, but it's (currently) a pretty minimal feed reader. It doesn't keep track of what you've read, and if you get interrupted, there's no way to pick up where you left off. You have to scroll down and try to remember what you've read. I've switched to using Gmail filters to forward feed type material to my Feedly. Filters are another feature missing (or omitted) from HEY.

I'm not writing HEY off. I have my account and I don't mind having paid for it. I think they're trying to do something worthwhile and I don't mind supporting that. I'll keep an eye on it for potential future use.

I'm not completely happy going back to Gmail. I don't have anything particular to hide, but I'm not a fan of surveillance capitalism - of companies like Google making their profits from selling my private information, or the ugly things done with that information by the companies that buy it.

Wednesday, September 09, 2020

Checksums

I want to have checksums on parts of the database in gSuneido. In cSuneido I used Adler32, which is available in Go, but was it the best option? (Checksums are hash functions but designed for error detection, as opposed to hash functions designed for cryptography or for hash tables.)

I didn't find a lot of clear recommendations on the web. Adler32 is a variant of Fletcher and there was some question over which was better.

A Cyclic Redundancy Check (CRC) should be better at error detection than Adler or Fletcher. But most of what I could find talks about single bit errors. I'm more concerned with chunks of data not getting written to disk or getting overwritten. It's unclear how that relates to single bit errors.

I was also wondering if I could get away with a 16 bit (2 byte) checksum. But the Go standard library doesn't have any 16 bit checksums. I could use half the bits of a 32 bit checksum. In theory a good checksum should spread the information over all the bits, so it seemed reasonable. But how would that affect error detection?

One drawback to CRCs is that they don't handle zero bytes very well. And unfortunately, memory mapped files can easily have extra zero bytes (their default value) if data doesn't get written. But in that case the zero bytes would be in place of actual data, which should result in different checksums.

My other concern was speed. Adler and Fletcher are simpler algorithms than CRC so they should be faster. Adler uses a theoretically slower division than Fletcher, but the Go compiler converts division by constants into multiplication by the reciprocal so that shouldn't matter.

I wrote a quick benchmark (using Go's handy benchmark facility) and I was quite surprised to see that crc32 was much faster than adler32. Of course, the results vary depending on the details. This was for 1023 bytes of random data.
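
The benchmark was along these lines (a sketch using Go's testing package; data is 1023 bytes of random input, filled in elsewhere):

import (
    "hash/adler32"
    "hash/crc32"
    "testing"
)

var data = make([]byte, 1023) // filled with random bytes

func BenchmarkAdler32(b *testing.B) {
    for i := 0; i < b.N; i++ {
        adler32.Checksum(data)
    }
}

func BenchmarkCrc32IEEE(b *testing.B) {
    for i := 0; i < b.N; i++ {
        crc32.ChecksumIEEE(data)
    }
}

func BenchmarkCrc32Cast(b *testing.B) {
    table := crc32.MakeTable(crc32.Castagnoli)
    for i := 0; i < b.N; i++ {
        crc32.Checksum(data, table)
    }
}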

BenchmarkAdler32-24              3124998               332 ns/op
BenchmarkCrc32IEEE-24           11993930                91 ns/op
BenchmarkCrc32Cast-24           24993260                47 ns/op

I assume the speed difference is due to hardware (instruction set) support for CRC.

I found third party crc16 implementations, but I don't think they are using hardware support so I assume it will be faster to use half of crc32.

I also found that IEEE slowed down with random block sizes, e.g. a checksum of 1023 bytes was a lot slower than a checksum of 1024 bytes. Castagnoli was better in this respect, and supposedly it's also better at error detection.

I also did a bunch of manual experimenting with error detection. I found using 16 bits of crc32 worked quite well (it missed only one or two more errors out of 100,000 cases). It didn't seem to matter which 16 bits. (Dropping to 8 bits made a bigger difference, not surprisingly.)

So for now I'm going with 16 bits from crc32 Castagnoli.
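
In other words, something like:

import "hash/crc32"

var castagnoli = crc32.MakeTable(crc32.Castagnoli)

// 16 bit check value: the low half of crc32 Castagnoli
func checksum16(data []byte) uint16 {
    return uint16(crc32.Checksum(data, castagnoli))
}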