Friday, September 22, 2023

Poisoning Workers

As in worker threads, not people.

gSuneido uses an unbounded thread pool to handle client requests to the server. Most Go thread pools are intended to limit the number of goroutines, e.g. to match the number of hardware threads. But in this case I want all the requests to execute in parallel, even if that isn't the most efficient, so that every request makes progress. Normally in Go you don't need an unbounded thread pool; goroutines are usually cheap enough that you can just start one for each request. However, if the goroutine needs to allocate memory, either for its stack or for other things, then creating new ones every time becomes more expensive. One option is to use a sync.Pool to avoid the allocation, but that doesn't solve the cost of stack growth. (e.g. issue 1838)

In the old version of my code, each worker was responsible for terminating itself if it was idle. This was implemented with a timer that was reset by each request.
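Roughly, the old scheme looked something like this - a simplified sketch with made-up names, not the actual gSuneido code:

package serve

import "time"

// worker runs tasks from the channel and terminates itself
// if no task arrives within the idle timeout.
func worker(tasks chan func()) {
    for {
        select {
        case task := <-tasks:
            task()
        case <-time.After(5 * time.Second):
            return // idle too long, terminate
        }
    }
}

Each received task effectively restarts the 5 second window, which is the "timer reset by each request".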

This seemed straightforward. I tested it and it worked fine. But then recently I added some metrics, including the number of workers, and I noticed the number of workers rarely went down. Definitely not dependably after the 5 second timeout. I puzzled over this for a while before I realized what was happening. If multiple worker goroutines are waiting to receive from the same channel, it's unpredictable which worker will receive the message. This doesn't seem to be defined in the language spec so I'm not sure if it's random (as the select statement is defined to be) or if the receivers are queued and processed first-in-first-out. It doesn't really matter for my issue. The bottom line is that tasks end up distributed over workers. So even with a low level of activity, workers tend to get at least one task per 5 seconds, and therefore they don't terminate.

I could dream up lots of complicated approaches but it seemed like there should be a simple solution. I could probably get away with just not ever killing workers. But then if a spike in load creates a bunch of workers, they will hang around forever, which isn't the best behavior. Searching the web, I found a lot of introductory material about goroutines and channels, and a lot of bounded thread pools, but not much on unbounded ones. I found lnquy/elastic-worker-pool, but that seemed more complicated than I needed.

Eventually I realized I could make the pool a "leaky bucket" i.e. just periodically kill a worker. That would ensure the pool would eventually shrink. This is very simple and doesn't require anything complex like trying to measure the load. The downside is that even if you're in a steady state with the correct number of workers, it's still going to kill workers and they'll end up being recreated. But a worker pool is really just a cache and a cache doesn't need to have 100% hit rate to be effective. I made a slight improvement by preventing killing too soon after creating a new worker goroutine.

I implemented this by having a killer goroutine that sends a "poison pill" to the same channel as the tasks. One of the workers would receive this and terminate itself.

The killClock and nworkers are atomic since they are accessed from multiple threads. The workers just need to recognize the poison pill and terminate. The submit code just needs to reset the killClock to zero whenever it creates a new worker goroutine.
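In outline it looks something like this - a simplified sketch with made-up names, where the poison pill is a nil task (the real gSuneido code differs in the details):

package serve

import (
    "sync/atomic"
    "time"
)

type pool struct {
    tasks     chan func()
    nworkers  atomic.Int32
    killClock atomic.Int32 // seconds since a worker was last created or killed
}

// killer periodically sends a nil task (the "poison pill") so that
// whichever worker happens to receive it terminates itself.
func (p *pool) killer() {
    for range time.Tick(1 * time.Second) {
        if p.killClock.Add(1) >= 10 && p.nworkers.Load() > 0 {
            p.killClock.Store(0)
            p.tasks <- nil // poison pill
        }
    }
}

func (p *pool) worker() {
    for task := range p.tasks {
        if task == nil { // poison pill
            p.nworkers.Add(-1)
            return
        }
        task()
    }
}

// submit hands the task to an idle worker if one is waiting,
// otherwise it starts a new worker.
func (p *pool) submit(task func()) {
    select {
    case p.tasks <- task:
    default:
        p.killClock.Store(0) // don't kill too soon after creating
        p.nworkers.Add(1)
        go p.worker()
        p.tasks <- task
    }
}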

For example, if the interval is 1 second and the delay 10 seconds, in a steady state a worker would be killed and then recreated every 10 seconds. Given that creating a worker is almost fast enough to do on every task, that's very minor overhead.

Hopefully I haven't overlooked any flaws in this solution. It seems simple and fast.

As usual, the code is on GitHub.

Sunday, September 10, 2023

A gSuneido Database Optimization

While working on the recent gSuneido database issue I had an idea for optimizing the transaction conflict checking code.

The existing code kept a list of outstanding transactions, and for each one a list of tables with their reads and writes. Checking a new read or write meant a linear search through all the transactions for other activity on that table. If the data was organized by table rather than by transaction, it would eliminate the linear search. An action on a "rare" table wouldn't have to look at so much data. Some tables would be common to many transactions, but still normally not all of them. And even in a situation where all the outstanding transactions included a particular table it wouldn't be any slower than before.
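Conceptually the restructuring looks something like this - a much-simplified sketch with made-up names, just to show the shape of the data:

package db

type tran struct{ num int }

// reads and writes that a transaction did on one table (simplified)
type actions struct {
    reads  []string
    writes []string
}

// Before: organized by transaction. Checking a read or write meant
// a linear search through every outstanding transaction.
type tranActions struct {
    tran   *tran
    tables map[string]*actions
}

var outstanding []*tranActions

// After: organized by table. Checking a read or write only looks at
// the transactions that actually touched that table.
var byTable map[string]map[*tran]*actions

A check becomes a lookup of byTable[table] instead of a scan of outstanding.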

At a high level it was a small change, but I wasn't sure how it would fit with the code. I knew I'd still need to keep a list of transactions and the tables they had accessed. It turned out to be relatively straightforward to modify the code. (A few hours work.) One of the questions was where to embed data and where to use pointers to separately allocated data. Embedding tends to be faster, but embedding in maps or slices can use more memory because empty slots are larger.

Before starting the changes I wrote a Go benchmark so I could see what effect the changes had on performance. As usual, it was hard to know what a "typical" pattern of activity was. For this benchmark the new version was over twice as fast. That was a bigger difference than I'd expected. Sadly, but not surprisingly, the change made no difference to the speed of our application test suite. It doesn't do many concurrent transactions so it wouldn't benefit. However, the stress test I'd used to investigate the "too many outstanding transactions" issue did show big improvements. When I collected the same measurements as before and graphed them, it looked even better. Up to three threads it didn't make much difference but after that it was substantially better. And performance didn't drop off under load like it had before. (Or at least, not until much higher load.)

The conventional wisdom for optimization would be to profile the code and find the hot spots. That wouldn't have helped in this case. The impact of the data structure was spread over the code. Nor would profiling have given any insight into a better data structure. As I commonly find, what is required is to have good mental models of the problem, the code, the programming language, and the hardware. Then you need to find an approach that is a good fit between all of these. Of course, the hardware performance model is a moving target. These days it's mostly about cache misses and branch prediction.

I'm always happy to improve performance, especially by significant amounts. But in the back of my mind I can't help thinking I should have written it better the first time around. Of course, that doesn't make sense because you can't write the "perfect" version. There's always going to be room for improvement.

Saturday, September 02, 2023

A gSuneido Database Issue

We've been gradually converting our customers' servers from jSuneido to gSuneido. Recently we converted a larger customer and they started to get "too many outstanding transactions" errors. Unfortunately, we'd changed multiple things at once with this customer, so it wasn't immediately clear whether this issue was because of the switch to gSuneido, but I suspected it was.

I had set the limit in gSuneido to the same value as jSuneido - 200 outstanding transactions. "Outstanding" in this case means either still in progress, or completed but overlapping with one that is still in progress.

It seemed a little counterintuitive. gSuneido is faster than jSuneido, so why should it run into this more? My theory is that because it's faster, more transactions get processed, and if a few of them are slow, more of the fast ones overlap them - and those still count as outstanding even though they completed quickly.

One of my first thoughts was to look at optimizing the gSuneido code. But if my theory was right, this could actually make the problem worse rather than better.

We temporarily solved the problem by adding slight delays to code that was using a lot of transactions. (i.e. we throttled or rate limited them). This worked but it seemed a little ugly to slow it down all the time, just to avoid a problem that only occurred occasionally.

That led me to add throttling to gSuneido itself, but only when the number of outstanding transactions got high (e.g. 100). Throttling would prevent hitting the limit (200). I wrote a stress test and this did solve the problem, and it was better than adding delays in the application code. But once it throttled, it got really slow. If you throttle hard enough to stay under the limit, e.g. to 10 transactions per second, then 1000 transactions go from maybe .1 seconds to 100 seconds - ouch! And because throttling slowed things down so much, it increased the chances of transactions hitting the 20 second time limit. The cure seemed worse than the disease; it won the battle but lost the war. (Sorry for the proverbs.)

I started to wonder what the right limit was. 200 was an arbitrary value. The reason to have a limit was to prevent it from "falling over" under heavy load. But at what point would that happen? So I removed the limit and tested under different loads.

threads   time    tran/sec   conflicts   max trans
      1   2 sec      5000         100          70
      2   2 sec     10000         150         110
      3   3 sec     10000         200         150
      4   4 sec     10000         200         180
      5   4 sec     12500         200         200
      6   5 sec     12000         300         230
      8   8 sec     10000         300         270
     10  10 sec     10000         300         300
     16  20 sec      8000         400         450
     20  30 sec      7000         500         490
     30  50 sec      7000         600         580

I was quite happy with the results. Performance did drop off with more than 5 or 6 threads, but not that badly. The maximum outstanding got well above the old limit of 200, but that didn't seem to be a major problem. The number of transactions aborting due to conflicts increased with load, but that's expected with that much concurrent activity. To some extent it seemed to be self limiting, i.e. the load itself slowed things down and effectively did its own throttling.

Of course, this is completely dependent on what my test is doing. Is it "typical"? No, probably not. For starters, it's doing continuous activity on multiple transactions as fast as it can, with no other work. Real application code is not going to be doing that. Our applications never do 10,000 transactions per second for long periods. The test does result in shorter and longer transactions, but the longest are only around 1 second, whereas in production we do see transactions taking much longer and sometimes hitting the 20 second limit (usually as a result of external factors). And my workstation only has 8 cores / 16 threads, so it's not really 30 threads.

Out of curiosity, I removed the limit from jSuneido to see how it would perform. 10 threads took 15 seconds or about 50% slower. That wasn't bad. But roughly half the transactions (5000) failed due to conflicts. Ouch! That wasn't so good. Even with only one thread, roughly 10% (1000) conflicted. As opposed to 1% (100) with gSuneido. The usual method of dealing with conflicts is to retry, but that would just increase the load and make things worse.

So what is the correct limit for gSuneido? Or should there be no limit? Should it ever throttle, and if so, at what point? Honestly, I still don't know the answers to those questions. But it does appear the limit could be considerably higher e.g. 500 instead of 200, which would likely eliminate the problem, other than rare extreme cases.

Thursday, July 20, 2023

Keyword Recognition

Daniel Lemire's recent blog post Recognizing string prefixes with SIMD instructions, and one he linked to, Modern perfect hashing for strings, prompted me to revisit how I was recognizing keywords in gSuneido. Although I find it interesting how you can use advanced processor instructions, I didn't really want to get into that. But like most things, even without looking at my code, I figured it could probably be improved.

I found that I was recognizing language keywords (28) with a linear search, sorted by frequency of occurrence. For query keywords I was using a Go map. Both were quite fast, certainly not a bottleneck.

The reason I hadn't used a Go map for language keywords was that I wanted to "intern" the strings. And there's no way to retrieve the key of a Go map (unless you loop through all of them). Normally it doesn't matter since you need the key to do a lookup in the first place. But the key I'm doing a lookup with is a slice from the source code. If I hold onto that key, it will keep the entire source code in memory (because the reference prevents garbage collection).
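For example - a contrived sketch, but it shows why returning the interned constant matters:

package lexer

// keyword returns an interned copy of tok if it is a keyword.
// tok is a slice of the source code, so it shares the source's
// backing array - storing tok anywhere keeps the whole source
// alive. The constant "if" is independent of the source.
func keyword(tok string) (string, bool) {
    if tok == "if" {
        return "if", true
    }
    return tok, false
}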

One of the first things I tried (Compile-time switches from the Modern article) was one of the fastest (4x faster than the old linear search) - switch on the length, and then switch on the i'th character (chosen for each length). For example:

switch len(s) {
case 2:
    switch s[1] {
    case 'f':
        if s == "if" {
            return tok.If, "if"
        }
    case 's':
        if s == "is" {
            return tok.Is, "is"
        }
    }
// ... cases for the other lengths
}

I find it a little surprising that this turned out to be the fastest since it has quite a few branches - two switches plus a string comparison. Using Compiler Explorer to look at the assembler, I found the code was quite good. I was expecting a call to a string comparison function, but it is unrolled inline. It may actually be beneficial that the switches are explicit so each branch can get its own prediction.

Based on the assembler, I simplified the code. This was just as fast, smaller source code, and smaller machine code.

switch len(s) {
case 2:
    if s == "if" {
        return tok.If, "if"
    }
    if s == "is" {
        return tok.Is, "is"
    }
// ... cases for the other lengths
}

Because it was comparing constant strings and knew the lengths matched, the compiler compared them as multi-byte integers instead of strings. Lengths of 2, 4, and 8 became int16, int32, and int64 comparisons. For lengths of 3, 5, 6, and 7 it used a combination, e.g. 6 characters was an int32 plus an int16.

I did try various other approaches.

I tried finding "perfect" hash functions and table sizes, which turned out to be trickier than I thought. It was almost as fast as the switch version.

I tried another hash table implementation, but it was slower than the Go map.

I tried switching on the length and then doing a linear search.

In the end I went with the switch version since it was fastest. It was a little more verbose, but Tabnine handled most of the code generation.

Did this micro-optimization make any difference to the overall speed? No, not that I saw from a quick test. But it was an interesting investigation. And yes, premature optimization can be a mistake. But ignoring speed is also a mistake.

Sunday, April 23, 2023

New Regular Expression Implementation for gSuneido

Suneido has its own regular expression implementation. 25 years ago, when I wrote the original C++ cSuneido implementation, there weren't a lot of regular expression libraries, and I already had an implementation from a previous product, so that's what I used. With the Java version, I initially tried to use the Java standard regular expressions. But it was too hard to get exactly the same behavior, so I ended up using my own implementation. And I did the same thing with the Go version of Suneido.

But I haven't been totally happy with my implementation. It uses backtracking (like many other implementations) which means it has some bad worst cases. I had to put limits on the backtracking, which meant we couldn't use regular expressions in some places, which meant hand writing code to replace them. To make it worse, there was no easy explanation of what kinds of regular expressions were ok.

Recently I re-encountered this series of articles about implementing regular expressions.

Regular Expression Matching Can Be Simple And Fast
Regular Expression Matching: the Virtual Machine Approach
Regular Expression Matching in the Wild

I'd read them before but at that point I'd been satisfied with what I had. The articles made it seem easy to implement a non-backtracking approach to avoid the worst case behavior. Of course, it didn't turn out to be quite that easy to write a full implementation. Naturally, the articles didn't cover all the details.

It started out as a weekend project, but it ended up taking me longer than that, partly because I implemented optimizations for literal strings and single pass expressions.

Because Suneido is ASCII only (due to its age), the code is a little simpler and faster than the Go standard library version which handles UTF-8.

The Go regular expression package compiles to linked structs that are relatively large. I chose to compile to byte code stored in a string. I like to use strings because they are one of the few immutable types in Go. And string references are only 16 bytes compared to 24 bytes for byte slices. There's probably a bit more overhead processing the byte code, but it's a more compact chunk of memory so it'll be more cache friendly.
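For example, the compiled form might look something like this - a made-up illustration of the idea, not gSuneido's actual instruction set:

package regex

// hypothetical opcodes; the real compiler has many more
const (
    opChar  = iota // match the literal byte that follows
    opSplit        // try both of the branch offsets that follow
    opMatch        // success
)

// Program is a compiled pattern. A string is immutable, and a string
// reference is 16 bytes vs 24 for a byte slice.
type Program string

// compiling "ab" might produce: opChar 'a' opChar 'b' opMatch
func abProgram() Program {
    return Program([]byte{opChar, 'a', opChar, 'b', opMatch})
}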

Humorously (to me) when I was fuzz testing my implementation against the Go standard library implementation I discovered a bug in the Go code. As expected, one of the initial responses was denial, blaming it on my lack of understanding. Thankfully, someone else (not on the Go team) confirmed the problem and even located a possible location of the bug. I thought a bug in something like regular expressions would be treated fairly seriously, but over a month later it's still just labeled as "NeedsInvestigation" rather than "NeedsFix". It was put under the Go1.21 milestone at least.

The new implementation is about 2000 lines of Go code. Probably half of that I was able to reuse from the previous version. As usual, the code is on GitHub.

Saturday, April 08, 2023

Full Text Search

Suneido uses full text search for its help and wiki. jSuneido used Lucene, a natural fit for Java. I needed something different for gSuneido. I looked at Bleve, which seems to be the most common Go full text search library. But it was big and slow, and the indexes were large. I had used Lunr.js on a previous project and it had worked well. It's small and fast and easy to use. It's JavaScript, but we display our help and wiki in a browser window, so that seemed ok.

We (thanks Jatin) got it working, but it was a little ugly. Our customer systems have different combinations of applications and options so we built custom indexes on each system. But for Lunr we were using Node.js to build the indexes and we didn't want to install Node on all our customer systems. So we had to build an index containing "everything", ship that to our customers (every time we updated the help), and then filter the search results based on their configuration.

We only use our Wiki in-house, but with it there was another issue. People edit the wiki all the time. With Lucene, we could update the index with changes, but Lunr.js doesn't support that; you have to rebuild the whole index.

Eventually I got frustrated with the "friction" of using Lunr and decided to continue my long tradition of re-implementing the wheel. I'd considered writing my own full text index/search previously but had resisted the urge. But I had a weekend, and nothing urgent to do, so I decided to give it a try. I started with Let's build a Full-Text Search engine, which makes it sound easy, but mostly because the simple version it describes isn't complete.

Tokenizing is easy, although there are questions about exactly what constitutes a token. Obviously, sequences of letters, but what length? I decided to ignore single letters and also ignore anything over a certain length (e.g. 32 characters). Especially in our wiki we also wanted to search on numbers. There's also the issue of punctuation, which I'm currently ignoring.
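A tokenizer for those rules only takes a few lines of Go. This is just a sketch of the rules above (runs of ASCII letters and digits, skipping single characters and anything over 32 long), not the actual gSuneido code:

package fts

import "strings"

// tokenize splits text into lower-cased runs of letters and digits,
// ignoring single characters and anything longer than 32.
func tokenize(text string) []string {
    isWord := func(c byte) bool {
        return 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' ||
            '0' <= c && c <= '9'
    }
    var toks []string
    for i := 0; i < len(text); {
        for i < len(text) && !isWord(text[i]) {
            i++
        }
        j := i
        for j < len(text) && isWord(text[j]) {
            j++
        }
        if n := j - i; n > 1 && n <= 32 {
            toks = append(toks, strings.ToLower(text[i:j]))
        }
        i = j
    }
    return toks
}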

I used the Go Snowball (Porter2) stemmer mentioned in the article. A stemmer simplifies words. For example, fishing, fished and fisher are reduced to the base form (stem) fish. That reduces the number of terms in the index and makes searching more forgiving.

I got sidetracked into looking at bitmap data structures like roaring bitmaps (as suggested in the article), but then I started looking at how to implement search scoring and found that bitmaps weren't sufficient; I needed counts per term per document, not just true/false.

I decided to use BM25 scoring like Lucene and Lunr. I found a good article that explained it and even gave some examples I could check my results against.
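The scoring formula itself is short. Here's a sketch using the usual constants (k1 = 1.2, b = 0.75); the exact variant may differ from what Lucene, Lunr, or gSuneido use:

package fts

import "math"

const k1 = 1.2
const b = 0.75

// bm25 scores one document for one term.
//   tf     - occurrences of the term in the document
//   df     - number of documents containing the term
//   ndocs  - total number of documents
//   doclen - number of terms in the document
//   avglen - average document length over the index
func bm25(tf, df, ndocs, doclen int, avglen float64) float64 {
    idf := math.Log(1 + (float64(ndocs)-float64(df)+0.5)/(float64(df)+0.5))
    f := float64(tf)
    return idf * f * (k1 + 1) / (f + k1*(1-b+b*float64(doclen)/avglen))
}

A document's score for a query is then just the sum of this over the query's terms.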

It came together surprisingly easily and by the end of the weekend I had about 1000 lines of Go code that could create indexes and search them. It was fast and the indexes were small (compared to Lunr). The results seemed comparable. I felt a little guilty because it meant throwing out all the work that went into trying to use Lunr. For a change I felt like I should have written my own version sooner.

Wednesday, March 22, 2023

Java 20

I don't program much in Java these days, just minimal maintenance on jSuneido. But I still have an interest in what's happening with the language.

Once upon a time, Java had a problem - no new releases for a long time. They solved that by switching to a fixed release every six months. For a while we got all kinds of new features. It was great. But lately there's been a new problem: the releases have been on time, but they haven't "released" much. Here's the complete list of features for the latest release:

Features

429: Scoped Values (Incubator)
432: Record Patterns (Second Preview)
433: Pattern Matching for switch (Fourth Preview)
434: Foreign Function & Memory API (Second Preview)
436: Virtual Threads (Second Preview)
437: Structured Concurrency (Second Incubator)
438: Vector API (Fifth Incubator)

Every feature is either incubator or preview. I understand the need for incubator and preview. But "Fourth Preview" and "Fifth Incubator"? Yes, you want to get it "right", but if you wait for perfect in the eyes of committees, you could be waiting a long time.

I wonder if this is partly a matter of Java's "enterprise" association. Let's face it, the hip kids aren't using Java. They don't care if e.g. Java Virtual Threads are ever released. On the other hand the enterprise customers don't really want new features. They're still running Java 8. But they do want to be reassured that Java is alive and well. Previews and incubators serve that purpose just fine.

Maybe I'm just impatient. I'm excited about some of the new features (and others, like Valhalla that have been "coming soon" for many years but haven't even made it to incubation). But I don't have that many decades left.

Wednesday, February 15, 2023

Go Telemetry

There has been a big debate recently over the proposal to add telemetry to Go. 

It started with Russ Cox's multi-part Transparent Telemetry.

I read the proposal and it seemed well thought out. I could understand the need to get more information about how Go was actually being used. Collecting a few anonymous counters seemed relatively benign compared to the "big" (i.e. invasive) data being collected by seemingly everyone these days.

Naively, I didn't foresee the big pushback in the discussion at telemetry in the Go toolchain, which was eventually locked after 506 comments (518 thumbs down to 118 thumbs up).

I must admit I have a few qualms myself because it's Google. Go is its own team, and I would say they have a good track record, but it's still Google paying their salaries and running their servers.

One point I missed until reading the discussion was that they would "temporarily" collect traffic logs with IP addresses. Supposedly this data would just be discarded, but how long until someone at Google decides they could "use" this data?

I think part of the knee-jerk reaction was because it's a compiler. That seems wrong somehow. It's a bit reminiscent of the Ken Thompson hack. We may not like it, but these days we accept that Facebook and Apple etc. are going to track us. VS Code is one of the most popular editors, and it sends large amounts of telemetry. (I keep meaning to switch to VSCodium.) I used to always opt in to sending telemetry because I wanted to help the developers. Nowadays I opt out of everything I can because it seems that most of it is just spying.

I don't have a lot to add to the debate. But I do have an idea/proposal that might help. How about if the telemetry was collected and published by a third party, someone with a vested interest in not abusing it? Perhaps someone like the Electronic Frontier Foundation. The proposal was already saying the data would be public. The Go team could access it from the public source just like anyone else. The Go team would still control the actual telemetry code, but since they wouldn't be collecting the data, it would be pointless to "sneak in" extra information.

It's a bit sad that it's almost impossible to collect legitimate data because so many big companies have abused data collection.

Monday, February 13, 2023

A Go Generics Technique

Say you want to make a generic hash map in Go. One of the questions is whether hash and equals should be methods on the key type, or whether they should be functions supplied when creating an instance. In general Go recommends passing functions. One of the advantages of this approach is that the functions can be closures which then have access to context.

In my case I had a mix of uses. In several places I already had hash and equals methods (from a previous incarnation). In several others I needed context, so closures would be better.

After a certain amount of head scratching I came up with a way to handle both.

Normally a generic hash map would be parameterized by key and value types. I added a third "helper" type. This type supplies the hash and equals functions. I created two helpers - one that calls methods on the key, and one that stores references to supplied hash and equals functions.

To use the helper type you need an instance. A neat Go trick is that the helper that calls the methods can be struct{} - a valid type that is zero size, so there's no space overhead.

Getting the type constraints right took some experimentation. The key and value types do not have any constraints (any). The helper is parameterized by the key type. The helper that calls methods obviously is constrained by an interface with those methods. It confused me that the constraints that would normally be on the key type get moved to the helper, but I guess that makes sense because it is specific helpers that have requirements.
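Here's a sketch of the technique with made-up names (the real gSuneido hash map differs in the details):

package hmap

// Helper supplies hashing and equality for the key type K.
type Helper[K any] interface {
    Hash(k K) uint64
    Equal(x, y K) bool
}

// Keyer is the constraint for keys that have their own methods.
type Keyer[K any] interface {
    Hash() uint64
    Equal(K) bool
}

// MethHelper delegates to the key's own methods.
// It is struct{}, so it adds no space to the map.
type MethHelper[K Keyer[K]] struct{}

func (MethHelper[K]) Hash(k K) uint64   { return k.Hash() }
func (MethHelper[K]) Equal(x, y K) bool { return x.Equal(y) }

// FuncHelper stores supplied functions, which can be closures
// with access to whatever context they need.
type FuncHelper[K any] struct {
    hash func(K) uint64
    eq   func(K, K) bool
}

func (h FuncHelper[K]) Hash(k K) uint64   { return h.hash(k) }
func (h FuncHelper[K]) Equal(x, y K) bool { return h.eq(x, y) }

// HashMap is parameterized by key, value, and helper types.
type HashMap[K any, V any, H Helper[K]] struct {
    helper H
    // buckets etc. omitted
}

A map whose key type has the methods would be declared as HashMap[MyKey, MyVal, MethHelper[MyKey]]; one that needs context uses FuncHelper and closures.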

PS. Go has a built-in map type that is "generic" (but predates generics in the language). The problem is that it only works with types that have built-in hash and equals. If you need to write your own hash and equals, you can't use it.

Wednesday, February 01, 2023

Fuzz Testing Database Queries

The gSuneido database was ready for production, as far as I could tell. All our tests passed.

But there was a steady trickle of bugs showing up. Sure, they were obscure cases, but they were showing up in actual application code, so they weren't that obscure.

Every time I'd think it was ready to deploy further, another bug would show up.

I finally decided I had to do something to flush out these obscure bugs. Waiting for them to show up in production was not the way to go.

I had thought about trying to fuzz test database queries before, but it always seemed too hard. Even if I could figure out a way to generate random but legal queries, how would I check the results? Then I realized I could compare jSuneido and gSuneido. It wasn't a perfect test since they were roughly the same design and therefore could have matching bugs. But I had made enough changes and improvements to gSuneido that they were now significantly different. And even if it wasn't a perfect test, it was a heck of a lot better than nothing.

I puzzled over how to generate valid random queries. And what kind of data to use. I started with a simple set of four tables with obvious joins between them. The joins seemed to be the most constrained element, so I started with simply picking randomly from a fixed list of joins.

e.g. cus join ivc

I represented the query as a tree of nested objects and wrote a function to randomly pick a sub-expression branch.

unions are added by randomly picking a branch and replacing it with: branch union branch

e.g. (cus join ivc) union (cus join ivc)

leftjoins are added by randomly replacing some of the joins.

Then where, rename, extend, and remove (project) are added at random spots.

(((cus where ck = 12) join ivc) union ((cus leftjoin ivc) extend x1)) remove c1

Finally it optionally adds a sort.
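The mutation step looks roughly like this - a sketch with a hypothetical node type, since the real generator works on gSuneido's own query objects:

package fuzz

import "math/rand"

// QNode is a stand-in for a parsed query sub-expression.
type QNode struct {
    Op   string   // "table", "join", "union", "where", ...
    Text string   // table name, where-expression, etc.
    Args []*QNode // sub-queries
}

func clone(q *QNode) *QNode {
    c := &QNode{Op: q.Op, Text: q.Text}
    for _, a := range q.Args {
        c.Args = append(c.Args, clone(a))
    }
    return c
}

// addUnion picks a random branch b and replaces it with (b union b).
func addUnion(q *QNode, rnd *rand.Rand) *QNode {
    if len(q.Args) == 0 || rnd.Intn(3) == 0 {
        return &QNode{Op: "union", Args: []*QNode{q, clone(q)}}
    }
    i := rnd.Intn(len(q.Args))
    q.Args[i] = addUnion(q.Args[i], rnd)
    return q
}

The other operations are inserted by similar random rewrites of the tree.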

The resulting queries aren't necessarily realistic, but they seemed to cover most of the variations.

It's a little more ad-hoc than I originally hoped. You could generate random queries from the grammar, and they would be valid syntax, but queries also have to have valid semantics, and that isn't represented in the grammar.

First, I just parsed and optimized the queries, without executing them. This soon triggered some assertion failures, which uncovered a few bugs.

The next step was to compare the actual results between gSuneido and jSuneido. I decided the simplest approach was to calculate a checksum of the result and output queries and their result checksums to a text file. Then I could re-run those queries on jSuneido and compare the checksums.
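The output side is simple - something like this, where run stands in for whatever executes a query and returns its rows:

package fuzz

import (
    "bufio"
    "fmt"
    "hash/crc32"
    "os"
)

// writeResults runs each query and writes "checksum <tab> query" to
// a text file, so the same queries can be re-run on jSuneido and the
// checksums compared.
func writeResults(queries []string, run func(string) []string, fname string) error {
    f, err := os.Create(fname)
    if err != nil {
        return err
    }
    defer f.Close()
    w := bufio.NewWriter(f)
    for _, q := range queries {
        cksum := crc32.NewIEEE()
        for _, row := range run(q) {
            cksum.Write([]byte(row))
        }
        fmt.Fprintf(w, "%08x\t%s\n", cksum.Sum32(), q)
    }
    return w.Flush()
}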

In the end I found about a dozen bugs. Several of them were in jSuneido, which is a little surprising since it has been in production with thousands of users for about 10 years.

The problem with finding a bug through fuzzing is that the inputs it finds are often messy. Some fuzzing systems will try to simplify the inputs for you. I just did it manually, starting the debugging of each failure by simplifying the query as much as possible while still retaining the failure. Fuzzing can make finding bugs easier, but it doesn't really help with fixing them. And the more you fuzz, the more obscure the bugs get. On the other hand, by the end I got pretty good at tracking them down, inserting prints or assertions or using the debugger.

I added all the failed queries to a "corpus" that gets run every time, along with new random queries as a defense against regressions. In theory they would get generated randomly again, but that could potentially take a long time.

I called it success when I ran about 500,000 queries (roughly two hours of processing) with no differences. The plan is to add this to our continuous test system and fuzz for an hour or so per night when we have spare compute power. That should prevent regressions and possibly find even more obscure bugs.

I'm pretty happy with how this turned out. It only took me a few days work to write the fuzzing system (not counting the time to actually fix the bugs), and it flushed out a bunch of bugs that I'm sure would have haunted me for a long time otherwise.

Of course, in hindsight I should have done this 20 years ago, or at least 10 years ago when I wrote a second implementation (jSuneido). Sometimes I'm a little slow! But better now than never.

Maybe the counterpart to Eric Raymond's "given enough eyeballs, all bugs are shallow" is "given enough fuzzing, all bugs are visible". (Of course, it's never "all", but you get the idea.)

Thursday, January 26, 2023

AI

I used to think AI (when it finally "arrived") would help compensate for the many irrationalities of Homo sapiens. What can I say, I grew up on a steady diet of science fiction.

And now AI is arriving in the form of ChatGPT. And it successfully duplicates the failings of human brains. I could already get all kinds of biases, misconceptions, confabulations, and outright lies from humans, amplified by the internet. Now I can get it from AI too.

Even in the specialized area of programming assistance, I'm somewhat skeptical. Tabnine is a good, helpful tool. It claims it wrote 15% of my code in the last month. But when I review the highlights, they're one or two short lines of code. Not much 'I' in that AI. To me, the challenge in coding is writing coherent well organized code. Copying and pasting snippets will never achieve that. It seems to me it's just going to encourage even more boilerplate. Why not, when it's generated for you. Think how many lines a day you can "write". Most code is crap (including mine), and that's what these models are trained on. Even if you wanted to train it on "good" code, where do you find that? Who is going to judge it?

Perhaps this is just a temporary situation and AI will solve these problems. I'm not optimistic for the near future because it seems inherent in the current approach.

Although, there is perhaps some cause for hope: https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/

Meanwhile, after billions of dollars worth of research, self driving cars are currently a flop. I suspect that's a temporary setback.

Interesting times.

Wednesday, January 04, 2023

Three Weeks for Three Tweaks

It started with reports of slow database queries. At first I didn't pay too much attention, after all, some queries are slow. We're dealing with quite large data (in a small business sense, not in Google terms) and customers are choosing what to select on and how to sort.

It progressed to the same query being slow sometimes and fast other times. That seemed a little more suspicious, but there were still a lot of factors that might have explained it.

Finally someone came up with a simple example where the same query, run on the same data, was 1000 times slower when you ran it slightly differently. That was bad news for me, since it definitely meant there was a problem with the query optimization. Why was it picking such a slow strategy sometimes, when there was an obviously better one? Especially when it was such an extreme difference. The optimization shouldn't have to be very accurate when there is a 1000 times difference!

It didn't turn out to be a bug, the code was working as designed. It was just a particular scenario that wasn't handled well by the current design.

After 20 years, the easy improvements have been made and I'm into the realm of diminishing returns. I ended up needing to "tweak" three aspects of the code in order to fix the issue. And all three were required before I could see if it was going to work. TL;DR - it did solve the problem.

One way of looking at the issue was that the optimization was getting caught by a local minimum. It wasn't smart enough to use a sub-optimal strategy in one spot in order to allow a better strategy overall. (It doesn't explore every possible strategy; for a complex query that would be far too slow.)

None of the changes were difficult, but they were all somewhat invasive, requiring changes to all the query operations.

Background

In case you actually want to try to follow what I'm talking about, here's a little background on how Suneido implements queries.

A query like:

(table1 join table2) union table3

gets parsed into a tree: union at the root, with join and table3 below it, and table1 and table2 under the join. The leaves are database tables and the other nodes are operations. Each node has one or more strategies for execution, plus some parameters like choice of index.

Optimization starts with calling the optimize method on the root operation (union in this case) which then calls optimize on each of its sub-queries.

Execution works similarly by calling the get method on the root operation, which then "pulls" data from its sub-queries.

Tweak #1

Let's say we have a query like:

table1 join table2 where id=1

During optimization, query operations can ask their children for their estimated number of rows. In this case the join would get the table size from table1, e.g. 1000, and one (1) from the where, since it's selecting on a unique id. But that information isn't sufficient to estimate the fan-out of the join.

So the first change I made was to return the "population" count in addition to the row count. The where could return 1 for the result count and e.g. 10,000 for the population, i.e. the size of table2. This allows the join to estimate a more realistic fan-out of 1:10.
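In code terms the change is roughly this (simplified, with approximate names):

package query

// Before, operations estimated only their result size.
type nrowsOld interface {
    Nrows() int
}

// Now they also return the size of the underlying "population".
// For "table2 where id=1" that would be nrows = 1 (unique id)
// and pop = 10000 (the size of table2).
type nrowsNew interface {
    Nrows() (nrows, pop int)
}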

Tweak #2

The next problem was that when a temporary index is required, there are two costs: the "fixed" cost of building the index, and the "variable" cost of reading from it. Until now these had been combined into a single "cost". Other operations usually have only a variable cost.

Continuing the example from Tweak #1: now that the join can estimate the fan-out is 1:10, it can estimate that it's only going to read 10 records from table2. So the variable cost will be low, but we'll still incur the entire fixed cost. So in this case we want to minimize the fixed cost. But to do that, I needed to separate the cost into fixed and variable parts everywhere.

Tweak #3

The remaining problem was that join didn't have any way to tell the sub-query that only a small part of the result would be read. To address this, I added a "fraction" argument to allow queries to tell their sub-queries how much of the result they estimated they would read.

In the running example, 10 rows from 10,000 is a fraction of .001. Before, the optimizer would choose a strategy with e.g. a fixed cost of 1000 and a variable cost of 1000 (total 2000) over one with a fixed cost of 0 and a variable cost of 5000 (total 5000). Now the 5000 is multiplied by .001, giving 5, making the second strategy the clear winner.
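Putting tweaks #2 and #3 together, the cost comparison ends up looking roughly like this - a simplified sketch of the idea, not the actual code:

package query

// Each strategy reports a fixed cost (e.g. building a temporary
// index) and a variable cost (reading the result).
type Cost struct {
    Fixed    float64
    Variable float64
}

// total applies the fraction of the result the parent expects to
// read; only the variable part scales down.
func total(c Cost, fraction float64) float64 {
    return c.Fixed + fraction*c.Variable
}

With the example numbers and a fraction of .001, the first strategy now costs 1000 + 1000*.001 = 1001 while the second costs 0 + 5000*.001 = 5, so the second one wins.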

This allowed the optimization to avoid the local minimum.

Conclusion

I'm not sure how common the problem scenario is. It was only because we ran into such an extreme case (1000 times slower) that I got motivated to investigate. Perhaps less extreme cases also occur and will be improved by these changes. As far as I can reason, the changes should not make any cases worse.