Friday, August 01, 2025

OverIter Cur Deleted

gSuneido has had a long-standing assertion failure that happened maybe once a month, which equates to about once per million user hours. We tried multiple times but were never able to recreate it.

I’ll try to explain the scenario. Indexes are stored in the database file as btrees. While the database is running, index updates are buffered. An Overlay consists of the base btree plus layers of updates. When a transaction commits, it adds an ixbuf layer to the Overlays that it updated. Background threads merge the layers and update the btree. OverIter handles iterating over an Overlay by merging iterators for each layer. This is fundamentally straightforward but, as always, the devil is in the details. The ixbuf layers from the transactions include updates and deletes as well as additions, so the merging has to combine these, for example an add followed by several updates. The final piece of the puzzle is that concurrent modifications can occur during iteration.

"OverIter Cur deleted" means that the current value of the iterator is marked as deleted. This should never happen. The iterator is designed to skip deleted records.

The error occurred again recently and I decided to see what Claude (Sonnet 4) could do with it. It kept saying "now I see the problem", but when I'd tell it to write a test to make it happen, it couldn't. It became obvious it wasn't going to spot the bug by just looking at the code, so I got it writing tests. It wrote a lot of them, and they all passed. That was actually kind of nice since it meant the code was fairly robust. I wouldn't have been surprised if other bugs had shown up with the intense testing.

Finally, it wrote a random stress test that caused the assertion failure. I was cautiously optimistic. Sadly, it turned out it was generating invalid test data and that was what was triggering the assertion failure. Once I corrected the data generation, the test passed. It was possible that the bug itself was producing bad data, which in turn triggered the assertion failure, but that seemed unlikely since bad data would have caused other problems.

Back to the drawing board. I continued extending the test to cover more scenarios. Eventually I managed to recreate the error with legitimate data and actions. After that it was a matter of extracting one failing case from the random test. Claude added print statements, looked at the result, and wrote a test for that specific sequence.

Once I had a relatively simple failing test, Claude claimed to find the bug right away. I was skeptical since it had claimed to find the bug many times already. I worked on extending the test to cover more of the scenarios. Sure enough, it started failing again. Claude came up with another fix. But the test kept failing. The proposed changes would fix certain cases but break other cases. No combination of the “fixes” solved all the problems.

Eventually Claude proposed rewriting one of the key functions. I was even more skeptical, but the code looked reasonable and it was simpler and clearer than the old code. It wasn't very efficient, but it wasn't hard to tell Claude how to optimize it. Still, it didn't fix all the problems. I dug into the other “fix” and realized it was on the right track but wasn't quite complete. A little back and forth produced a solution there as well. And finally I had a version of the code that passed all the tests.

I am cautiously optimistic. By this point I think the tests are fairly comprehensive. And I understand the fixes and they make sense.

As I've come to expect, my results with Claude were mixed. It definitely did some of the grunt work of writing tests. And I have to give it credit for giving up on some of my old code and writing a simpler, more correct version. But it also came up with several incorrect, or at least incomplete, fixes.

I started this thinking I'd spend a few hours playing with AI. It ended up being my main project for a week. Even though it was rare enough that it wasn't really a problem, I'm glad I finally fixed it. (I hope!)

Wednesday, July 16, 2025

Partnering with AI

I'm not a leading-edge AI user. I have used Tabnine for the last few years, but mostly as a smarter auto-complete. Handy and time saving but nothing earth-shaking. Mostly I was a skeptic. But using Cline with Claude Sonnet 4 lately has definitely changed my attitude.

One of the first things I noticed was the "rubber duck" effect. Just the effort of explaining a problem to AI often helps me think of a solution. Explaining a problem to another person (or an AI) is even better because they ask questions and offer solutions. Claude often says "I see the problem" when it's totally off track. That's a little annoying, but it's no worse than what a person would do. And often those wacky ideas can spark new ideas or avenues to explore.

A more surprising aspect is that it feels like I'm collaborating. I've been solo programming for most of my (long) career, and the last five years working remotely have exaggerated that. No one reads the majority of my code or comments on it. It's not that I'm socializing with the AI, but suddenly I have someone I can "talk" to about it. What does this do? Do we need that? What if we did xyz? Isn't that dangerous? It might not "know" the code like a true collaborator, but it's a very close approximation. And it never hurts to be exposed to other ways of doing things.

I don't vibe code with AI. I want the end code to be as good as I can make it. And with current AI that means keeping a very close watch on what it's doing. Like a beginning programmer, it seldom gets a good solution on the first try. It often requires reviewing changes closely and not being afraid to reject them. Often, you have to steer it to an elegant solution. I'm working on heavily used production code, not a throwaway toy project.

The agent approach seems to be a fundamental improvement. It's impressive to watch Claude add debugging statements, run the code, "look" at the output, and repeat until it tracks down the issue. At the same time, it can also make quite blatant mistakes so you need to be watching closely.

Honestly, I miss writing code when I'm working with AI. It's like pair programming with someone who never gives you the keyboard. Of course, there's nothing stopping me from writing some of the code myself, and I do. But when Claude can spit it out faster than I can type, it seems pointless not to let it.

One of Claude's quirks is its relentlessly positive tone. It's always saying "good idea" or "you're absolutely right". I found that a little goofy at first, but once I got used to it, I found I liked it. Who doesn't like positive feedback? I know it's meaningless, but when I use less positive models, I find I miss it.

Monday, July 07, 2025

Super Instructions

The overhead of an interpreter depends on the cost of dispatching relative to the amount of work each instruction does. So one of the ways to reduce the overhead is to make each instruction do more work. One way to do that is “super instructions”: combining several instructions into one. This caught my interest and I decided to see whether it would help with Suneido. The Suneido byte code instructions themselves are mostly simple, which means relatively high dispatch overhead, but most of the time is spent in built-in functions like database queries, which means relatively low overhead overall.

The first step was to determine if there were combinations of instructions that were common enough to justify optimizing them. I added some quick temporary instrumentation to the interpreter. First I looked at sequences of two instructions. The most common accounted for roughly 7% of the instructions, and the top 10 made up 40%. That was actually higher than I expected. I also looked at sequences of three and four instructions, but the most common was only 3%, so I decided to keep it simple and stick to two-instruction sequences.

% op1 op2
7.0 Load Value
6.7 Value CallMethNoNil
5.2 Value Get
4.8 Load Load
4.5 This Value
2.7 Store Pop
2.5 This Load
2.3 Get Value
2.2 Pop Load
2.2 Global CallFuncNoNil
40.1 Total
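The instrumentation itself was trivial, something like this hypothetical sketch (the real interpreter loop and opcode names are different): count adjacent pairs of opcodes as they execute, then sort by frequency.

// A hypothetical sketch of counting adjacent opcode pairs;
// the real interpreter loop and opcodes are different.
package main

import (
    "fmt"
    "sort"
)

type opcode byte

const (
    opLoad opcode = iota
    opValue
    opGet
    opPop
)

func main() {
    // Pretend this is the stream of executed instructions.
    trace := []opcode{opLoad, opValue, opGet, opLoad, opValue, opPop}

    pairCounts := map[[2]opcode]int{}
    for i := 1; i < len(trace); i++ {
        pairCounts[[2]opcode{trace[i-1], trace[i]}]++
    }

    // Sort the pairs by frequency to find candidates for super instructions.
    type pair struct {
        ops   [2]opcode
        count int
    }
    var pairs []pair
    for p, c := range pairCounts {
        pairs = append(pairs, pair{p, c})
    }
    sort.Slice(pairs, func(i, j int) bool { return pairs[i].count > pairs[j].count })
    for _, p := range pairs {
        fmt.Println(p.ops, p.count)
    }
}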

The compiler code generator funnels through a single emit function, so it was easy to modify that to detect the sequences and replace them with combined instructions. And it was easy to add the new instructions to the interpreter itself. If I'd stopped there it would have been a quick and easy one-day project. But, of course, I had to benchmark it. At first that went well: micro benchmarks showed clear improvement, as I expected. But running our application test suite showed a slight slowdown of 1 or 2%. That puzzled me.
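For reference, the combining itself is just a peephole step in emit, conceptually like this hypothetical sketch (made-up opcode names; the real gSuneido emit function is different):

// A hypothetical sketch of emit-time peephole combining;
// the real gSuneido emit function and opcodes are different.
type opcode byte

const (
    opLoad opcode = iota
    opValue
    opLoadValue // super instruction: Load followed by Value
)

// hotPairs maps a frequent two-opcode sequence to its super instruction.
var hotPairs = map[[2]opcode]opcode{
    {opLoad, opValue}: opLoadValue,
}

type emitter struct {
    code []opcode
}

func (e *emitter) emit(op opcode) {
    if n := len(e.code); n > 0 {
        if super, ok := hotPairs[[2]opcode{e.code[n-1], op}]; ok {
            e.code[n-1] = super // back up and replace with the combined instruction
            return
        }
    }
    e.code = append(e.code, op)
}

The real code also has to deal with instruction operands and avoid combining across jump targets. But the implementation wasn't the puzzle; the benchmark result was.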

One possibility was that it slowed down the compiler. Running the test suite requires compiling all the code. The changes did slow down the compiling slightly but compiling is fast enough that it doesn’t even show up in profiling. I optimized the code slightly but that wasn’t the problem.

That left the interpreter. gSuneido uses a giant switch to dispatch instructions. Go doesn’t allow some of the other dispatching methods you could use in, say, C or C++. I had added 10 cases to the switch. Did that slow it down? How did Go implement switches? Searches didn’t turn up a lot, but it sounded like it used a binary search. Running the code in the debugger confirmed this. Maybe adding more cases had added another level to the binary search? I tried rewriting the switch as a hand-written binary search using if-else, optimized for the most frequent instructions. That quickly turned into an ugly mess.

More searching found Go issue 5496 to implement switches with jump tables. The issue appeared to be completed. I used Compiler Explorer to test, and sure enough, Go compiled dense integer switches as jump tables. But not my interpreter switch??? I tried simplifying the switch, ordering the cases, removing fallthrough, etc. But the debugger still showed a binary search. I found the implementation in the Go compiler source code. It seemed straightforward; there were no special requirements. I tried one of my simple examples from Compiler Explorer inside my gSuneido project. That didn't show as a jump table in the debugger either. What the heck!
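For context, the kind of toy example that does get a jump table is a switch over small, dense integer values with enough cases, roughly like this (illustrative only, not the real interpreter switch):

// Illustrative only: a dense integer switch with enough cases that the
// optimizing Go compiler can lower it to a jump table.
func dispatch(op byte) int {
    switch op {
    case 0:
        return 10
    case 1:
        return 20
    case 2:
        return 30
    case 3:
        return 40
    case 4:
        return 50
    case 5:
        return 60
    case 6:
        return 70
    case 7:
        return 80
    default:
        return 0
    }
}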

Back to the Go compiler source code. There was a condition on base.Flag.N. What was that? Oh crap. That's the -N command line option that disables optimization, which is what the debugger uses. So whenever I was using the debugger to look at the assembly language code, I was seeing the unoptimized version. A little digging and I figured out how to use gdb to disassemble the production code, and found it had been using a jump table all along. Argh!

Back to the original question - why was it slower? Maybe the extra code in the interpreter was affecting inlining? I looked at the compiler inlining logging but everything still seemed to be getting inlined correctly.

Looking at the CPU profile, the interpreter time decreased as I'd expect. The only noticeable increase was in garbage collector time. But the changes don't do any allocation; if anything they slightly decrease the size of compiled code. The allocation from the test suite was slightly higher, but the memory profile didn't show anything.

In the end, I gave up. I can't figure out why the overall test suite seems slightly slower. There doesn't seem to be any evidence that it's a result of the changes. Everything else shows an improvement, so I'm going to keep it. I can only guess that the slowdown is a result of perturbing the code, affecting layout, caching, branch prediction, or timing. Modern CPUs are so complex that they are somewhat non-deterministic.

Code is on GitHub as usual.

Monday, March 24, 2025

Copy on Write

Copy on write is an interesting technique with a wide variety of applications. It's somewhat related to persistent immutable data structures, which are really "partial copy on write". Basically it's just lazy or deferred copying, with the addition of reference counting.

It started when I happened to be looking at our memoize code. (That's the correct spelling; it's different from "memorize".) When it returns a mutable object, it makes a defensive copy. Otherwise, if the object was modified it would modify the cached value.

Defensive copies are a standard technique, but they're often inefficient because if the caller doesn't modify the object then the copy was unnecessary.

One solution is to make the cached values read-only. Then they can't be modified and you don't need a defensive copy. But this has two problems. One is that people forget to make it read-only, since it works fine without it. The other is that often you do need to modify the result and then all the callers have to copy.

My first thought was to add an explicit CopyOnWrite method. But most people wouldn't understand the difference or remember to use it. We could use it in Memoize, but that was quite limited.

Then I realized that it probably made sense to just make the existing Copy method always be copy-on-write i.e. deferred or lazy copying. That was assuming that I could implement copy-on-write with low enough overhead that the benefit would outweigh the cost.

The simplest naive approach is to mark both the original and the copy as copy-on-write. But then if you later modified them both, you'd end up making two copies, whereas with normal copying you'd only have made one copy. The solution is to keep a shared "copy count", similar to a reference count for memory management. If the copy count is zero, then you can just modify the object without copying it, since you know you won't affect any other "copies".

When you make a lazy copy, you increment the copy-count. When you do an actual copy to allow modification, you decrement the copy-count. Ideally you'd also decrement the copy-count when an object was garbage collected. (perhaps with the new runtime.AddCleanup in Go 1.24)

One catch is that the copy-count must be shared. At first I thought that meant I had to put the data and copy count in a separate object with an extra level of indirection for all references to the data. Then I realized it was only the copy count that had to be shared. So I just allocated it separately. That meant I could access it with atomic operations which have low overhead.

Luckily I had an existing test for concurrent access to objects. This failed with my changes. The race detector also found problems. Objects are locked while reading or writing, but with copy-on-write there are multiple objects referencing the same data, so locking an object isn't sufficient to protect the data. One solution would be what I previously considered: keeping the data and the copy count together in a separate object, along with a lock. But then we're back to too much overhead.

I found the problem was that I was decrementing the copy count before doing the actual copy. As soon as the copy count went to zero, another thread could think it was ok to modify. I had to decrement the copy count after the actual copy. But that meant checking whether the copy count was 0 separately from the decrement, which meant there was potential for two threads to check the copy count, both find it was 1, and both copy the object. I decided this would happen very rarely, and the only cost was an extra copy.

For once my code was structured so it was quite easy to implement this. Copying was done in a single place and update methods all called a mustBeMutable method. It only took about 40 lines of code.
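A minimal sketch of the pattern, with hypothetical names (the real gSuneido object code is different): the copy count is a separately allocated atomic counter shared by all lazy copies, Copy just increments it, and mustBeMutable does the actual copy, decrementing only after the copy is complete.

// A minimal sketch of copy-on-write with a shared copy count.
// The names are hypothetical; the real gSuneido object code is different.
package main

import (
    "fmt"
    "sync/atomic"
)

type object struct {
    data      []int
    copyCount *atomic.Int32 // allocated separately, shared by all lazy copies
}

func newObject(data []int) *object {
    return &object{data: data, copyCount: new(atomic.Int32)}
}

// Copy is lazy: it shares the data and increments the shared copy count.
func (ob *object) Copy() *object {
    ob.copyCount.Add(1)
    return &object{data: ob.data, copyCount: ob.copyCount}
}

// mustBeMutable is called by every update method. If the data is shared,
// it makes an actual copy first and only then decrements the shared count,
// so another thread can't see a zero count before the copy is complete.
// (Two threads can still both see a count of 1 and both copy; the only
// cost of that rare race is an extra copy.)
func (ob *object) mustBeMutable() {
    if ob.copyCount.Load() > 0 {
        data := make([]int, len(ob.data))
        copy(data, ob.data)
        ob.data = data
        old := ob.copyCount
        ob.copyCount = new(atomic.Int32) // this object is no longer sharing
        old.Add(-1)                      // decrement after the actual copy
    }
}

func (ob *object) Put(i, v int) {
    ob.mustBeMutable()
    ob.data[i] = v
}

func main() {
    a := newObject([]int{1, 2, 3})
    b := a.Copy() // deferred: a and b share data, copy count is 1
    b.Put(0, 99)  // triggers the actual copy; a is unaffected
    a.Put(1, 42)  // copy count is back to 0, so a can modify in place
    fmt.Println(a.data, b.data)
}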

And, pleasantly surprising, this abstraction wasn't leaky: it didn't break or affect any of our application code. Running our application tests, there were roughly 500,000 deferred copies and 250,000 eventual actual copies. So it saved half of the copying - nice!

Saturday, March 15, 2025

Twenty Years

I happened to look at the list on the side of this blog and realized it's been twenty years since I wrote:

I'd better try out this blogging thing.

I've posted 710 times, not a huge number for that length of time, but more than a few. On average, about 35 a year, 3 per month. My most prolific year was 2009 when I averaged a post every 3 days.

Almost all of it is now irrelevant. Sometimes I think I should post more often, other times I think it's pointless and I shouldn't bother at all. But I don't regret the time I spent on it. If nothing else it often forced me to organize my thoughts and experiences and try to communicate them clearly. You could always imagine that someone would be helped by a post they found through a search engine. Of course, search engines are going downhill and increasingly people rely on AI summaries, which makes blogs mere fodder for AI.

This blog has been on Google Blogger the whole 20 years. As much as I don't care for Google these days, back then they were the rising star. I can't think of another blogging service that has been around since then. It's amazing they haven't discontinued Blogger like they did Reader. One of these days I should move to a different platform.

Twenty years seems like a long time until I remember I've been programming for 50 years. As much as technology has changed, programming is really not that much different than it was 50 years ago. It's been a fascinating career. If AI replaces programmers, our generation of programmers might be the last/only to enjoy it.

Monday, March 03, 2025

return throw

In a programming language with exceptions (like Suneido), one of the API design questions is when to return an error, and when to throw it.

It's helpful to distinguish between "queries" and "commands" (command-query separation). A query returns a result. A command performs an action; it may or may not have a return value.

For dynamically typed languages like Suneido, query functions can easily return an error value like false or a string. If you forget to check for errors, it will usually cause a runtime error. But if you forget to check for errors from a command function, they'll be lost. It might cause a problem later, but that can be hard to debug.

From the perspective of using the API, both approaches have pros and cons.

  • checking a return value is generally faster and less code than try-catch
  • it's too easy to forget to check a command return value, especially if failure is rare
  • you can't forget to check an exception (unless you have a try-catch somewhere that deliberately ignores exceptions)
  • return values have to be checked on every call, exceptions can be handled for larger scopes

C++ and C have nodiscard, and Java has the JSR-305 @CheckReturnValue annotation, but these are static compile-time checks, not a good fit for a dynamic language like Suneido.

I came up with the idea of "return throw" (avoiding the need for a new keyword). It returns a value like a regular return. But if the return value is not used (discarded) then it throws an exception.

As I started to use this, I realized it could be more dynamic. A successful return could be just a regular (ignorable) return, whereas error returns could be return throw.

if error
    return throw false
return true

That got a little repetitive, so I changed return throw to automatically treat true and "" as success, i.e. a return throw of true or "" is treated as a normal return and the result can be ignored, but a return throw of any other value, e.g. false or "invalid foo", throws if the result is ignored.

Another issue was that if F() did a return throw, and G() did return F(), that shouldn't be treated as "using" the result. Luckily that turned out to be relatively easy to handle.

I made a bunch of the built-in functions return throw and that has helped track down a few issues. Otherwise it isn't too popular yet. Hopefully it will prove worthwhile in the long run.

Tuesday, February 18, 2025

Go, Cgo, and Syscall

I recently overhauled the Windows interface for gSuneido. The Go documentation for this is sparse and scattered and sometimes hard to interpret, so here are some notes on my current understanding of how Cgo and Syscall should be used.

The big issue is interfacing between typed, garbage collected Go and external code. Go doesn't "see" values in external code. If values are garbage collected by Go while still in use externally, it can cause corruption and crashes. This is hard to debug: it happens randomly, often rarely, and is hard to reproduce.

The other issue with Go is that thread stacks can expand. This means the stack moves, which means the addresses of values on the stack change. Unlike C or C++, in Go you don't control which values are on the stack and which are on the heap. This leads to the need to "pin" values, so they don't move while being referenced by external code. (There is also the potential that the garbage collector might move values in a future version.)

Go pointers passed as function arguments to C functions have the memory they point to implicitly pinned for the duration of the call. link

Note the "Go pointers" part and remember that uintptr is not a pointer. If you want to pass "untyped" pointer values to C functions, you can use void* which has the nice feature that unsafe.Pointer converts automatically to void* without requiring an explicit cast.

SyscallN takes its arguments as uintptr. This is problematic because uintptr is just a pointer-sized integer; it does not protect the value from being garbage collected. So SyscallN has special behavior built into the compiler: if you convert a pointer to uintptr in the argument to SyscallN, it will keep the pointer alive during the call. It is unclear to me whether this applies to Cgo calls. Do they qualify as "implemented in assembly"? Strangely, SyscallN has no documentation. And the one example uses the deprecated Syscall instead of SyscallN. And you won't even find SyscallN unless you add "?GOOS=windows" to the URL.

If a pointer argument must be converted to uintptr for use as an argument, that conversion must appear in the call expression itself. The compiler handles a Pointer converted to a uintptr in the argument list of a call to a function implemented in assembly by arranging that the referenced allocated object, if any, is retained and not moved until the call completes, even though from the types alone it would appear that the object is no longer needed during the call. link

The examples for this generally show both the uintptr conversion and the unsafe.Pointer call in the argument, for example:

syscall.SyscallN(address, uintptr(unsafe.Pointer(&foo)))

My assumption (?) is that the key part is the uintptr cast and that it's ok to do the unsafe.Pointer outside of the argument, for example:

p := unsafe.Pointer(&foo)
syscall.SyscallN(address, uintptr(p))

What you do NOT want to do is:

p := uintptr(unsafe.Pointer(&foo))
// WRONG: foo is not referenced or pinned at this point
syscall.SyscallN(address, p)

To prevent something from being garbage collected prematurely you can use runtime.KeepAlive which is simply a way to add an explicit reference to a value, to prevent it from being garbage collected before that point in the code. KeepAlive is mostly described as a way to prevent finalizers from running, but it also affects garbage collection.

KeepAlive marks its argument as currently reachable. This ensures that the object is not freed, and its finalizer is not run, before the point in the program where KeepAlive is called. link
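For example (fn, doMoreWork, and foo are just placeholders here), if the external code keeps using the pointer after the call returns, a KeepAlive after the last external use keeps foo reachable until that point:

fn(unsafe.Pointer(&foo)) // external code stores the pointer
doMoreWork()             // external code may still be using foo
runtime.KeepAlive(foo)   // so foo must stay reachable until here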

However, it does not prevent stack values from moving. For that you can use runtime.Pinner.

Pin pins a Go object, preventing it from being moved or freed by the garbage collector until the Pinner.Unpin method has been called. A pointer to a pinned object can be directly stored in C memory or can be contained in Go memory passed to C functions. If the pinned object itself contains pointers to Go objects, these objects must be pinned separately if they are going to be accessed from C code. link
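The usage is straightforward (fn and foo are placeholders):

var pinner runtime.Pinner
pinner.Pin(&foo)         // foo won't move or be freed while pinned
fn(unsafe.Pointer(&foo)) // external code can hold this pointer for now
// ... later, once the external code is done with it:
pinner.Unpin()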

Passing Go strings to C functions is awkward. Cgo has C.CString, but it mallocs, so you have to make sure you free the result. Another option is:

buf := make([]byte, len(s)+1) // +1 for nul terminator
copy(buf, s)
fn((*C.char)(unsafe.Pointer(&buf[0])))

If you don't need to add a nul terminator, you can use:

buf := []byte(s)
fn((*C.char)(unsafe.Pointer(&buf[0])))

Or if you are sure the external function doesn't modify the string, and you don't need a nul terminator, you can live dangerously and pass a direct pointer to the Go string with:

fn((*C.char)(unsafe.Pointer(unsafe.StringData(s))))

One thing to watch out for is that Go doesn't allow &buf[0] if len(buf) == 0. If it's possible for the length to be zero, you can use unsafe.SliceData(buf) instead.

There is a certain amount of run-time checking for some of this.

The checking is controlled by the cgocheck setting of the GODEBUG environment variable. The default setting is GODEBUG=cgocheck=1, which implements reasonably cheap dynamic checks. These checks may be disabled entirely using GODEBUG=cgocheck=0. Complete checking of pointer handling, at some cost in run time, is available by setting GOEXPERIMENT=cgocheck2 at build time. link

Sorry for the somewhat disorganized post. Hopefully if someone stumbles on this it might help. Or maybe it'll just be gobbled up and regurgitated by AI.

See also:
Cgo documentation
Go Wiki: cgo
Addressing CGO pains, one at a time