Monday, March 24, 2025

Copy on Write

Copy on write is an interesting technique with a wide variety of applications. It's somewhat related to persistent immutable data structures, which are really "partial copy on write". Basically it's just lazy or deferred copying, with the addition of reference counting.

It started when I happened to be looking at our memoize code. (That's the correct spelling; it's different from "memorize".) When it returns a mutable object, it makes a defensive copy. Otherwise, if the caller modified the object it would modify the cached value.

Defensive copies are a standard technique, but they're often inefficient because if the caller doesn't modify the object then the copy was unnecessary.

One solution is to make the cached values read-only. Then they can't be modified and you don't need a defensive copy. But this has two problems. One is that people forget to make the value read-only, since it works fine without it. The other is that often you do need to modify the result, and then every caller has to make its own copy.

My first thought was to add an explicit CopyOnWrite method. But most people wouldn't understand the difference or remember to use it. We could use it in Memoize, but that was quite limited.

Then I realized that it probably made sense to just make the existing Copy method always be copy-on-write i.e. deferred or lazy copying. That was assuming that I could implement copy-on-write with low enough overhead that the benefit would outweigh the cost.

The simplest naive approach is to mark both the original and the copy as copy-on-write. But then if you later modified them both, you'd end up making two copies, whereas with normal copying you'd only have made one copy. The solution is to keep a shared "copy count", similar to a reference count for memory management. If the copy count is zero, then you can just modify the object without copying it, since you know you won't affect any other "copies".

When you make a lazy copy, you increment the copy-count. When you do an actual copy to allow modification, you decrement the copy-count. Ideally you'd also decrement the copy-count when an object was garbage collected. (perhaps with the new runtime.AddCleanup in Go 1.24)

One catch is that the copy-count must be shared. At first I thought that meant I had to put the data and copy count in a separate object with an extra level of indirection for all references to the data. Then I realized it was only the copy count that had to be shared. So I just allocated it separately. That meant I could access it with atomic operations which have low overhead.

Luckily I had an existing test for concurrent access to objects. This failed with my changes. The race detector also found problems. Objects are locked while reading or writing. But with copy-on-write there are multiple objects referencing the same data. Locking an object isn't sufficient to protect the data. One solution would be what I previously considered - keeping the data and the copy count together in a separate object, along with a lock. But then we're back to too much overhead.

I found the problem was that I was decrementing the copy count before doing the actual copy. But as soon as the copy count went to zero, another thread could think it was ok to modify the data. I had to decrement the copy count after the actual copy. But that meant checking whether the copy count was zero separately from the decrement, which left the potential for two threads to check the copy count, both find it was 1, and both copy the object. I decided this would happen very rarely, and the only cost was an extra copy.

For once my code was structured so it was quite easy to implement this. Copying was done in a single place and update methods all called a mustBeMutable method. It only took about 40 lines of code.
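Roughly, the pattern looks like this (a minimal sketch with hypothetical names and a simplified data field, not the actual gSuneido code):

import "sync/atomic"

// copyCount is shared between all lazy copies of the same data and is
// allocated separately so it can be updated with cheap atomic operations.
type object struct {
    data      []int // stands in for the real object contents
    copyCount *int32
}

// Copy is the lazy (deferred) copy: it just bumps the shared count.
func (ob *object) Copy() *object {
    atomic.AddInt32(ob.copyCount, 1)
    return &object{data: ob.data, copyCount: ob.copyCount}
}

// mustBeMutable is called by every update method before modifying.
// If the data is shared, it does the actual copy first and only then
// decrements the shared count (decrementing first opens a race).
func (ob *object) mustBeMutable() {
    if atomic.LoadInt32(ob.copyCount) > 0 {
        ob.data = append([]int(nil), ob.data...) // the actual copy
        atomic.AddInt32(ob.copyCount, -1)
        ob.copyCount = new(int32) // the fresh copy isn't shared
    }
    // two threads can both see a count of 1 and both copy;
    // that's rare, and the only cost is an extra copy
}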

And, pleasantly surprising, this abstraction wasn't leaky and it didn't break or affect any of our application code. Running our application tests, there were roughly 500,000 deferred copies and 250,000 eventual actual copies. So it saved half of the copying - nice!

Saturday, March 15, 2025

Twenty Years

I happened to look at the list on the side of this blog and realized it's been twenty years since I wrote:

I'd better try out this blogging thing.

I've posted 710 times, not a huge number for that length of time, but more than a few. On average, about 35 a year, 3 per month. My most prolific year was 2009 when I averaged a post every 3 days.

Almost all of it is now irrelevant. Sometimes I think I should post more often, other times I think it's pointless and I shouldn't bother at all. But I don't regret the time I spent on it. If nothing else it often forced me to organize my thoughts and experiences and try to communicate them clearly. You could always imagine that someone would be helped by a post they found through a search engine. Of course, search engines are going downhill and increasingly people rely on AI summaries, which makes blogs mere fodder for AI.

This blog has been on Google Blogger the whole 20 years. As much as I don't care for Google these days, back then they were the rising star. I can't think of another blogging service that has been around since then. It's amazing they haven't discontinued Blogger like they did Reader. One of these days I should move to a different platform.

Twenty years seems like a long time until I remember I've been programming for 50 years. As much as technology has changed, programming is really not that much different than it was 50 years ago. It's been a fascinating career. If AI replaces programmers, our generation of programmers might be the last/only to enjoy it.

Monday, March 03, 2025

return throw

In a programming language with exceptions (like Suneido), one of the API design questions is when to return an error, and when to throw it.

It's helpful to distinguish "queries" and "commands". (Command-query separation) A query returns a result. A command performs an action, it may or may not have a return value.

In a dynamically typed language like Suneido, query functions can easily return an error value like false or a string. If you forget to check for errors, using the result will usually cause a runtime error. But if you forget to check the result of a command function, the error will be lost. It might cause a problem later, but that can be hard to debug.

From the perspective of using the API, both approaches have pros and cons.

  • checking a return value is generally faster and less code than try-catch
  • it's too easy to forget to check a command return value, especially if failure is rare
  • you can't forget to check an exception (unless you have a try-catch somewhere that deliberately ignores exceptions)
  • return values have to be checked on every call, exceptions can be handled for larger scopes

C++ and C have [[nodiscard]] and Java has the JSR-305 @CheckReturnValue annotation, but these are static compile-time checks, not a good fit for a dynamic language like Suneido.

I came up with the idea of "return throw" (avoiding the need for a new keyword). It returns a value like a regular return. But if the return value is not used (discarded) then it throws an exception.

As I started to use this, I realized it could be more dynamic. A successful return could be just a regular (ignorable) return, whereas error returns could be return throw:

if error
    return throw false
return true

That got a little repetitive, so I changed return throw to automatically treat true and "" as success, i.e. a return throw of true or "" would be treated as a normal return and the result could be ignored. But a return throw of any other value, e.g. false or "invalid foo", would throw if the result was ignored.
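In other words, the rule boils down to something like this (a Go sketch of the logic, not how gSuneido actually implements it):

// shouldThrow reports whether a discarded "return throw" value should throw.
func shouldThrow(result any, resultUsed bool) bool {
    if resultUsed {
        return false // the caller checked the result, nothing to do
    }
    // true and "" count as success and may be ignored;
    // any other discarded value (false, "invalid foo", ...) throws
    return result != true && result != ""
}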

Another issue was that if F() did return throw, and G() did return F(), that shouldn't be treated as "using" the result. Luckily that turned out to be relatively easy to handle.

I made a bunch of the built-in functions return throw and that has helped track down a few issues. Otherwise it isn't too popular yet. Hopefully it will prove worthwhile in the long run.

Tuesday, February 18, 2025

Go, Cgo, and Syscall

I recently overhauled the Windows interface for gSuneido. The Go documentation for this is sparse and scattered and sometimes hard to interpret, so here are some notes on my current understanding of how Cgo and Syscall should be used.

The big issue is interfacing between typed and garbage collected Go and external code. Go doesn't "see" values in external code. If values are garbage collected by Go while still in use externally, it can cause corruption and crashes. This is hard to debug; it happens randomly, often rarely, and is hard to reproduce.

The other issue with Go is that thread stacks can expand. This means the stack moves, which means the addresses of values on the stack change. Unlike C or C++, in Go you don't control which values are on the stack and which are on the heap. This leads to the need to "pin" values, so they don't move while being referenced by external code. (There is also the potential that the garbage collector might move values in a future version.)

Go pointers passed as function arguments to C functions have the memory they point to implicitly pinned for the duration of the call. link

Note the "Go pointers" part and remember that uintptr is not a pointer. If you want to pass "untyped" pointer values to C functions, you can use void* which has the nice feature that unsafe.Pointer converts automatically to void* without requiring an explicit cast.

SyscallN takes its arguments as uintptr. This is problematic because uintptr is just a pointer sized integer. It does not protect the value from being garbage collected. So SyscallN has special behavior built into the compiler. If you convert a pointer to uintptr in the argument to SyscallN it will keep the pointer alive during the call. It is unclear to me whether this applies to Cgo calls. Do they qualify as "implemented in assembly"? Strangely, SyscallN has no documentation. And the one example uses the deprecated Syscall instead of SyscallN. And you won't even find SyscallN unless you add "?GOOS=windows" to the URL.

If a pointer argument must be converted to uintptr for use as an argument, that conversion must appear in the call expression itself. The compiler handles a Pointer converted to a uintptr in the argument list of a call to a function implemented in assembly by arranging that the referenced allocated object, if any, is retained and not moved until the call completes, even though from the types alone it would appear that the object is no longer needed during the call. link

The examples for this generally show both the uintptr cast and the unsafe.Pointer call in the argument, for example:

syscall.SyscallN(address, uintptr(unsafe.Pointer(&foo)))

My assumption (?) is that the key part is the uintptr cast and that it's ok to do the unsafe.Pointer outside of the argument, for example:

p := unsafe.Pointer(&foo)
syscall.SyscallN(address, uintptr(p))

What you do NOT want to do is:

p := uintptr(unsafe.Pointer(&foo))
// WRONG: foo is not referenced or pinned at this point
syscall.SyscallN(address, p)

To prevent something from being garbage collected prematurely you can use runtime.KeepAlive, which is simply a way to add an explicit reference to a value, preventing it from being garbage collected before that point in the code. KeepAlive is mostly described as a way to prevent finalizers from running, but it also affects garbage collection.

KeepAlive marks its argument as currently reachable. This ensures that the object is not freed, and its finalizer is not run, before the point in the program where KeepAlive is called. link
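For example (a sketch; proc and buf are placeholders), if the uintptr conversion can't be written in the call expression itself, KeepAlive at least keeps the value reachable until the call has returned:

buf := make([]byte, 256)
p := uintptr(unsafe.Pointer(&buf[0]))
// ... other setup ...
syscall.SyscallN(proc, p, uintptr(len(buf)))
// keeps buf reachable (but not pinned) until here; without this the GC
// could in principle free buf while only the integer p refers to it
runtime.KeepAlive(buf)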

However, it does not prevent stack values from moving. For that you can use runtime.Pinner:

Pin pins a Go object, preventing it from being moved or freed by the garbage collector until the Pinner.Unpin method has been called. A pointer to a pinned object can be directly stored in C memory or can be contained in Go memory passed to C functions. If the pinned object itself contains pointers to Go objects, these objects must be pinned separately if they are going to be accessed from C code. link
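A sketch of using it around an external call (address and foo are placeholders):

var pinner runtime.Pinner
pinner.Pin(&foo)
// foo can't be moved or freed until Unpin, so its address stays valid
syscall.SyscallN(address, uintptr(unsafe.Pointer(&foo)))
pinner.Unpin()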

Passing Go strings to C functions is awkward. Cgo has C.CString, but it mallocs, so you have to make sure you free the result. Another option is:

buf := make([]byte, len(s)+1) // +1 for nul terminator
copy(buf, s)
fn((*C.char)(unsafe.Pointer(&buf[0])))

If you don't need to add a nul terminator, you can use:

buf := []byte(s)
fn((*C.char)(unsafe.Pointer(&buf[0])))

Or if you are sure the external function doesn't modify the string, and you don't need a nul terminator, you can live dangerously and pass a direct pointer to the Go string with:

fn((*C.char)(unsafe.Pointer(unsafe.StringData(s))))

One thing to watch out for is that &buf[0] will panic if len(buf) == 0. If it's possible for the length to be zero, you can use unsafe.SliceData(buf) instead.
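For completeness, the C.CString route mentioned earlier looks roughly like this (fn is still a placeholder, and C.free requires stdlib.h in the cgo preamble):

cs := C.CString(s) // malloc'ed copy of s with a nul terminator
defer C.free(unsafe.Pointer(cs))
fn(cs)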

There is a certain amount of run time checking for some of this.

The checking is controlled by the cgocheck setting of the GODEBUG environment variable. The default setting is GODEBUG=cgocheck=1, which implements reasonably cheap dynamic checks. These checks may be disabled entirely using GODEBUG=cgocheck=0. Complete checking of pointer handling, at some cost in run time, is available by setting GOEXPERIMENT=cgocheck2 at build time. link

Sorry for the somewhat disorganized post. Hopefully if someone stumbles on this it might help. Or maybe it'll just be gobbled up and regurgitated by AI.

See also:
Cgo documentation
Go Wiki: cgo
Addressing CGO pains, one at a time

Saturday, November 09, 2024

Current Tools

I'm posting this mostly because it's interesting to look back and see how things change.

Hardware

Desktop - 27" 2017 iMac Pro. Considering it's 8 years old, performance is still quite reasonable. It's a 3.2 GHz 8 core Intel Xeon with 64 GB of DDR4 memory.

Laptop - 16" 2019 MacBook Pro with a 2.3 GHz 8 core Intel Core i9, also with 64 GB of DDR4 memory.

Apple could almost certainly have sold me a new machine, if they still made 27" iMacs, and if they didn't charge a fortune to get 64 GB of memory. Also, moving to an Apple CPU would make it harder to run a Windows VM.

Keyboard - For the last 5 years I've been using a Varmilo mechanical keyboard with Cherry MX Silver switches.

NAS - Synology DS920+ NAS with four 10 TB drives for 26 TB of redundant capacity.

Wifi - Netgear Orbi RBR50 + Satellite

Software

IDE/editor - VSCodium - the open source version of VSCode, without tracking. Prior to VSCode I used Microsoft Visual Studio for C++ and Eclipse for Java.

Programming Language - Go - I made a bet on Go over 10 years ago when it was still new. Back then it was barely production ready. Luckily, it has improved hugely and has become relatively mainstream. Like any language, it has its quirks, but for the most part I'm happy with it. Rust is interesting, but Suneido needs garbage collection. I don't miss C++ or Java.

Version Control - Git & Github

Parallels VM's for Windows and Linux

Browser - Firefox - I'm not particularly a fan of Firefox and I'd rather Mozilla would focus on making a good browser instead of going off on tangents like the AI bandwagon. But I hate the thought of Google having a 100% monopoly on browsers. I could use Safari but I'm not crazy about the megacorp that Apple has become either.

Notes - Obsidian - I used Evernote for a long time, then moved to Joplin, and finally to Obsidian for the last few years. I like that it doesn't have a proprietary database, just markdown files in directories. And I'm happy with its WYSIWYG markdown editor. Unfortunately, it's not open source.

Dropbox - I used to rely on Dropbox to keep my office and home computers in sync. Now that I work from home full time it's more to keep my desktop and laptop in sync. It's also nice to be able to access files from my tablet or phone occasionally.

Antivirus - Bitdefender

Cloud Backups - Backblaze

Sync - Chronosync - to mirror my 5 TB of photos to the Synology NAS

Local Backups - Restic - More for developers than consumers but seems to work well. I've tried various other backup programs and haven't liked any of them.

Apple's Time Machine is a great concept but it's never been reliable for me. It'll stop backing up without any kind of notification. Or the backups will become corrupted and you have to start over. I suspect the combination of using a NAS and having a huge number of files is too much for it.

Sunday, November 03, 2024

Go range over integer gotcha

Go version 1.22 introduced the ability to do a for loop ranging over an integer. For example:

for i := range 5

This is normally explained as equivalent to:

for i := 0; i < 5; i++

Recently I decided to go through my code and update to the new style. It was mostly fairly mechanical search and replace. But when I finished and ran the tests, there were multiple failures. I figured I'd just made a typo somewhere in my editing. But that wasn't the case. The places that were failing were modifying the loop variable. For example:

for i := 0; i < 5; i++ {
    fmt.Println(i)
    i++
}
=> 0, 2, 4

for i := range 5 {
    fmt.Println(i)
    i++
}
=> 0, 1, 2, 3, 4

I don't see anything in the language spec about this.

The accepted proposal says:

If n is an integer type, then for x := range n { ... } would be completely equivalent to for x := T(0); x < n; x++ { ... }, where T is the type of n (assuming x is not modified in the loop body).

There might be some discussion of this but I didn't find it. It may relate to how closures capture the loop variable. In Go 1.22 this was changed so each iteration has its own iteration variable(s). So if each iteration gets its own variable, that kind of explains why modifying it doesn't have any effect on subsequent iterations.

But even if you declare the variable outside the for statement, it still behaves the same:

var i int
for i = range 5 {
    fmt.Println(i)
    i++
}
=> 0, 1, 2, 3, 4

It's not really a major issue, it just means if you want to modify the loop variable you need to use the old style of for loop. The unfortunate part is that you can't mechanically replace one with the other. You have to check whether the body of the loop modifies the variable. I don't typically do this but occasionally it can be useful.

Thursday, October 03, 2024

String Allocation Optimization

This post follows on from the previous one. (Go Interface Allocation) After realizing the allocation caused by assigning strings to interfaces (common in gSuneido), I started thinking about how this could be improved. I had a few ideas:

  • predefine all 256 one byte strings (common when looping over a string by character)
  • implement a "small string" type (i.e. a one byte size and 15 byte array in 16 bytes, the same space as a Go string pointer + size)
  • intern certain strings with the new Go unique package (unique.Make returns a Handle which is a pointer so it could be stored in an interface without allocation)
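For reference, the unique package idea (not implemented) would look something like this:

h := unique.Make(s) // unique.Handle[string], a single pointer-sized value
// storing h in an interface doesn't allocate the way storing s does,
// and the original string is recovered with:
s2 := h.Value()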

Unfortunately, the last two require introducing new string types. That's feasible (gSuneido already has an additional SuConcat string type to optimize concatenation) but it would take some work.

Luckily the first idea was easy and didn't require a new string type.

func SuStr1(s string) Value {
    if len(s) == 1 {
        return SuStr1s[s[0]]
    }
    return SuStr(s)
}

var SuStr1s = [256]Value{
    SuStr("\x00"), SuStr("\x01"), SuStr("\x02"), SuStr("\x03"),
    ...
    SuStr("\xfc"), SuStr("\xfd"), SuStr("\xfe"), SuStr("\xff"),
}

It didn't make much difference overall but it was easy to see the improvements on specific types of code. For a simple change I think it was worth it. (Thanks to "AI" for auto-completing most of that 256 element array.)

See the commit on GitHub