Copy-on-write is an interesting technique with a wide variety of applications. It's somewhat related to persistent immutable data structures, which are really "partial copy-on-write". Basically it's just lazy or deferred copying, with the addition of reference counting.
It started when I happened to be looking at our memoize code. (That's the correct spelling; it's different from "memorize".) When it returns a mutable object, it makes a defensive copy. Otherwise, if the caller modified the returned object, it would modify the cached value.
Defensive copies are a standard technique, but they're often inefficient because if the caller doesn't modify the object then the copy was unnecessary.
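For illustration, here's a minimal sketch of that pattern in Go. The names and types are hypothetical, not our actual memoize code:

```go
// memoize caches f's results by key. Because the cached values are
// mutable slices, it returns a defensive copy so a caller that modifies
// the result can't corrupt the cached value.
func memoize(f func(string) []int) func(string) []int {
	cache := make(map[string][]int)
	return func(key string) []int {
		v, ok := cache[key]
		if !ok {
			v = f(key)
			cache[key] = v
		}
		result := make([]int, len(v))
		copy(result, v) // defensive copy; wasted if the caller never modifies it
		return result
	}
}
```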
One solution is to make the cached values read-only. Then they can't be modified and you don't need a defensive copy. But this has two problems. One is that people forget to make the result read-only, since everything works fine without it. The other is that often you do need to modify the result, and then every caller has to make its own copy.
My first thought was to add an explicit CopyOnWrite method. But most people wouldn't understand the difference or remember to use it. We could use it in Memoize, but that was quite limited.
Then I realized that it probably made sense to just make the existing Copy method always be copy-on-write, i.e. deferred or lazy copying. That assumed I could implement copy-on-write with low enough overhead that the benefit would outweigh the cost.
The simplest naive approach is to mark both the original and the copy as copy-on-write. But then if you later modified them both, you'd end up making two copies, whereas with normal copying you'd only have made one copy. The solution is to keep a shared "copy count", similar to a reference count for memory management. If the copy count is zero, then you can just modify the object without copying it, since you know you won't affect any other "copies".
When you make a lazy copy, you increment the copy count. When you do an actual copy to allow modification, you decrement the copy count. Ideally you'd also decrement the copy count when an object was garbage collected (perhaps with the new runtime.AddCleanup in Go 1.24).
One catch is that the copy-count must be shared. At first I thought that meant I had to put the data and copy count in a separate object with an extra level of indirection for all references to the data. Then I realized it was only the copy count that had to be shared. So I just allocated it separately. That meant I could access it with atomic operations which have low overhead.
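Putting those pieces together, here's a rough sketch of the scheme in Go. The container type and its fields are hypothetical; the real code is our object implementation:

```go
import "sync/atomic"

// value is a hypothetical container. The copy count is allocated
// separately so that lazy copies can share it, without an extra level
// of indirection for the data itself.
type value struct {
	data      []int
	copyCount *atomic.Int32 // shared by all lazy copies of this data
}

func newValue(data []int) *value {
	return &value{data: data, copyCount: new(atomic.Int32)}
}

// Copy is lazy: it shares the data and increments the shared copy count.
// (A GC cleanup, e.g. runtime.AddCleanup, could decrement the count
// when a copy is collected without ever having been modified.)
func (v *value) Copy() *value {
	v.copyCount.Add(1)
	return &value{data: v.data, copyCount: v.copyCount}
}
```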
Luckily I had an existing test for concurrent access to objects. It failed with my changes, and the race detector found problems too. Objects are locked while reading or writing, but with copy-on-write there are multiple objects referencing the same data, so locking an object isn't sufficient to protect the data. One solution would be what I'd previously considered: keeping the data and the copy count in a separate shared object, along with a lock. But then we're back to too much overhead.
I found the problem was that I was decrementing the copy count before doing the actual copy. But as soon as the copy count went to zero, another thread could think it was safe to modify the shared data. I had to decrement the copy count after the actual copy. But that meant checking whether the copy count was zero separately from the decrement, which left the potential for two threads to both check the copy count, both find it was 1, and both copy the object. I decided this would happen very rarely, and the only cost was an extra copy.
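In terms of the sketch above, the fixed ordering looks something like this (assuming, as in our code, that the caller already holds the object's own lock):

```go
// mustBeMutable is called by every update method before it modifies data.
// The copy count is the only state shared across objects, so it is
// accessed atomically; everything else is protected by the object's lock.
func (v *value) mustBeMutable() {
	if v.copyCount.Load() > 0 { // the data is shared, do the actual copy
		data2 := make([]int, len(v.data))
		copy(data2, v.data)
		old := v.copyCount
		v.data = data2
		v.copyCount = new(atomic.Int32) // fresh, unshared count
		old.Add(-1) // decrement only after copying, avoiding the race
	}
}
```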
For once my code was structured so it was quite easy to implement this. Copying was done in a single place and update methods all called a mustBeMutable method. It only took about 40 lines of code.
And, a pleasant surprise: this abstraction wasn't leaky, and it didn't break or affect any of our application code. Running our application tests produced roughly 500,000 deferred copies and only 250,000 eventual actual copies. So it saved half of the copying. Nice!