Sunday, September 21, 2025

AI Coding Tools

A few people have asked what tools I'm using for AI coding, so I figured I'd snapshot what I'm using right now. Given how fast AI is changing, it will probably be different six months from now. I haven't tested different tools extensively, so don't take this as expert advice, just one data point.

For tab completion I've been using Amp Tab, the tab-completion part of Amp, which is Sourcegraph's second-generation AI coding tool (the successor to Cody). Amp itself is a little too aggressive for my style of programming; I prefer to review changes closely before applying them. Currently Amp Tab is experimental and still free. My understanding is that it's similar to Cursor, which I haven't tried because I prefer to use standard VSCodium. Even Amp Tab can be a little aggressive for me sometimes. I have to be careful about hitting tab to indent a line, since it's liable to go and make changes to my code.

For investigating, reviewing, or writing code I've been using Cline as a VSCode extension. Cline is open source and lets you pick your AI model and provider. I've been using OpenRouter so I can try out different models. You can also use Cline as the provider, which I might have done if I'd realized it before I signed up with OpenRouter.

Cline has a "planning" mode which is basically read-only, and an "act" mode where it makes code changes. Even in act mode I require approval for code changes.

As far as models, I started out with Claude Sonnet 3.5 and progressed to 3.7 and now 4. I've also tried a few others like GPT-5 and grok-code-fast-1. There isn't a huge difference; they can all do well or mess up badly, but I tend to go back to Claude Sonnet 4 even though it's one of the more expensive ones. The last few months I've been spending about $50 per month on model usage. It's worth it for me, as much for the learning experience as for the actual code produced. If you didn't want to (or couldn't afford to) spend money on it, there are usually free or cheap options.

For general research I've been using Gemini 2.5 Pro, mostly because it's included with our company Gmail/Google accounts. It works well to research algorithms or data structures.

The big question these days is whether programmers are actually more productive using AI. There have been studies showing that although programmers feel more productive, they actually aren't. It sounds a bit like multi-tasking. I wouldn't say it's made a huge difference to my productivity. Some types of tasks go quicker, but for others AI can become a big time waster. I would say the quality of my code might be slightly higher from having more tests and more reviews.

Sunday, September 07, 2025

A Priority Queue with sync.Cond

The good news is Claude (Sonnet 4) found a bug for me.

The bad news is that Claude wrote the code that had the bug.

It's not all Claude's fault though. Garbage in, garbage out. When I searched on the internet for an example, I found a post with the same bug. I left a comment explaining the bug, hopefully it'll get corrected.

The conversation history was somewhat amusing as well (paraphrased):

Me: we could write a priority queue

Claude: that would be complex

Me: but we could encapsulate the complexity in the queue package

Claude: good idea

Me: write a priority queue (followed by the requirements)

Claude: no problem, here you go

The code for a concurrent producer-consumer queue is not that complicated, even with specific priority and ordering constraints.

I also got Claude to write tests and benchmarks, and everything seemed to work well. The benchmarks showed it had comparable performance to a Go channel. I changed my code to use the new queue (replacing a simple Go channel) and it worked fine. It even appeared to show the hoped-for performance improvements. I got Claude to write more benchmarks to measure performance better. But the benchmarks would hang sometimes. I tried to get Claude to fix it, but it just thrashed around rewriting the benchmark different ways. I suspected a deadlock in the priority queue, so I wrote a better test for that. It appeared to work fine, until I increased the number of goroutines to 16 or 32; then it would hang consistently with a deadlock. I gave the simple failing test to Claude and it immediately spotted the bug.

The "obvious" approach is to use a single condition variable (sync.Cond) for the queue. Put and Get wait on the Cond and then signal it. It seems straightforward, and under low concurrency it appears to work. But under high concurrency, it can deadlock. The problem is that Signal only wakes up one goroutine. If the queue is full and it wakes up a producer, or the queue is empty and it wakes up a consumer, it will deadlock.

One solution is to use Broadcast to wake up all waiters, but that can be inefficient with many goroutines. The better solution is to use two sync.Cond values, one for not-full and one for not-empty. Put waits on not-full and signals not-empty. Get waits on not-empty and signals not-full. I should have read the Wikipedia article instead of trusting AI generated code.
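Here's a rough sketch of the two-condition-variable approach. This is not the actual priority queue from the post, just a minimal bounded FIFO of ints with made-up names, to show the Put/Get signaling pattern:

```go
package main

import (
	"fmt"
	"sync"
)

// BoundedQueue is an illustrative bounded FIFO using two sync.Cond
// values sharing one mutex: notFull for producers, notEmpty for consumers.
type BoundedQueue struct {
	mu       sync.Mutex
	notFull  *sync.Cond
	notEmpty *sync.Cond
	items    []int
	capacity int
}

func NewBoundedQueue(capacity int) *BoundedQueue {
	q := &BoundedQueue{capacity: capacity}
	q.notFull = sync.NewCond(&q.mu)
	q.notEmpty = sync.NewCond(&q.mu)
	return q
}

// Put blocks while the queue is full, then signals a waiting consumer.
func (q *BoundedQueue) Put(v int) {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == q.capacity { // always re-check in a loop
		q.notFull.Wait()
	}
	q.items = append(q.items, v)
	q.notEmpty.Signal() // wakes a consumer, never another producer
}

// Get blocks while the queue is empty, then signals a waiting producer.
func (q *BoundedQueue) Get() int {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 {
		q.notEmpty.Wait()
	}
	v := q.items[0]
	q.items = q.items[1:]
	q.notFull.Signal() // wakes a producer, never another consumer
	return v
}

func main() {
	const producers, perProducer = 16, 1000
	q := NewBoundedQueue(8)

	var wg sync.WaitGroup
	for p := 0; p < producers; p++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < perProducer; i++ {
				q.Put(1)
			}
		}()
	}

	total := 0
	done := make(chan struct{})
	go func() {
		for i := 0; i < producers*perProducer; i++ {
			total += q.Get()
		}
		close(done)
	}()

	wg.Wait()
	<-done
	fmt.Println("consumed:", total)
}
```

With a single Cond, each Signal could wake a goroutine on the "wrong side" (a producer when the queue is full, say), which is exactly the deadlock described above; splitting into two Conds makes each Signal wake only the side that can make progress.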

Go's sync.Cond has a bit of a funny story in itself. There is no example for it in the Go documentation. When someone suggested adding an example, they were told that sync.Cond is tricky and therefore shouldn't be used, and that adding an example would "encourage" people to use it. So it becomes a self-fulfilling prophecy that people misuse it. At one point there was even a movement to remove sync.Cond from Go. It seems a little odd to me, since condition variables are a standard concurrency concept. It seems to me a few examples would prevent almost all the misuse.
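For what it's worth, the kind of example I'd want the docs to show is short. A minimal sketch (names are my own) of the canonical wait-in-a-loop pattern:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var mu sync.Mutex
	cond := sync.NewCond(&mu)
	ready := false

	done := make(chan struct{})
	go func() {
		mu.Lock()
		for !ready { // always re-check the condition in a loop
			cond.Wait() // releases mu while waiting, re-acquires on wake
		}
		mu.Unlock()
		fmt.Println("worker: condition met")
		close(done)
	}()

	mu.Lock()
	ready = true
	mu.Unlock()
	cond.Signal() // wake the waiting goroutine (if it's waiting yet)

	<-done
}
```

The two things people get wrong, checking the condition with an if instead of a for, and signaling without holding (or having held) the lock around the state change, are both visible in a dozen lines.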

Here's the priority queue code if you're interested.