Friday, October 29, 2010
After my last post I was thinking more and more about switching from single pass compiling to compiling via an Abstract Syntax Tree (AST).
I wasn't really intending to work on this, but I had a bit of time and I thought I'd just investigate it. Of course, then I got sucked into it and 10 days later I have the changes done. I'm pretty happy with the results. A big part of getting sucked into it was that it became even more obvious that there were a lot of advantages.
Previously I had lexer, parser, and code generator, with the parser directly driving the code generator. Now I have lexer, parser, AST generator, and code generator.
I'm very glad that from the beginning I had separated the parser from the actions. The parser calls methods on an object that implements an interface. This seems obvious, but too many systems like YACC/Bison and ANTLR encourage you to mix the actions in with the grammar rules. Doing that does make things easier at first, but it is not very flexible. Because I'd kept them separate, I could implement AST generation without touching the parser. Although I haven't used it, I understand Tatoo also splits the parser and the actions. (Note: Although I initially tried using ANTLR for jSuneido, the current parser, like the cSuneido one, is a hand-written recursive descent parser.)
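Roughly, the idea looks like this (a minimal sketch with made-up names, not jSuneido's actual interface):
interface ParseBuilder<T> {
    T constant(String value);
    T identifier(String name);
    T binaryOp(String op, T left, T right);
}

// one implementation builds a tree (just a string here for illustration);
// direct code generation or AST construction are simply different
// implementations of the same interface
class TreeBuilder implements ParseBuilder<String> {
    public String constant(String value) { return value; }
    public String identifier(String name) { return name; }
    public String binaryOp(String op, String left, String right) {
        return "(" + op + " " + left + " " + right + ")";
    }
}

// parsing "x + 1" ends up making calls like:
// builder.binaryOp("+", builder.identifier("x"), builder.constant("1"))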
While I was at it, I also split off the parts of the code generator that interface with ASM to generate the actual byte code. The code generator is still JVM byte code specific, but at a slightly higher level. And now it would be easy to use an alternative way to generate byte code, instead of ASM.
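Something along these lines, just to show the shape of the split (made-up names):
// the code generator calls a small emitter interface instead of ASM's
// MethodVisitor directly, so another byte code library could be swapped in
interface ByteCodeEmitter {
    void pushConstant(Object value);
    void loadLocal(int index);
    void storeLocal(int index);
    void call(String className, String methodName, String descriptor);
    void returnValue();
}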
When the parser was directly driving the code generation, I needed quite a few intermediate actions that were triggered part way through parsing a grammar rule. These were no longer needed and removing them cleaned up the parser quite a bit.
Having the entire AST available for code generation is a lot simpler than trying to generate code as you parse. It removed a lot of complexity from the code generation (including removing all the clever deferred processing with a simulated stack). And it let me implement a number of improvements that weren't feasible with the single pass approach.
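Constant folding is a good illustration of the kind of thing that's awkward in a single pass but trivial on a tree (made-up node classes, not jSuneido's actual AST):
abstract class Expr {
}

class Constant extends Expr {
    final int value;
    Constant(int value) { this.value = value; }
}

class Add extends Expr {
    final Expr left, right;
    Add(Expr left, Expr right) { this.left = left; this.right = right; }
}

class Folder {
    // reduce things like 1 + 2 to 3 before any byte code is generated -
    // easy here because both operands are already in hand
    static Expr fold(Expr e) {
        if (!(e instanceof Add))
            return e;
        Expr left = fold(((Add) e).left);
        Expr right = fold(((Add) e).right);
        if (left instanceof Constant && right instanceof Constant)
            return new Constant(((Constant) left).value + ((Constant) right).value);
        return new Add(left, right);
    }
}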
Despite adding a new stage to the compile process, because it was so much simpler I'm pretty sure I actually ended up with less code than before. That's always a nice result!
The only downside is that compiling may be slightly slower, although I suspect the difference will be minimal. Code is compiled on the fly, but the compiled code is kept in memory, so it only really affects startup, which is not that critical for a long-running server.
In retrospect, I should have taken the AST approach with jSuneido right from the start. But at the time, I was trying to focus on porting the C++ code, and fighting the constant urge to redesign everything.
Monday, October 25, 2010
Continuous Deployment
I listened to a podcast recently about a company doing continuous deployment. I was interested because my company does continuous deployment, and not just of a web application, but of a desktop app. From what I can tell, this is still quite rare.
They were asked how frequently they deploy, and they said every two weeks. I was a little surprised. Obviously it's a long way from yearly releases, but to me, every two weeks is not exactly "continuous".
Part of the issue is that "continuous" often gets mixed up with "automated". For example, continuous build or integration systems are as much about the automation as they are about being continuous. But the primary goal is the "continuous" part; the automation is a secondary requirement needed to satisfy it.
Of course, "continuous" seldom actually means "continuous". It usually means "frequent". Continuous builds might happen nightly, or might be triggered by commits to version control.
Our continuous deployment means daily, Monday to Thursday nights. We don't deploy Friday in case there are problems and we don't have anyone working on the weekends.
Our big nightmare is that something goes wrong with an update and all of our roughly 500 installations would be down. Of course, we have a big test suite that has to pass, but tests are never perfect. To reduce the risk we deploy to about ten "beta" sites first; everyone else gets the update two days later. Having ten beta sites down is something we can handle, and they know they're beta sites, so they're a little more understanding. In practice, we've had very few major problems.
We have a single central version control system (similar to Subversion). Anything committed to version control automatically gets deployed. The problem is that when we're working on a bigger or riskier change, we can't send it to version control until it's finished. But not committing regularly leads to conflicts and merge issues, and it also means we're only tracking the changes at a coarse granularity and can't revert to intermediate steps. Plus, version control is the main way we share code; if changes haven't been sent to version control, it's awkward for other people to get access to them for review or testing. I think the solution would be a distributed version control system like Git or Mercurial, where we can have multiple repositories.
I'm looking forward to reading Continuous Delivery, although I think its focus is on web applications.
Saturday, October 23, 2010
What, Why, How
Say why, not what, in version control comments and in code comments. The code itself tells you "what"; the useful extra information is "why". Don't say "added a new method" or "changed the xyz method" - that's obvious from the code. Do say "changes for the xyz feature" or "fixing bug 1023".
Say what, not how, in names, i.e. interface and variable names. The users shouldn't need to know "how"; they just care about "what". You want to be able to change the "how" (the implementation) without changing the users. Don't call a variable "namesLinkedList", just call it "namesList". It might currently be a linked list, but later you might want to implement it with a different kind of list.
Sunday, October 17, 2010
jSuneido Compiler Overhaul
Implementing a language to run on the JVM means deciding how to "map" the features of the language onto the features of the JVM.
When I started writing jSuneido it seemed like the obvious way to compile a Suneido class was as a Java class, so that's the direction I took.
The problem is, Suneido doesn't just have classes, it also has standalone functions (outside any class). So I made these Java classes with a single method.
Suneido also has blocks (closures). So I compiled these as additional methods inside the class currently being compiled.
As I gradually implemented Suneido's features, this approach got more and more complex and ugly. It all worked, but I wasn't very happy with it. And it became quite fragile; any modification was likely to break things.
So I decided to overhaul the code and take a different approach - compiling each class method, standalone function, or block as a separate Java class with a single method. I just finished this overhaul.
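The shape of the result is roughly this (made-up names, not the actual generated code):
abstract class Callable {
    abstract Object call(Object... args);
}

// e.g. the standalone Suneido function: function (x) { return x + 1 }
// compiles to its own class with a single method:
class AddOne extends Callable {
    Object call(Object... args) {
        return (Integer) args[0] + 1;
    }
}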
Of course, the end result is never as simple and clean as you envision when you start. It's definitely better, but there are always awkward corners.
Unfortunately, more and more of the improvements I want to make are running into the limitations of single-pass compiling, an approach I carried over from cSuneido. I have a feeling that sooner or later I am going to have to bite the bullet and switch to compiling to an abstract syntax tree (AST) first, and then generate JVM byte code from it. That will open the way for a lot of other optimizations.
Friday, October 15, 2010
Java + Guava + Windows = Glitches
Some of my jSuneido tests started failing, some of them intermittently, but only on Windows. There were two problems, both related to deleting files.
The first was that deleting a directory in the tear down was failing every time. The test created the directory, so I figured it probably wasn't a permissions issue. I could delete the directory from Windows without any problems. The test ran fine in cSuneido.
I copied the Guava methods I was calling into my own class and added debugging. I tracked the problem down to Guava's Files.deleteDirectoryContents, which is called by Files.deleteRecursively. It has the following:
// Symbolic links will have different canonical and absolute paths
if (!directory.getCanonicalPath().equals(directory.getAbsolutePath())) {
    return;
}
The problem was that getCanonicalPath and getAbsolutePath were returning slightly different values, even though there was no symbolic link involved - one had "jsuneido" and the other had "jSuneido". So the directory contents weren't deleted, and therefore deleting the directory failed. From Windows Explorer and from the command line it was only "jsuneido". I even renamed the directory and renamed it back. I don't know where the upper case version was coming from; it could have been named that way sometime in the past. I suspect the glitch comes from remnants of the old short and long filename handling in Windows, perhaps in combination with the way Java implements these methods on Windows.
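A few lines of Java are enough to see the mismatch (made-up path):
File dir = new File("jsuneido/tmp");  // made-up path
System.out.println(dir.getAbsolutePath());
System.out.println(dir.getCanonicalPath());
// on the affected machine, one printed ...jsuneido... and the other ...jSuneido...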
I ended up leaving the code copied into my own class with the problem lines removed. Not an ideal solution but I'm not sure what else to do.
My other thought on looking at this Guava code was that if that test were extracted into a separate method called something like isSymbolicLink, the code would be clearer and they wouldn't need the comment. And that might make it slightly more likely that someone would try to come up with a better implementation.
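Something like this (not actual Guava code):
private static boolean isSymbolicLink(File file) throws IOException {
    // symbolic links will have different canonical and absolute paths
    return !file.getCanonicalPath().equals(file.getAbsolutePath());
}

// and the caller no longer needs the comment:
// if (isSymbolicLink(directory))
//     return;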
The other problem was that new RandomAccessFile was failing intermittently when it followed file.delete. My guess is that Windows does some of the file deletion asynchronously and it doesn't always finish in time, so the file creation fails because the file still exists. The workaround was to call file.createNewFile before new RandomAccessFile. I'm not sure why this solves the problem; you'd think file.createNewFile would have the same issue. Maybe it calls some Windows API function that waits for pending deletes to finish. Again, not an ideal fix, but the best I could come up with.
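The workaround looks roughly like this (made-up file name):
File file = new File("suneido.tmp");  // made-up file name
file.delete();
file.createNewFile();  // without this, the next line failed intermittently on Windows
RandomAccessFile raf = new RandomAccessFile(file, "rw");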
Neither of these problems showed up on OS X. For the most part Java's write-once-run-anywhere has held true, but there are always leaky abstractions.