Started out with XO Laptop Hacks. I just got my OLPC XO not that long ago and I haven't had time to do much with it so it was interesting to hear more.
Next I went to the presentation on CouchDB. It's an interesting project: written in Erlang, with JSON over HTTP as its primary interface, started by one guy on his own, but now IBM pays his salary.
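To give a flavour of that JSON-over-HTTP interface: the core operations are just PUT and GET on URLs with JSON bodies. Here's a small Python sketch of the request shapes involved - the helper names are mine, invented for illustration, not part of any CouchDB library:

```python
import json

# Sketch of the requests behind CouchDB's core operations.
# CouchDB itself just sees an HTTP method, a URL path, and a JSON body;
# these made-up helpers only build that triple.

def create_db_request(db):
    # PUT /dbname creates a database
    return ("PUT", "/" + db, None)

def put_doc_request(db, doc_id, doc):
    # PUT /dbname/docid stores (or updates) a JSON document
    return ("PUT", "/%s/%s" % (db, doc_id), json.dumps(doc))

def get_doc_request(db, doc_id):
    # GET /dbname/docid fetches the document back as JSON
    return ("GET", "/%s/%s" % (db, doc_id), None)

method, path, body = put_doc_request("notes", "etech2008", {"topic": "CouchDB"})
```

Point any HTTP client at a CouchDB server and requests like these are essentially the whole interface - no driver, no binary protocol.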
I couldn't miss Jeff Jonas's talk on behind the scenes in Las Vegas - he's an entertaining presenter. His talk didn't have as much software content as last year, but it was still good.
Synthetic Neurobiology was largely on the theme of "body hacking". One of their more clever hacks was to introduce genes into neurons (via a virus) that make the cell sensitive to light. You can then trigger the neurons to fire (or suppress them from firing) using light. Wow.
The day (and the conference) ended with more keynotes. First by Alex Steffen of WorldChanging (whom I was disappointed to see carrying a disposable plastic bottle of water!). Then a presentation on Twine - yet another social network app, but at least with some interesting semantic web aspects. And a brief talk on Digital Democracy about the use of the web (and social networks, of course!) in politics.
And the final talk was by Timothy Ferriss of The 4-Hour Workweek. I'd seen Tim around during the conference, attending talks. My first impression was that he was short - a bit like seeing a famous actor and finding out how short they are. He also blended in pretty well carrying his backpack around, although without the ever-present laptop of most people. But when he gave his talk he definitely had a bigger presence and charisma. Unlike every other speaker, he did not have any PowerPoint slides - he just talked. If you've read his book and blog it was nothing new. Despite being a wing nut in many ways, I still find him "inspiring", if that's the right word. The gist of his message is that you don't have to be helplessly, overwhelmingly "busy" - you can re-engineer your life to do what you want to do.
Friday, March 07, 2008
Wednesday, March 05, 2008
Day 3 at ETech 2008
When I saw the first speaker of the day was some old guy instead of the scheduled Kathy Sierra, I was a little disappointed. The "old guy" turned out to be John McCarthy, the inventor of Lisp and "artificial intelligence". To me it was a pleasant surprise to see someone like this along with all the "tech kiddies". Unfortunately, his talk moved really slowly and they had to cut him off. I felt bad for him.
Kathy Sierra did speak later. She talked about how we all want to be "good" at something and what it takes to do that. Surprisingly, natural ability counts less than sheer focus and concentration. Of course, that requires motivation and it's less clear how to get that. Lately I've been thinking about when I developed Suneido and my focus (perhaps fanaticism would be more accurate). I seldom achieve that kind of focus anymore and I miss it. I just haven't quite figured out how to get it back. Then again, do I want to get it back? Spending the majority of your waking moments working on something isn't exactly a balanced life, no matter how challenging/rewarding/addictive it is.
Looking at today's sessions it seemed like there wasn't that much that was attractive to me. But sometimes it's a good thing to be forced to go to talks that I might otherwise have skipped. After all, the whole point of the conference (for me) is to get exposed to new ideas.
Brain Imaging and I Sing the Body Electric surprisingly turned out to have a lot of common ground. There's a definite feeling around that hacking humans is the next frontier. I've never been keen on the idea of "hacking" myself, maybe because I know how easy it is to screw up complex systems.
The speaker for Hackers Built My Motorcycle started out by saying his talk had nothing to do with the title, and sure enough he never mentioned motorcycles. Maybe that was an example of "hacking" the conference schedule. Nevertheless, it was a fun talk. The problem with talks about security, like talks about the environment, is that they are somewhat depressing! He proceeded to explain how easy it is to hack into cell phones, web sites, house locks, RFID credit cards ... scary stuff.
The next talk, about technology in Cuba, had the potential to be quite interesting but the two speakers read a pre-written speech and that seldom works well. I've really found that a good presenter is worth going to regardless of the topic, and no matter how interesting the topic, a bad presenter will kill it.
My final session was on OpenCV, an open source computer vision library. I don't know much about this area but it was pretty amazing what's possible these days.
Tuesday, March 04, 2008
Day 2 at ETech 2008
Saul Griffith's talk on Energy Literacy was well done but pretty depressing. I can't dredge up much optimism that a) people will massively reduce their energy use, and b) we'll shut down much of our fossil fuel energy production and replace it via a massive construction of new green energy sources. I can't see it happening. People won't take such drastic action until there's a crisis. So it was further depressing to hear that the time lag on carbon reduction effects can be hundreds of years - i.e. by the time there's a crisis it'll be way too late to do anything. Of course, it's a contentious subject. Someone at my lunch table argued that technology would save us. Hmmm ... maybe it'll save us humans (we're good at that) but what about the rest of our ecosystem? We don't have nearly as good a track record there.
The presentation by MegaPhone on Collaborative Gaming in Public Spaces was entertaining. The two presenters were the founders and looked to be still in their teens! They develop ways for people to interact with public video displays, primarily by cell phone. They ended up in the debugger on the big screen a few times, but it seemed to work in the end. I'm not sure how they got to be keynote speakers but it was refreshingly "innocent".
The session on the Future of Mind Hacks was pretty interesting. You always wonder if you've picked the right session, but when Timothy Ferriss is a couple of rows ahead and Tim O'Reilly is a couple of rows behind I figure I've picked a good one.
Two Microsoft employees were sitting beside me, one had a MacBook, the other an iPhone (and worked in the mobile division). Hmmm ... is that getting to know the competition?
After the keynotes I went to Tap is the New Click on gestural interfaces (e.g. the iPhone). It was ok, but not great, mostly just a dry overview.
At lunch I counted 10 tables with a single person at them (one of them was mine). Good to know I'm not the only anti-social geek. But most geeks must be more social electronically than me. One of the big topics is social applications. Frankly, I don't have enough friends to need an application to keep track of them. Then again, I don't even have a cell phone so I'm obviously abnormal.
After lunch I went to Green Nano by an HP researcher. I've been excited by nanotech since Eric Drexler's Engines of Creation so it was nice to hear about progress. But the talk seemed a little dry.
For some variety I next went to a talk on Digital Activism by Ethan Zuckerman. It turned out to be pretty interesting. It's good to hear about some "positive" uses of technology.
Then DIY Drones by Chris Anderson (of Long Tail fame) - fun.
And finally, Personal Productivity by Gina Trapani of LifeHacker, another good talk. (Of course, Timothy Ferriss was at this one as well.)
All in all, a pretty good day.
Monday, March 03, 2008
Day 1 at ETech 2008
The first day at ETech is tutorials.
First up was Live, Vast and Deep: Web-native Information Visualization by Tom Carden of Stamen Design. Although it was pretty high level and didn't get into too many details, there were lots of thought-provoking examples and links to things to investigate - and that's what I'm looking for.
It was a tough choice between this and Storyboarding for Nonfiction by Kathy Sierra (or Creating Passionate Users). I decided that I might get more useful ideas (for me) from visualization. But I bet Kathy's talk was good too.
In the afternoon I went to Debugging Hacks: What They Never Taught You About Solving Hard Bugs by Marc Hedlund of Wesabe. Nothing really new but a good talk on how to solve hard bugs. One point that resonated with me: "the goal is not to suppress the symptoms, it's to understand the problem". I have this discussion with my own programmers on occasion - removing an assert is not "solving" the problem! And conversely, adding the assert did not "cause" the problem.
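To make the assert point concrete, here's a contrived Python sketch (my example, not from the talk):

```python
# average() carries an assert documenting a precondition. When the assert
# fires, deleting it only suppresses the symptom - calling average([]) would
# then just blow up with a more obscure ZeroDivisionError instead.

def average(xs):
    assert len(xs) > 0, "average of empty list is undefined"
    return sum(xs) / len(xs)

# The real fix is to decide what an empty input should mean and handle it
# explicitly, not to remove the assert that reported it:

def average_fixed(xs):
    if not xs:
        return 0.0  # explicit decision for the empty case
    return sum(xs) / len(xs)
```

The assert didn't cause the problem; it reported it. Same goes in reverse: adding the assert didn't break anything, it just made an existing bug visible sooner.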
Saturday, March 01, 2008
San Diego ETech
I'll be in San Diego this week for ETech.
Drop me an email if you're in the area and interested in meeting up.
Wednesday, February 27, 2008
Keep it Simple
37signals response to an article about them in Wired.
I don't necessarily agree with all of 37signal's philosophy but it's nice to see someone fighting against complexity and for simplicity.
Tuesday, February 26, 2008
The Future is Free?
An interesting article by Chris Anderson (also known for the "long tail") in Wired on how and why more and more things are "free".
Sunday, February 24, 2008
Skype, Headsets, and Bluetooth
I'm going to be spending some time away from home without Shelley so I thought I should set up Skype so we could talk. It was no problem to download and install the software on my MacBook and on Shelley's Windows XP PC.
I bought a USB headset for the PC, but I wanted something smaller for the MacBook, and since it has Bluetooth I figured I could get a Bluetooth headset. I couldn't find a computer-specific Bluetooth headset, only cell phone ones. But would they work with the Mac? I did some research on the web and read about lots of problems, but mostly old issues that were supposedly fixed with Leopard. I didn't even bother trying to ask a clerk at Staples which Bluetooth headsets worked with Skype on OS X on a MacBook. Although I guess if you were lucky you might get some kid who was an expert on the issue.
I chose a Bluetrek Tattoo headset, more or less at random. I charged it up, and managed to pair it with the MacBook, but I couldn't get any sound in or out of it. I couldn't even tell if it was "on" or not. It was supposed to have a green light when it was switched on but I got no lights. I played around for a while but it just seemed to be dead. I took it back and picked a Motorola H800, again for no particular reason. This time it worked fine, no problems. I still don't know if the Tattoo is incompatible, or if I just got a dud.
So now I have Skype working. The sound quality of the "echo" test call wasn't great but it says it's going to the UK so that might be part of the reason.
Of course, what I paid for the two headsets would have paid for more than enough regular long distance phone calls, but what would be the fun in that!
Tuesday, February 19, 2008
Suneido Build Frustrations
I made a minor improvement to the Suneido source code this morning, ran make, no problems, built-in tests ran successfully.
But when I tried to use the new executable I got an obscure database error. What's going on?
I had built with MinGW so I switched to Visual C++. Exact same error!?
This was at home and my last builds had been at work but I can't see why that would matter.
Remove all the object files and build from scratch. No good, same problem.
Check version control to see what I'd changed lately - very little, and nothing that seemed related to the error.
The error is from the database btree code. Maybe the database is corrupted. But all the exe's, old and new, say the database is fine.
Try creating a new database with just the standard library. Now I get a different error related to the Scintilla source code editing component.
Build a MinGW debug version and run it under GDB. That gives me a clue to what query is leading to the error. It had appeared to be outputting to the database, which seemed odd for start up, but it was actually building a temporary index for a query that it was reading. Although that query in the old working executable doesn't require a temporary index.
Turn on the query tracing at the start of the standard library Init to see where the query is coming from. It's loading the plugins.
Aha, that's why the database with only stdlib gets a different error - because it only has to look for plugins in a single library and therefore no temporary index. Yeah, if I disable the plugin loading then I get the other error.
Two unresolved questions:
- why the temporary index in the new builds but not the older build?
- why the later UI error?
And how are these two questions related? (assuming a single cause) It seems like it would have to be something low level, like the garbage collector, to affect such unrelated areas.
Of course, it could be something like an uninitialized variable that happens to get a different value on my home system. But it seems too consistent for that. And something like that would likely have been encountered before now.
What is different between my work and home machines? I call the office and have them install LogMeIn on my work computer so I can access it. I try building with MinGW and it works fine. The exe is a different size though. Something is different.
Transfer the home exe to work to see if it's the environment. Same error message, so at least it's not because of my Vista on Parallels on Mac setup at home.
md5sum the files at work and at home and compare. The only real difference is the change I made this morning.
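That kind of tree comparison can be sketched in a few lines of Python (the directory layout here is hypothetical):

```python
import hashlib
import os

# Hash every file under a directory tree, keyed by relative path.
def tree_hashes(root):
    hashes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                hashes[rel] = hashlib.md5(f.read()).hexdigest()
    return hashes

# Files present in both trees whose contents differ.
def changed_files(a, b):
    return sorted(k for k in a.keys() & b.keys() if a[k] != b[k])
```

With the two hash dicts in hand, diffing them points straight at whichever files actually differ between the machines.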
But ... that couldn't be the problem could it? Revert that file and build.
Oh no, this is really embarrassing! It works. The problem was the most obvious first place I should have looked - the change I just made.
I'm really tempted not to post this - it just makes me look stupid.
Why did I go off on a wild goose chase? I guess because the error seemed to be so totally unrelated to my change, and I hadn't built for a while so it seemed likely that there could be a problem. And the change I did seemed trivial so I didn't suspect it. And because it seemed trivial I didn't write any tests. (The bug was also obvious, once I looked for it.)
Ouch. There goes a few hours down the drain. Maybe I learned a lesson, but sadly it's one I should have learned a long time ago.
Monday, February 18, 2008
ZENN and the art of slow progress
One of my recurring complaints is gadgets that are only available in the US. But at least it makes a certain amount of sense when they're made in the US. But here's an example where it's made in Canada and it's still only available in the US!
Saturday, February 16, 2008
Creating Passionate Users Video
From the O'Reilly Tools of Change for Publishing conference. I like Kathy Sierra's Head First computer books and was a fan of her blog.
Wednesday, February 13, 2008
VirtualBox
In Tim Bray's ongoing blog he mentions Sun's acquisition of Innotek, the developers of VirtualBox - virtualization software for Windows, Linux, and OS X. I haven't tried it, but it looks interesting - and free for personal use.
Saturday, February 09, 2008
Catching Up
I've been traveling in Ecuador for the last month and although I managed to check my email and update my personal blog periodically I haven't been keeping up on news in the computer world.
One big acquisition was Sun buying MySQL. Sun seems to have done ok with Open Office, hopefully things will work out ok with MySQL.
And Microsoft is trying to buy Yahoo. I find this a little scary. I wonder what would happen to things like Flickr. It seems unlikely that Microsoft could manage to stay "hands off".
I knew Apple would be announcing new stuff while I was gone and I kept meaning to check it out. It didn't turn out to be anything too exciting. A cool new smaller laptop, but since I bought a MacBook not that long ago, that makes me annoyed more than anything! They also announced movie rentals which is interesting. It makes an Apple TV more attractive.
Of course, then I found out movie rentals are US only. Argh! I guess I shouldn't complain. I have no desire to move to the US and Canada has better access to new technology than many/most countries. But this "US only" thing seems to be getting more common. I just heard that Bug Labs' new product (that I've been waiting for) is also initially US only.
Sometimes I enjoy the relentless rush of new hi-tech products, other times it's annoying. I buy a MacBook, they announce the thinner, lighter MacBook Air. I buy a Pentax K10, they announce the K20. In most cases I don't need the new product, the old one doesn't suddenly stop working. But it's hard to stop yourself from thinking "if I'd just waited a few months". But you know that doesn't work, the treadmill doesn't stop. Of course, when you're waiting for a new product or feature then it seems to take forever to arrive.
My One Laptop Per Child OLPC XO arrived while I was gone.


It took me a while just to figure out how to open it. I was trying to use the latches on the bottom but they're for the battery. You have to open the antennas to unlatch it. It powered up ok but the user interface was pretty cryptic at first. I kept trying to click by tapping on the touch pad (like I'm used to on other laptops) but that doesn't work. My next problem was that the left "mouse" button is labeled with an "X" which I automatically associate with "close". So I kept trying to use the right button which is labeled with "O". I should have ignored the labels - the left and right buttons work like other systems. Once I got past these roadblocks it got easier.
The keyboard is too small to touch type on with adult hands but I was expecting that. The screen is nice and is quite readable in direct light with the backlight turned off - this saves power and also lets you use it in bright sunlight. I managed to connect to my home wireless network without too much trouble (had to pick hex and shared key) and was able to browse the internet and check my email. The built in camera seems to work fine.
The only real problem I've run into is that I get two CRC errors when I boot up. I'm guessing they're from the hard drive. So far they don't seem to be causing any trouble.
I don't have a particular use in mind for the XO, it's just interesting to see what they've come up with after hearing about the project for so long.
Also waiting for me were the new books by Christopher Alexander (of design patterns fame). Now I just have to find time to read four hefty volumes. Lots of pictures at least :-)
Wednesday, January 02, 2008
A Programming Hierarchy of Needs
After I wrote my last post, There's More to Software Design, I was thinking about why so many programmers don't concern themselves with issues like style, grace, and elegance.
It occurred to me that part of the explanation might be something like Maslow's hierarchy of needs. Maslow theorized that people have a hierarchy of needs and that higher levels only come into play after lower levels are satisfied. The lower levels are things like food and shelter; the higher levels are things like morality and creativity. For example (oversimplifying), starving people don't worry as much about morality.
The analogy is probably obvious. The lower level of programming needs would be to get the code to [appear to] work. The inserted "appear to" is important. Without it, "getting the code to work" could be everything up to and including formal verification. Whereas I'm talking about most novice programmers' concept of "getting it working", which is to pass a few random tests.
Things like automated tests, test driven development, and refactoring would be higher up the hierarchy. (Or the equivalent from your methodology of choice.) And things like style, grace, and elegance would be near the top.
And the reason most programmers don't get to the top levels is that they're still struggling with the lower levels. That's not really their fault, either. Programming is unquestionably hard. Just to get something to work in the weak sense can be more than enough challenge.
Note: Just because I think about the higher levels doesn't mean I'm more "advanced". I'm still struggling along at the lower levels with my own code. I just like to dream :-)
Monday, December 31, 2007
There's More to Software Design
There's more to software design than just the "mechanical" aspects.
This article by Mark Hamburg about Lightroom's Goals should give you an idea of what I'm talking about. (And I think they've been fairly successful with this in Lightroom.)
I struggle with this with my company's vertical application, partly because it's hard to get people to see that issues like style, grace, and elegance are relevant to a business application. I think they are. I don't mean it has to be "pretty" or "artsy". But it should look good, flow well, and be smooth rather than awkward. Part of this is definitely the mechanical aspects but part of it is more subtle things. I'm reminded of "quality" in Zen and the Art of Motorcycle Maintenance.
Saturday, December 29, 2007
Still Learning
Even though I created Suneido and have used it pretty heavily for years, I still find myself learning better ways to apply it. (I think that's part of the reason I like programming.)
Suneido uses the open source Scintilla editor. ScintillaControl is the "wrapper" that interfaces Scintilla to Suneido's user interface framework. I needed to add a new method to it today:
LineEndExtend()
{ .SendMessage(SCI.LINEENDEXTEND) }
I noticed I had a lot of these methods and I started wondering whether there wasn't a better alternative to adding so many simple repetitious methods. Ruby, and especially Rails, which I've been involved with on another project, make heavy use of "catching" calls to missing methods and implementing them.
I was able to replace all these simple methods with:
Default(method)
	{
	f = method.Upper()
	if not SCI.Member?(f)
		throw "method not found: " $ method
	return .SendMessage(SCI[f])
	}
"Default" is Suneido's way to "catch" calls to missing methods.
Then I realized I could generalize it to handle methods with arguments:
Default(@args)
	{
	f = args[0].Upper()
	if not SCI.Member?(f)
		throw "method not found: " $ args[0]
	args[0] = SCI[f]
	return .SendMessage(@args)
	}
"@args" is used to capture all the arguments and then pass them again. args[0] will be the method name.
This allowed me to remove a bunch of methods and I won't have to add any more in the future.
In addition, I noticed I had a lot of calls like .SendMessage(SCI.GETLINECOUNT) within the wrapper code. These could now be simplified to be like: .GetLineCount()
This would all fall into the category of "refactoring" since I'm improving the code without changing its behavior. (Strictly speaking, the behavior has changed slightly, but not in a way that should affect existing code unless someone is doing something unusual.)
I guess you'd call this refactoring something like: "Replace explicit methods with catching missing method calls."
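Suneido's Default is the same idea as Ruby's method_missing or Python's __getattr__. For readers more familiar with Python, here's a rough sketch of the same refactoring; the class shape, message names, and numeric ids below are invented for illustration, not Suneido's or Scintilla's actual API:

```python
# Sketch of the "replace explicit methods with catching missing method calls"
# refactoring, using Python's __getattr__ in place of Suneido's Default.
# The SCI table and its numeric message ids are hypothetical stand-ins.

SCI = {"GETLINECOUNT": 2154, "LINEENDEXTEND": 2315}

class ScintillaControl:
    def send_message(self, msg, *args):
        # Stand-in for the real SendMessage call to the Scintilla window.
        return ("sent", msg) + args

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails -- the analog of
        # Suneido's Default or Ruby's method_missing.
        key = name.upper()
        if key not in SCI:
            raise AttributeError("method not found: " + name)
        def method(*args):
            return self.send_message(SCI[key], *args)
        return method

sc = ScintillaControl()
sc.GetLineCount()    # forwards to send_message(SCI["GETLINECOUNT"])
sc.LineEndExtend()   # no per-message wrapper method needed
```

The trade-off is the same as in Suneido: you lose the explicit method list (and any IDE completion based on it) in exchange for never having to write or maintain the repetitive wrappers.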
Tuesday, December 25, 2007
More on Scratch
A few comments on Scratch:
I'd really like to be able to browse the code for the projects on the web site. (Unless there's some way I missed.) You can download the projects and presumably see the code that way but that's a bunch more steps and not very good for exploring. Since there isn't much documentation, it would be helpful to quickly look at other people's code. It doesn't seem like this would be hard to add.
Apart from the convenience, I think this is important for deeper reasons. Programming, and thinking "like a programmer" are as much or more about reading code as writing it. Seeing other people's results can give you ideas and inspire, but seeing how they did it is going to be a huge benefit too.
A suggestion for Scratch itself is to get rid of the traditional open/save file management. Alan Cooper in About Face 3 makes a good case for why open/save sucks. I never have to "open" or "save" in Lightroom. Gmail and Blogger save automatically. I don't have to pick/navigate to a directory in Google Docs. In a product for kids especially, you could avoid a bunch of issues by saving automatically to a standard location.
Finally, it's too bad Scratch is so rigid with respect to screen/window sizes. I can understand why they did it that way - it's a lot simpler than trying to use vector or higher resolution images and make things resizable. And for educational purposes maybe it doesn't matter. (Although I notice a number of people wanting to run it at 800x600.) Nonetheless, it was a bit disappointing when I ran my program full screen for the first time and got a jagged grainy image (as a result of simple resizing of the low resolution stage). Maybe I'm just spoiled by things like Mac OS X's resizable icons.
Monday, December 24, 2007
Something Fun for Christmas
I recently discovered Scratch, a programming system for kids, something like Logo.
I decided since it was Christmas I should do something fun and try it out. Here is my first "program":
Sunday, December 23, 2007
Ubuntu Networking Resolved
This really shouldn't have taken so long. It wasn't even that difficult. But when you only spend a few minutes on something and only every few days or weeks, what do you expect! And the issues with Parallels and Leopard didn't help.
As Larry suggested, the "expert" solution was to edit /etc/network/interfaces and change:
#iface eth0 inet dhcp
to:
iface eth0 inet dhcp
i.e. uncomment it.
As he also suggested, there is a way to do this from the GUI. When I went to System > Administration > Network I saw this:

[Notice the title bar says "Network Settings" although the menu option was just "Network". I always give my programmers heck for that kind of inconsistency.]
"Roaming mode" ??? I selected Wired Connection, clicked on properties, and changed it to:
[Yet more inconsistencies - I selected "Wired Connection" but I got "eth0".]
i.e. un-checked roaming mode and picked DHCP.
This has a similar effect to the "expert" method, adding a line to /etc/network/interface:
iface eth0 inet dhcp
I can see roaming mode might be a good choice for laptops, but it seems odd that it installed this way. Maybe something in Parallels makes Ubuntu think it doesn't have a regular wired connection. It would be nice if the network icon options at the top of the screen included an option to "save" your choice of wired networking (or just did it automatically).
Now when I reboot I still have a network connection. The tooltip on the network icon now says "Manual network configuration" which doesn't seem quite right to me - DHCP is pretty automatic. But I guess it's more "manual" than "roaming mode" (whatever that is).
I feel a little stupid at not having sorted this out myself right from the start but you can't win 'em all, I guess. Thanks Larry!
Leopard Falters
I spoke too soon about no problems with Leopard. I forgot one major part of my setup - my Epson R1800 wide format photo printer.
I went to print a photo for a Christmas present and found ... no printer. Installing Leopard had silently removed my Epson printer driver. (The CUPS + Gutenprint driver was still there, but I only use it to handle printing from Windows under Parallels.)
I can see a driver not being compatible with a new version of an operating system, but to just silently remove it seems pretty lame. Ideally it would warn you at the start of the install so you had a chance to abort the upgrade if you wanted. At the least it could notify you that it had removed your printer!
Luckily, I had waited long enough to upgrade that Epson had released new drivers. (They were released on Dec. 18 - if I had upgraded a week earlier I'd have been screwed.)
All's well that ends well - my printer is working again and I got my Christmas present done :-)
Saturday, December 22, 2007
OpenOffice.org2GoogleDocs
An Open Office extension to transfer documents to and from Google Docs.
I haven't tried it yet, but it sounds useful.
Thursday, December 20, 2007
A Successful Leap for Leopard
I upgraded my MacBook to Leopard a while ago but I waited to upgrade my main MacMini.
With Leopard updates for my main apps (Parallels and Lightroom) I decided it was time to take the plunge. The upgrade went smoothly, although it seemed pretty slow - several hours. I'm not sure why it takes that long.
As a safety precaution I used SuperDuper to backup each machine before upgrading.
So far I haven't had any major problems. The first time I started Parallels I got the following error:

I found a blog post which said the MacFUSE included in Parallels is old and suggested installing the latest MacFuse. This seemed to do the trick, but:
- the error message seems backwards - the operating system was new, MacFUSE was old
- why didn't the Parallels update for Leopard include the required new version of MacFUSE?
- why did I have to get the solution from some user instead of from Parallels? even if the user community discovered the solution, wouldn't it make sense for Parallels to post it? (in fairness, maybe they have, but I didn't find it if they did)
Now that Spotlight with Leopard lets you run the top application match by hitting enter, I was able to uninstall Google Desktop. (Nothing against Google Desktop, I just prefer to keep things simple if I can) (see my previous post)
Leopard also seems to have solved the issue of automatically mounting network drives (see my previous post), so I was able to remove the login script I had created to do this, which was nice because it took a long time (why?) and slowed down logging in.
I do have a new complaint about OS X. The Finder doesn't have an option to show hidden files. I can understand hiding them by default; so does Windows. But at least Windows gives you a way to show them. This came up when I went to copy the .svn folder from a backup. It is possible to change Finder via a command line, but on top of not being user friendly, this also requires restarting Finder. This seems like an obvious weak point. Is there someone at Apple who refuses to recognize that you might occasionally want to see these files?
I'm still having problems with accessing my 4gb USB thumb drive from Parallels. At first I blamed this on the U3 software that came installed on it, but I removed this and reformatted and I'm still having problems. It works fine on my Windows machine at work. My current guess is that Parallels doesn't quite handle 4gb USB drives. The strange part is that it works fine, but after a short time it will hang during copying from it, and Windows Explorer can no longer access it. My 1gb USB thumb drive continues to work fine.
Ok, back to Ubuntu on Parallels. I copied the virtual machine that I had created on my MacBook over to my MacMini and started it up. No display problem, but the same problem with the Parallels Tools CD image showing garbled file names. Strangely, I can't find anybody else with this particular problem. While flailing a bit more I rebooted the VM and, lo and behold, the Parallels Tools CD image had the right file names. I installed the tools and restarted X. It appears to work! One of the most noticeable features is being able to move the mouse seamlessly between OS X and the VM. I followed the same process on the MacBook and it also worked (although I could have sworn I tried rebooting before). So I appear to be back in business with Ubuntu (albeit starting from scratch with a new VM).
All in all, a successful day!
Saturday, December 15, 2007
CouchDB
I found CouchDB referenced from one of the posts about Amazon's SimpleDB since they are both apparently written in Erlang.
It's interesting that people are exploring some alternatives to relational databases.
It's also interesting that people are implementing "real" products in alternative languages like Erlang.
I recently picked up Programming Erlang but I haven't read it yet.
One thing that caught my eye looking through the CouchDB web site was a brief note that they compact the database while it's running, by copying to a new database. Currently Suneido requires you to occasionally shut down the database in order to compact it. I had always thought about "on line" compaction in terms of doing it "in place", but that gets tricky due to updating indexes to point to new locations. But if you build a new copy you don't have that problem. You could copy the bulk of the database in a single read-only transaction (like the current on-line backup does) and then pause activity briefly to get any updates done during the copy, and then switch over to the new database. Hmmm... actually doesn't sound too bad. (famous last words!)
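The copy-then-switch scheme described above can be sketched in a few lines. This is a minimal single-process illustration with invented names, not CouchDB's or Suneido's actual design; a real implementation would rebuild indexes during the copy and handle concurrency and durability far more carefully:

```python
# Minimal sketch of online compaction by copying to a new database:
# copy the bulk under a read-only snapshot, let updates continue (logged),
# then pause briefly, replay the updates, and switch to the new copy.
# All class and method names here are hypothetical.

import threading

class Database:
    def __init__(self):
        self.records = {}      # live data (would include dead space in a real DB)
        self.update_log = []   # updates made while a compaction copy is running
        self.lock = threading.Lock()

    def apply(self, key, value):
        with self.lock:
            self.records[key] = value
            self.update_log.append((key, value))

def compact(db):
    # Phase 1: snapshot the bulk of the data; concurrent updates keep going
    # and are captured in the update log.
    with db.lock:
        snapshot = dict(db.records)
        log_start = len(db.update_log)
    new_records = dict(snapshot)   # the (slow) bulk copy happens off-line

    # Phase 2: pause activity briefly, replay updates made during the copy,
    # then switch over to the compacted copy.
    with db.lock:
        for key, value in db.update_log[log_start:]:
            new_records[key] = value
        db.records = new_records
        db.update_log.clear()
```

The key property is that the expensive bulk copy happens without blocking writers; only the final catch-up-and-switch step needs exclusive access, and it is proportional to the updates made during the copy, not to the database size.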
Friday, December 14, 2007
Amazon SimpleDB
Amazon has announced a new service - SimpleDB
We are pretty happy with our use of Amazon's S3 (Simple Storage Service)
I've been curious to try Amazon EC2 (Elastic Compute Cloud) but I haven't found a good application yet.
One of the big limitations with EC2 is that it's not well suited to running database servers. I've been waiting for them to improve support for this, but instead (or at least, first) we get SimpleDB.
I wonder if someone will make Rails work with SimpleDB as the database? How would the performance compare?
Thursday, December 13, 2007
Finally!
Finally some good progress on the ACE version of the Suneido server. It's actually working well enough to run a client IDE from it, a major milestone. The last couple of problems were minor mistakes of mine. ACE and the Boehm GC seem to be working together.
Of course, this is just the start, now comes the "fun" part - actually making my code thread safe.
Saturday, December 01, 2007
ACE + GC Progress
I spent a few more frustrating hours thrashing around trying to build and link with ACE statically.
Finally, I decided to start from scratch. Strangely "make clean" didn't clean up (as I discovered when a make after make clean didn't recompile!). (Note: Don't run make clean from the top level ACE_wrappers directory - it takes forever recursing into all the examples and tests.)
I rebuilt and ... it worked! Somehow I had still been getting left over shared library stuff. Another requirement is to #define ACE_AS_STATIC_LIBS before including the ACE headers.
Boy, that shouldn't have been so hard! But I can't really blame anyone but myself :-)
But ... now Suneido crashes right away on startup, which seemed like a step backwards, not forwards!
I created a small test program that used ACE and GC. It crashed the same way. Eventually, after another few hours of flailing I hit on the right combination. The key seems to be to initialize GC first, then ACE. But to achieve that, you have to prevent ACE from redefining "main" to do their startup. Here's my successful test program:
#define ACE_AS_STATIC_LIBS 1
#include "ace/Thread_Manager.h"

static ACE_THR_FUNC_RETURN thread_func(void* arg)
	{
	for (int i = 0; i < 1000; ++i) // the loop bound was garbled in the original; 1000 is a guess
		operator new(10000);
	return 0;
	}

extern "C" { void GC_init(); }

#undef main
int main(int argc, char** argv)
	{
	GC_init();
	ACE::init();
	ACE_Thread_Manager::instance()->spawn_n(2, thread_func);
	ACE_Thread_Manager::instance()->wait();
	}
At this point I'm quitting for the day. It should be easy to incorporate what I've learned into Suneido, but then I'll just run into the next problem. I'd rather end the day on a positive note!
Friday, November 30, 2007
Two Steps Forward, One Step Back
Or should that be One Step Forward, Two Steps back?
Yesterday I was back to working on the multi-threaded ACE Suneido server. It started off really well. I got the server actually running, could connect and disconnect, and make simple requests and get responses. A very good milestone.
The obvious next step is to try to start up a Suneido client IDE using the new server. This is a big jump because even to start up requires a large number of requests of different kinds. I didn't really expect it to work, but programmers are eternal optimists :-)
I got a variety of weird errors and memory faults - not surprising. But amongst these errors was one that said something like "GC collecting from unknown thread". Oh yeah, I knew this was going to be an issue but I'd pushed it to the back of my mind. The garbage collector needs to know about all the threads in order to take their stacks and registers etc. into account. The way the Boehm GC normally does this is by requiring you to call their create thread function which is a wrapper around the OS one.
The problem is, ACE is creating the threads. I found where ACE calls create thread and thankfully there was a single point I could modify to call the GC version. But, I was using ACE via a DLL which means it can't call functions in the main program (where the GC one is).
The obvious solution is to not use a DLL, but to statically link in the ACE code. Sounds easy. I even found the option "static_libs=1" that would build static libraries. But it doesn't work. It builds a static library alright, but when I try to link it into Suneido I get a bunch of "multiple definitions" and "undefined references". Suspiciously, many of the names were "_imp_..." which seems a lot like the way DLLs work. My guess would be that "static_libs=1" isn't working fully, which isn't too surprising given that the "normal" usage is with shared libraries (DLLs). In software, "taking the path less traveled" is often a recipe for frustration.
I started digging into the maze of headers, configs, makefiles, and ifdefs but I ran out of time. Presumably it's solvable. You can see why people like to use things like .Net or Java where at least some of these low level issues are handled for you.
At the same time as I was working on this, I downloaded Ubuntu 7.10 and created a new Parallels VM (on my MacBook while I was working on my main machine). I used the alternate cd with the text based installer as recommended by other people. It went quite smoothly (except for crashing OS X the first time I started the install), and no display problems. But when I tried to install the Parallels Tools, the disk image appeared "corrupted" - only a single file and its name was random garbage characters. I tried rebooting the VM, restarting Parallels, and rebooting the MacBook but it didn't help. I searched on the web but didn't find any references to this problem. I have upgraded my MacBook to Leopard (the new version of OS X) so the problem may be related to that. When I get time I'll try running this VM on my Mac Mini which hasn't been upgraded to Leopard yet.
Yesterday I was back to working on the multi-threaded ACE Suneido server. It started off really well. I got the server actually running, could connect and disconnect, and make simple requests and get responses. A very good milestone.
The obvious next step is to try to start up a Suneido client IDE using the new server. This is a big jump because even to start up requires a large number of requests of different kinds. I didn't really expect it to work, but programmers are eternal optimists :-)
I got a variety of weird errors and memory faults - not surprising. But amongst these errors was one that said something like "GC collecting from unknown thread". Oh yeah, I knew this was going to be an issue but I'd pushed it to the back of my mind. The garbage collector needs to know about all the threads in order to take their stacks and registers etc. into account. The way the Boehm GC normally does this is by requiring you to call their create thread function which is a wrapper around the OS one.
The problem is, ACE is creating the threads. I found where ACE calls create thread and thankfully there was a single point I could modify to call the GC version. But, I was using ACE via a DLL which means it can't call functions in the main program (where the GC one is).
The obvious solution is to not use a DLL, but to statically link in the ACE code. Sounds easy. I even found the option "static_libs=1" that would build static libraries. But it doesn't work. It builds a static library all right, but when I try to link it into Suneido I get a bunch of "multiple definitions" and "undefined references". Suspiciously, many of the names were "_imp_..." which looks a lot like the way DLLs work. My guess would be that "static_libs=1" isn't working fully, which isn't too surprising given that the "normal" usage is with shared libraries (DLLs). In software, "taking the path less traveled" is often a recipe for frustration.
I started digging into the maze of headers, configs, makefiles, and ifdefs but I ran out of time. Presumably it's solvable. You can see why people like to use things like .Net or Java where at least some of these low level issues are handled for you.
At the same time as I was working on this, I downloaded Ubuntu 7.10 and created a new Parallels VM (on my MacBook while I was working on my main machine). I used the alternate CD with the text-based installer, as recommended by other people. It went quite smoothly (except for crashing OS X the first time I started the install), and no display problems. But when I tried to install the Parallels Tools, the disk image appeared "corrupted" - only a single file, and its name was random garbage characters. I tried rebooting the VM, restarting Parallels, and rebooting the MacBook, but it didn't help. I searched on the web but didn't find any references to this problem. I have upgraded my MacBook to Leopard (the new version of OS X) so the problem may be related to that. When I get time I'll try running this VM on my Mac Mini, which hasn't been upgraded to Leopard yet.
Thursday, November 29, 2007
Still Problems with Ubuntu 7.10 on Parallels on Mac
Larry has been patiently trying to help me with the network problem on my Ubuntu virtual machine (see the comments on Ubuntu on Parallels on Mac). He may even have found the problem.
But when I applied the fix and rebooted I was back to the problems from More Fun With Ubuntu on Parallels - unable to boot because the X display won't start. I did some more thrashing around and some more web searching. Lots of people seem to have this problem and there are various proposed solutions. I wonder if these "solutions" are really solutions at all - maybe things just happened to start working, as they occasionally do for me. Frustratingly, lots of people also appear to not have this problem - it works fine for them.
This is on a Mac (mini) so you can't blame non-standard hardware. Ubuntu is one of the most common (if not the most common) Linux distributions. I haven't loaded up OS X with a lot of junk software. So why am I still faced with these frustrating issues? I don't want to have to be an X Windows expert and mess around with xorg.conf. Running in a virtual machine adds some complexity, but it also reduces complexity, since the virtual machine is more "standard" than real machines.
I don't have a lot installed in my Ubuntu so I am going to try a fresh install. At least with VMs this doesn't mean I have to wipe out my previous one. It would be nice if Parallels had a prebuilt Ubuntu 7.10 VM but they only have 7.04. I could go back to 7.04 but sooner or later I'll want to update.
Some of my recurring frustrations are, no doubt, due to my staying too close to the "bleeding edge". But, (a) I want to keep up with the latest - that's part of my business, and (b) not doing updates has its own problems with incompatibility, security, etc. and leaves you facing even scarier "big" updates, albeit not so often.
Tuesday, November 27, 2007
Slow Progress on Multi-Threading Suneido
Last time I worked on this I had just reached the point where I could successfully compile and link. I resisted the temptation to try running it because I figured there were bound to be problems.
Sure enough, I run it ... and it crashes with a memory fault. No big surprise. I put in a few prints to see where it's getting to. Hmmm ... it's not even getting to my main!?
I download and install MinGW GDB and use it to find that it's crashing in the garbage collector code. I am redefining operator new to use the garbage collector, so presumably something is calling operator new during static initialization. I comment out my redefinition and it works. I make a tiny test program:
#include "include/gc.h"
// allocates via the GC during static initialization, i.e. before main()
void* stat = GC_malloc(10);
int main(int argc, char** argv)
{
return 0;
}
It crashes. But this works in the current version of Suneido. It must be thread related. Yup, my test program runs fine without threads. Now what?
I search the web to see if this is a known problem but I don't find anything.
I'm still using gc-6.5 and the notes for the latest gc-7.0 mention improvements to win32 threads.
So I download gc-7.0. The readme says MinGW is not well tested/supported - ugh. For gc-6.5 I had ended up writing my own makefile, but I'd prefer to avoid that. The recommended approach seems to be the standard configure & make, so I try this with MinGW MSYS.
configure seems to succeed, at least with no obvious errors, but make fails with a bunch of "undefined references". It appears to be trying to make a DLL, which I don't want - I want a static library. Eventually I hit on configure --enable-shared=0, which avoids the DLL stuff but still gives a bunch of "undefined references". This time they all appear to be from win32_threads.c. For some reason this isn't getting included in the build. I uncomment "am__objects_3 = win32_threads.lo" in the generated Makefile to fix this. That's probably not the correct solution, but it does the trick and I finally get the build to succeed. gctest runs successfully, although it seems slower, and in the output the heap is twice as big as with gc-6.5 - not good, but I'll worry about that later!
Thankfully this effort wasn't wasted and my test program runs successfully. And Suneido now manages to get to main! But then it fails with ACE errors saying WSAStartup isn't initialized. This is easily fixed by adding a call to ACE::init, but it's strange because I didn't need it in my previous ACE test program.
After most of a day's work I'm finally back to where I can start debugging my own code! It's great to be able to leverage other people's work (like the Boehm GC and ACE) but it can be extremely frustrating when they don't work and you don't have a clue what's going on. Even the standard configure & make has the same problem: if it works, it's great, but if it doesn't, you're faced with incomprehensible makefiles.
Sunday, November 25, 2007
Positive Feedback for a Change!
It seems like all I do is complain about my frustrations with computers so I thought I should post a positive comment for a change.
I went to check what version of g++ I had on my Ubuntu on Parallels. Here's what I got:
andrew@MacMini-Ubuntu:~$ g++ --version
The program 'g++' can be found in the following packages:
* g++
* pentium-builder
Try: sudo apt-get install <selected>
bash: g++: command not found
This is a vast improvement over just "command not found". Thumbs up to Ubuntu (or Linux or wherever this originated).
Another Mac Printer Annoyance
Just when I thought I had my printer problems figured out, I have a new one.
If I boot up with the printer turned off, i.e. turn it on after OS X has booted, then it won't work. Not only does it not work, it hangs OS X for several minutes. The first few times I thought OS X had crashed, but if I'm patient enough it comes back. It's especially frustrating when I inadvertently leave Lightroom in the Print module, because then it hangs when I run it. Or if I forget and switch to the Print module.
It might not be so bad if I could put the printer on the power bar with the Mac so it would get turned on at the same time. But you're not supposed to shut off the power to the printer without turning it off first, and I'm sure I'd forget.
It's strange (as usual) because the printer is connected to the Mac with USB - which should (and did before) handle turning on later. Maybe it's related to the workaround I had to do to access the printer from Parallels, since that is network related. (I'm thinking network because that's the only thing I can think of that would hang for several minutes.) I guess I could try disabling or removing the extra printer I have set up for that, but I'm not sure I'm in the mood for it. And I'd probably just break something else.
It seems like a lot of stuff these days assumes you're going to leave it turned on all the time. But that's not great for energy efficiency.
Saturday, November 24, 2007
More on Multi-Threading Suneido
I am continuing to intermittently work on making a multi-threaded Suneido server (that will run on Linux as well as Windows).
So far, I have stripped out all the Windows dependencies and just about have the socket stuff converted to ACE. It should not take too much more to get this working - but only single threaded.
In a sense, this is the easy part. The hard part is to add the required locking etc. to allow it to run safely without the multiple threads interfering with each other.
There are two sides to this - the database and the language run time (virtual machine).
On the database side, the data itself should not be a problem because records are immutable - they are never updated in place. When a record is updated, a new version is appended to the end of the database file. (This is the main reason the database needs to be "compacted" periodically.) The primary reason for this design is to support the multi-version optimistic database transactions, but it also ends up being nice for multi-threading.
The indexes are the main issue in the database. The easiest solution is probably locking at a fairly coarse level - entire indexes. There are schemes for locking individual index nodes, but that is tricky. It should be easy to use multiple-readers OR single-writer locking. The downside is that if there is a lot of updating, it will end up effectively single-threaded again.
Ideally, I would like to support multiple readers concurrently with updating (but still only one writer at a time) similar to how multi-version optimistic transactions allow multiple readers to operate independently of updates. But I have not figured out any "easy" way to make the indexes similarly multi-version so readers are not blocked by updates.
This may not be critical because I am pretty sure reads are far more common than updates. I should really measure this instead of guessing!
The other side is the language virtual machine. This should not be too bad because there is not much shared mutable (updatable) data. The main shared data structure is the table of global definitions loaded from libraries. The only time this is modified is when a new definition needs to be loaded from a library. At one time I thought I could use the double checked locking pattern (DCLP) to avoid synchronizing every access, but DCLP has been found to be fatally flawed. In theory, with 32 bit values and an idempotent function it is still workable, but given the history it seems risky. Another way to avoid the synchronization overhead would be for each thread to have its own globals table "cache" and to load this from a shared synchronized table.
I am sure there are many other fun issues lurking in this project. I am still very paranoid about synchronization issues that do not show up until after deploying it to hundreds of customers. My first line of defense will be to try to keep the locking simple so I can have a fair chance of convincing myself logically that it is "correct". (Although judging by DCLP, even very smart people can fail to catch concurrency flaws in even simple code.) The next line of defense will be some serious stress testing, probably on something like a quad core machine to increase the chances of conflicts.
Thursday, November 22, 2007
The Chumby Has Landed
My Chumby finally arrived. I was happy to see natural cloth packaging instead of yet another frustrating bubble pack. Of course, the first thing it wanted to do after I set it up was to download a bunch of updates. [aside: Automatic updates seem like a great idea, and they would be if they were unobtrusive. But every time I try to do anything on my computers something wants to do an update and disrupt me while it does it. It's especially bad on machines that I don't leave turned on and don't necessarily use every day.]
Here's what I have currently playing on my Chumby:
I'd like to make my own widgets - for example, the Chumby would make a great (although expensive) status monitor for our automated tests - but it looks like that requires using Flash, which I haven't done before.
Monday, November 19, 2007
Amazon's Kindle Released
Amazon has released its Kindle electronic book reader.
And once more, it's US only :-( That may be partly because it uses EV-DO cell phone technology to connect, although you'd think it would make sense to support other methods like regular WiFi. It does have a USB connection but I'm not sure if that means you can load books from your computer.
They have monthly fees for some of the services and that's a little scary. But they don't charge you (directly) for the EV-DO so that's nice.
We'll have to wait and see how much Amazon locks the device to their services and how much they open it up. If they were planning to make their money off the service then you'd expect a lower price (like cell phones). If they expect to sell it for full price AND lock you into their service that won't be too attractive.
Sunday, November 18, 2007
Web Services
A few years ago when "web services" were starting to get talked about I read Web Services Essentials, which covers XML-RPC, SOAP, UDDI, and WSDL. I ended up writing a simple XmlRpc implementation for Suneido, and we use it to distribute a particular service for our vertical application.
More recently I just finished reading RESTful Web Services (recommended). I realize now that for our application there was no reason to use XmlRpc - a simple RESTful web service (i.e. just using GET and POST) would have worked just as well and been much simpler. On the positive side, at least I didn't try to implement SOAP, UDDI, and WSDL!
REST stands for Representational State Transfer, which doesn't tell you a whole lot. Basically it is a resource-oriented style using basic HTTP GET, POST, PUT, and DELETE.
It seems strange that Web Services Essentials didn't even mention the option of a REST style web service. It's only recently that REST has gained popularity, but you'd think they'd at least mention that you can just use GET and POST. Maybe they assumed that you already knew about that option, but it was all new to me at the time so I assumed the book covered the options.
Of course, if you want to communicate with an existing service you have to use whatever they supply (e.g. SOAP). But for our application we controlled both the server and the client, so we were free to use whatever we wanted.
David Heinemeier Hansson, the originator of the Rails framework for Ruby, is a fan of REST and the latest version of Rails has support for this style.
After reading RESTful Web Services I've been working on improving Suneido's HttpServer to make it easier to implement REST services. We need a new service for our application and it seems like a good opportunity to try a different style.
Bug Labs
I recently listened to a podcast with one of the people from Bug Labs. I like gadgets and this looks like a pretty nifty one.
When I was a kid I did a lot of hardware hacking, building computers and other gadgets. But I haven't done any hardware for a long time. This looks like something where you could build a "gadget" without actually getting your hands dirty.
I have an idea for a hand held gadget I'd like to make with a GPS. Most hand held GPS units aren't programmable so they won't work. Another option would be to use a cellphone with a GPS but from what I've heard software development isn't easy. And I'm not sure I want to be tied to a particular cellphone, especially if I wanted to resell these gadgets.
A "Bug" seems like a great way to develop a prototype. All I'd need for the gadget I'm thinking of would be the base unit plus the GPS module. They even plan to offer a service to convert your Bug "prototype" to a more packaged gadget that they will manufacture.
Unfortunately, it's not available yet, but the web site says 4th quarter of this year so it shouldn't be too long. And they haven't published any prices yet either.
Saturday, November 17, 2007
Chumby's Coming
I found someone in the US to order my Chumby for me and it is in the mail on its way to me (hopefully the post office hasn't lost it - it's taking a while).
Dave Winer got his and is pretty positive.
Thursday, November 15, 2007
More Fun With Ubuntu on Parallels
I started my Ubuntu virtual machine to continue looking into the network not starting automatically.
A notice came up about upgrading from 7.04 to 7.10. Without really thinking about it (stupid) I went ahead and ran the upgrade. It worked fine until it finished, but on restart the X display couldn't start. So I did what I should have done before I started the upgrade and googled for problems with 7.10 on Parallels. Sure enough, lots of other people were having the same problem. There were various suggestions for how to work around it, but there didn't seem to be a consensus. Parallels themselves seem to be avoiding the issue. I followed one of the suggestions and booted in recovery mode, which took me to a terminal window, but I wasn't sure where to go from there. Then I tried booting the older kernel, which seemed to start ok. Then I tried re-installing the Parallels Tools (another suggestion). It ended up with the same problem of not being able to start the display. I stopped and restarted the machine, let it boot normally, and it worked!? I have no idea what that means. Is it fixed? Which part of my thrashing around was helpful? Or is the problem intermittent and I just happened to get lucky?
I'd had enough for one day so I just suspended the machine and left it.
By the way, it still has the same problem of not starting the networking automatically. I guess it was too much to hope that problem would go away by itself.
Wednesday, November 14, 2007
Groovy Style "builders" in Suneido
One of the neat features in Groovy is its "builders". For example, using an XML markup builder:
builder.invoices
{
    invoice(number: 1234)
    {
        item(type: 'part')
        { product(name: 'widget', cost: 100) }
    }
}

which produces:

<invoices>
<invoice number="1234">
<item type="part">
<product name="widget" cost="100" />
</item>
</invoice>
</invoices>

The builder doesn't actually have methods for "invoice", "part", etc. Instead, dynamic language "tricks" are used to catch "unknown" method calls.
It made me wonder how close I could come to this with Suneido. Here's what I came up with:

builder.invoices()
{
    .invoice(number: 1234)
    {
        .item(type: 'part')
        { .product(name: 'widget', cost: 100) }
    }
}

The main difference is that Suneido requires '.' on method calls. Otherwise it's pretty much identical. One thing the Groovy XML builder doesn't handle is tag contents containing a mixture of text and tags. I handled this in Suneido with a special '_' method. For example:

builder.p()
{
    ._('start ')
    .b() { 'middle' }
    ._(' end')
}

which produces:

<p>start <b>middle</b> end</p>

Here is the entire implementation of a Suneido XmlBuilder:

class
{
New()
    { .s = '' }
Default(@args)
    {
    .s $= '<' $ args[0]
    for m in args.Members(named:)
        if m isnt #block
            .s $= ' ' $ m $ '="' $ args[m] $ '"'
    if args.Member?(#block)
        {
        .s $= '>'
        result = .Eval2(args.block)
        if result.Size() is 1
            .s $= result[0]
        .s $= '</' $ args[0] $ '>'
        }
    else
        .s $= ' />'
    return
    }
_(s)
    { .s $= s; return }
ToString()
    { return .s }
}

One minor problem with the Suneido version is that certain methods are "built in" (e.g. Size) and therefore can't be used in the builder. [This is a result of trying to make class instances behave the same as generic containers. I'm starting to think this was a mistake, but I'm not sure how to go about changing something so fundamental.]
I had to make a slight fix to Suneido to make this work. The approach I used was to use instance.Eval(function) to evaluate the blocks in the "context" of the builder. But I found that Eval didn't work with blocks (only functions). Luckily it was easy to fix. (Actually, I'm using Eval2, which returns the result inside an object so you can determine if there was a return value or not.)
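The same dynamic-dispatch trick translates to other languages too. Here's a rough Python sketch (hypothetical code, not part of Suneido or Groovy): __getattr__ plays the role of Suneido's Default method, catching "unknown" method calls, and lambdas stand in for blocks.

```python
class XmlBuilder:
    # __getattr__ is only called for missing attributes, so it catches
    # "unknown" method names like invoices, invoice, item, product
    def __init__(self):
        self.s = ''

    def __getattr__(self, tag):
        def method(block=None, **attrs):
            self.s += '<' + tag
            for name, value in attrs.items():
                self.s += f' {name}="{value}"'
            if block is not None:
                self.s += '>'
                result = block()        # evaluate the nested "block"
                if isinstance(result, str):
                    self.s += result    # block returned plain text
                self.s += '</' + tag + '>'
            else:
                self.s += ' />'
        return method

    def _(self, text):                  # mixed text-and-tags content
        self.s += text

    def __str__(self):
        return self.s

b = XmlBuilder()
b.invoices(lambda:
    b.invoice(lambda:
        b.item(lambda:
            b.product(name='widget', cost=100),
            type='part'),
        number=1234))
print(b)
```

The nesting is clumsier than Groovy's or Suneido's block syntax, since Python lambdas are more limited, but the underlying mechanism - intercepting unknown method calls - is the same.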
Tuesday, November 13, 2007
PyPy, LLVM, and Parrot
I recently came across some references to PyPy - a Python version/compiler (and more) implemented in Python. Interesting stuff, but a little hard to follow.
A reference from there led me to LLVM - Low Level Virtual Machine, which is actually a compiler (including JIT) as well as a virtual machine. Check out the tutorial on implementing a language with LLVM - very slick. They discuss garbage collection (including using the Boehm collector that Suneido uses) but this area appears to still be a work in progress.
Another project along these lines is Parrot - the new Perl virtual machine.
Monday, October 29, 2007
Groovy First Impressions
I recently picked up a copy of Groovy in Action. Groovy is a dynamic language based on Java infrastructure - it compiles to Java byte codes, runs on Java virtual machines, and has full access to Java libraries. You can call Java from Groovy and vice versa.
I just started reading the book, but the Groovy language looks interesting. There are some close similarities with Suneido, and of course differences. For example, this could be either Suneido or Groovy:
list = [1, 2, 3]
map = [name: "Fred", age: 24]
although Suneido, unlike Groovy, would also allow this:
listmap = [1, 2, 3, name: "Fred", age: 24]
This is because Suneido uses a combined list/map data type whereas Groovy has separate list and map types.
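To make the difference concrete, here's a hypothetical Python sketch of a combined list/map container along those lines (Python, like Groovy, normally keeps lists and maps separate):

```python
class ListMap:
    # Hypothetical sketch of a combined list/map container, roughly like
    # Suneido's single container type; not real Suneido code.
    def __init__(self, *items, **named):
        self.items = list(items)    # the "list" part
        self.named = dict(named)    # the "map" part

    def __getitem__(self, key):
        # integer keys index the list part; anything else is a map lookup
        if isinstance(key, int):
            return self.items[key]
        return self.named[key]

listmap = ListMap(1, 2, 3, name="Fred", age=24)
print(listmap[0], listmap["name"])  # 1 Fred
```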
BTW I like this map notation better than Ruby's, which uses a lot more punctuation:
map = { :name => "fred", :age => 24 }
Groovy's closures are also quite similar to Suneido's blocks:
Groovy: { arg -> ... }
Suneido: { |arg| ... }
I borrowed Suneido's block syntax from Smalltalk. The extra '|' before the parameters makes it easier to parse.
One feature I like is that you can write:
list.each { key, value -> ... }
whereas in Suneido you'd need parentheses after the "each" function call:
list.each() { |key, value| ... }
This might not be too hard to add to Suneido since it's currently a syntax error (not ambiguous). In both Groovy and Suneido this is equivalent to list.each({...})
In addition to =~ for regular expression matching (the same as Suneido), Groovy also has ==~, which must match the entire string, i.e. the same as "^(...)$". I'm tempted to add this to Suneido because it's a common mistake to omit the '^' and '$' and/or the parentheses.
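Python's re module makes the same distinction, for comparison: re.search matches anywhere in the string (like =~), while re.fullmatch must match the whole string (like ==~):

```python
import re

s = "order 1234 shipped"
# re.search matches anywhere in the string, like =~
print(bool(re.search(r"\d+", s)))           # True
# re.fullmatch must match the entire string, like ==~
print(bool(re.fullmatch(r"\d+", s)))        # False
print(bool(re.fullmatch(r"\d+", "1234")))   # True
# fullmatch is equivalent to wrapping the pattern in ^(...)$
print(bool(re.search(r"^(\d+)$", "1234")))  # True
```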
That's about as far as I've gotten. I'm not sure if I'll ever use Groovy but it's always interesting to look at other languages. And since the Java platform is quite ubiquitous, that makes Groovy more widely applicable. Groovy also has a web framework called Grails that is similar to Ruby on Rails.
Friday, October 26, 2007
Freebase and Cinespin
I recently listened to a podcast with one of the developers of Freebase. I've been meaning to have a look at Freebase for a while. One of the people behind it is Danny Hillis of Thinking Machines.
For an interesting application based on Freebase data, have a look at Cinespin.
Thursday, October 25, 2007
Amazing Open Source
The variety and quality of open source software even in specialized areas is pretty amazing.
Art Gallery - Art of Illusion
Monday, October 22, 2007
Use at Least Two Compilers
I recently made some minor changes to Suneido. I compiled with MinGW and it worked fine.
A bit later I compiled with Visual C++ 7 (2003) since that produces the smallest, fastest code at the moment. It wouldn't run at all - crashed immediately on start up!?
I recompiled the MinGW version - it still worked fine.
I reverted my changes and the VC7 version worked again so it definitely appeared to be my changes.
I checked my changes several times but they looked fine.
Finally I found the problem. While making my changes I had done some minor refactoring. (I know, you shouldn't mix the two, but it seemed minor.) I had found something like:
int fn1()
{
    static const list = symnum("list");
    ...
}
int fn2()
{
    static const list = symnum("list");
    ...
}

So I eliminated the duplication by moving the constant outside the functions:

static const list = symnum("list");
int fn1()
{
    ...
}
int fn2()
{
    ...
}

The problem is that this changes when symnum is called. The old way it wasn't called until the functions were used the first time. The new way it is called during startup, probably before something else that it needs is initialized. The order of initialization is undefined and obviously varies between compilers. Sure enough, putting this back fixed the problem.
Lessons:
- build with multiple compilers to catch this kind of dependence on undefined behavior
- don't mix changes and refactoring, do them one at a time, test in between
Saturday, October 20, 2007
From Ruby on Rails to PHP
7 reasons I switched back to PHP after 2 years on Rails - O'Reilly Ruby
A lot of the points he makes I find familiar from Suneido. Most of the time you can reproduce the needed parts of some other whiz-bang system in less time than it takes to learn the other system. It won't do everything that other system does, but if it does what you need, who cares?
We've run into some of the same frustrations he mentions with our Rails project (eTrux). We didn't have the problem of trying to deal with an existing database (we were starting from scratch), but it always seems like there's something you want to do that isn't on the Rails easy path. But despite the frustrations, I think Rails was a good choice for this project.
My Chumby! Or Maybe Not
I finally received my invitation to get one of the "insider" early release Chumbys. I go to order, pick the color, get to the checkout, and find that it's United States only. Argh!
I realize Free Trade doesn't really mean free trade, but I find the US export regulations pretty frustrating and ridiculous. This thing is all open source software and I can't imagine the hardware is anything special - what are they protecting? I probably shouldn't put all the blame on the US either. Canada has its own safety code and approval process that devices have to pass. To an ignorant consumer it seems like the US and Canada are similar enough that they could agree on a standard approval process, but I'm sure that's unlikely.
I'm not blaming the companies, I'm sure it's a hassle for them too. And US companies probably (rightly) see Canada as a minor market that's not worth much effort, at least initially.
Sigh. Maybe I can find someone in the US to order my Chumby for me.
(Don't get the wrong idea, it's not like I'm "desperate" for a Chumby. I'm sure I could live without it :-) But us geeks like our toys, especially if it's a new toy that not everyone has!)
[I see from my earlier post that I already knew it was US only. In the excitement of my invitation I obviously forgot that. Either that, or I'm just getting old and forgetful!]
Friday, October 19, 2007
Using S3 for Customer Backups
For some time we've been using S3 to back up our customers systems.
Originally we set them up to FTP the data back to our office. This was handy for us because if we wanted to look at their data we had it in-house. But as the number of clients and the size of their data grew we ran out of bandwidth. (Thankfully we did these transfers at night so it didn't slow down our daytime internet access.)
The other problem was the sheer size of the data and the issues of how to guard against disk failure. Although we don't promote this to our clients as a "backup" service many of them still end up falling back on us when they discover their own backups weren't done or were no good. And people often don't discover problems till days or weeks later so they need older copies, not just the most recent.
We decided to give S3 a try and so far it has worked out well. We're up to about 8 GB of data transfer per night and we currently have about 240 GB of storage. This is costing us about $50 per month - a bargain as far as I'm concerned.
We are currently keeping the last 8 days, last 5 weeks, last 13 months, and every year, or something like 25 - 30 copies per customer. On top of this redundancy, Amazon says they store multiple copies of each file.
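The schedule works out roughly like this (a hypothetical Python sketch of the rotation described above, not our actual backup code):

```python
from datetime import date, timedelta

def copies_to_keep(today, first_year=2000):
    """Which backup dates survive: last 8 daily, last 5 weekly,
    last 13 monthly, plus one per year. (Hypothetical sketch;
    the start year is an assumption.)"""
    keep = set()
    for d in range(8):                        # last 8 days
        keep.add(today - timedelta(days=d))
    for w in range(5):                        # last 5 weeks (Mondays)
        keep.add(today - timedelta(days=today.weekday(), weeks=w))
    for m in range(13):                       # last 13 months (1st of month)
        year, month0 = divmod(today.year * 12 + today.month - 1 - m, 12)
        keep.add(date(year, month0 + 1, 1))
    for year in range(first_year, today.year + 1):  # one per year (Jan 1)
        keep.add(date(year, 1, 1))
    return keep

print(len(copies_to_keep(date(2007, 10, 19))))
```

The overlaps (a daily that's also a weekly, a monthly that's also a yearly) are why the count comes out a bit under the raw total of the four tiers.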
A minor downside is that when we want to look at someone's data we have to download it, but that's not a big deal. And it's still a lot easier to download from S3 than to download directly from the customer.
There is a potential concern with storing data with a third party, but we encrypt the files and Amazon has decent security on top of that, so it seems ok. It doesn't seem any worse than other hosting situations.
Overall, we're pretty happy with this setup.
G.ho.st - Global Hosted Operating System
G.ho.st – Home Page
Apparently this is implemented using Amazon S3 and EC2. Interesting idea, but pretty annoying how they maximize my browser window and hide the address and tool bars.
Wednesday, October 17, 2007
Thursday, October 11, 2007
Telnet now optional in Vista
Today I went to use telnet to test some server code I was working on. Except there's no telnet!?
A quick web search tells me it's not "turned on" by default, you have to go to:
Control Panel > Programs and Features > Turn Windows features on or off
Microsoft's justification for this is to decrease the "footprint" of Windows and to increase security. Neither really makes much sense. The telnet client is a tiny program and having it in a directory on the path versus some other "turned off" directory doesn't seem to change the "footprint" much. The code to turn it on and off is probably bigger than telnet itself. And I'm not sure how a telnet client is much of a security risk. Any attack that gets far enough into your system to use your telnet client is probably not going to be stopped by it being "turned off", especially since it could easily "turn it on" from the command line. (I can see leaving the telnet service turned off, but that's not what I'm talking about.)
I'm really tempted to do some Microsoft bashing at this point, but I'll restrain myself.
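In the meantime, any scripting language can stand in for a quick poke at a server. Here's a hypothetical Python substitute for what I use telnet for - connect, optionally send some bytes, and see what comes back:

```python
import socket

def poke(host, port, data=b"", timeout=5):
    """Rough stand-in for "telnet host port": connect to a server,
    optionally send some bytes, and return the first reply.
    (Hypothetical example, not a full telnet client.)"""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        if data:
            conn.sendall(data)
        return conn.recv(4096)
```

For example, poke("localhost", 80, b"GET / HTTP/1.0\r\n\r\n") would show a local web server's response.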
Wednesday, October 10, 2007
Ask 37signals: Is it really the number of features that matter? - (37signals)
Ask 37signals: Is it really the number of features that matter?
The idea of "editing" is interesting. It's a role I play with our application software, both in deciding which features to add, and in reviewing what we've done to try to ensure it's usable.
Of course, I still struggle with trying to pursue simplicity. Sales people, customers, customer support - they all think more features is the answer.
The idea of "editing" is interesting. It's a role I play with our application software, both in deciding which features to add, and in reviewing what we've done to try to ensure it's usable.
Of course, I still struggle with trying to pursue simplicity. Sales people, customers, customer support - they all think more features is the answer.
Monday, October 08, 2007
Chumby Coming Soon, But Not Here
Soon you'll be able to buy a Chumby, but only in the United States, not in Canada :-(
I think one of the things that helped the internet spread so fast was that it wasn't restricted like this. Although when I went looking on the internet for a TV show that I missed, I found Fox limits access to shows to just the US. And Amazon's MP3 sales are US only. So the internet is not necessarily free of restrictions either.
Sunday, October 07, 2007
Ubuntu on Parallels on Mac
I just installed Ubuntu on my Mac under Parallels using these instructions:
How to install Ubuntu 7.04 in OS X using Parallels Desktop 3.0
It seemed to go smoothly. Of course, Ubuntu immediately wanted to download a pile of updates, but that worked ok.
The only problem is that you seem to have to manually turn on the networking within Ubuntu every time you boot it up. The instructions mentioned this, but I thought it was just during installation. There may be a fix, I haven't looked yet. I don't think it's a big deal because normally I just suspend the virtual machines rather than shutdown and restart them. Hmmm... that's assuming the network stays turned on after being suspended, I haven't actually tested that.
Parallels has a pre-installed Ubuntu virtual machine, but the downloads were very slow so I gave up. But I should be able to take the virtual machine I've now created and copy it from my Mac Mini to my Mac Book.
Aside from curiosity, one of my motivations for installing Ubuntu is to get back to working on porting Suneido to Linux. I should be able to do a lot of that natively under OS X but I assume I'll still want to work under Linux as well.
Friday, October 05, 2007
MinGW GCC Version 4
I see MinGW (Minimalist GNU for Windows) now has a preview of GCC 4. (It looks like it came out in August, but I didn't hear about it.) I've been waiting for this for a while.
I haven't looked at it yet, but I keep hoping that MinGW GCC will close the gap with commercial compilers so I can use it for Suneido.
Thursday, October 04, 2007
The Big Rewrite
ChadFowler.com The Big Rewrite
I've been involved in a number of "big rewrites" over the years. The Suneido project could be called a rewrite of a previous in-house project called C4, which in turn was a "rewrite" of a framework built on top of a personal information manager called Lucid. Our accounting applications are also the third rewrite. So it is possible to pull it off. And I would bet some of Chad's projects have succeeded in the end. But his points about the dangers and problems of big rewrites are definitely valid.
Joel Spolsky calls the big rewrite "the single worst strategic mistake that any software company can make"
In the end it comes down to Software is Hard
Tuesday, October 02, 2007
Digital Music Not Quite Here Again
When Apple announced DRM-free music on the iTunes store I thought I'd finally be able to buy music without the roundabout system of buying a physical cd and ripping it to get the digital version which is all I use these days. But unfortunately, their selection of DRM free music is small and doesn't seem to be growing very fast. It seems like every time I go to look for something it isn't available.
So when Amazon announced DRM-free MP3's I was hopeful but knew the real test would be selection. I checked for a few things and although not everything was available it seemed better than iTunes (and cheaper, at least for some things).
BUT when I actually went to buy something I found out it's United States only. Argh!
I found someone on the web saying they had managed to purchase from Canada by giving a fake address but that didn't work for me. Maybe because I was logged in and therefore it knew where I was.
Note: My reasons for wanting DRM-free music have nothing to do with piracy. I just don't want the headaches of DRM, especially when I use multiple computers and music players. It seems crazy how much resistance the music companies are putting up against DRM-free sales, when they've been selling DRM music (i.e. cd's) all along.
Patience, grasshopper.
Problems Embedding Google Maps in Blogger
I decided to add embedded Google Maps showing routes to some of my recent Sustainable Adventure posts but I had some problems.
First, it's quite hard to get the zoom/scale to come up properly on the embedded map. It would either be zoomed out too far or zoomed in too far. Presumably the zoom in the embedded map is related to the zoom when you ask for the link/embed code, but there doesn't seem to be a simple correlation. With trial and error, zooming in and out and resizing the window, I could usually get it more or less right.
In some cases it seemed to help to copy the link address, open a new tab/window and go to that address. I'm not sure why that would help. Maybe it was just coincidence that it happened to work after doing that.
One of my maps refused to show the route. The map itself would display (although missing one tile). Eventually, after playing with it and making a minor change to the route it started working. (I gave up on trying to get the zoom right on this one, I was happy enough to just get it to work.)
It's sad how much the success of an "expert user" comes down to trial and error and randomly poking things. The equivalent of kicking the machine. Isn't software supposed to be consistent and predictable? The problem is that if the software gets complex enough (as most software is these days) then it becomes impossible to control/know all the inputs and state, so it ends up seeming random, unpredictable, and inconsistent. Yuck!
I searched for other people having this problem but I didn't find much. Is it something I'm doing different? Issues related to Blogger? But lots of people use Blogger. Maybe I just didn't hit on the right search terms.
Thursday, September 27, 2007
More Mac & Parallels Printing Problems
This is a continuation of the saga I've posted about before.
Recap: I had moved my Epson R1800 printer to Firewire to free up the USB port. But then I found out Parallels doesn't virtualize Firewire.
So I moved the printer back to USB. Things seem ok. Some time later I go to print from Lightroom and I find that I'm back to the CUPS+Gutenprint printer driver which doesn't support all the printer features. I'm not sure how that happened since before the USB->Firewire switch it was using the Epson driver.
More time passes and I go to print from Windows under Parallels. The print job goes into the queue and gets stuck. I can't even delete the print job or the printer. I try disconnecting and reconnecting the USB port to the Parallels VM. I try rebooting Windows. I try rebooting the Mac. At some point during this thrashing my print job comes out and the printer deletes. I recreate the printer, thinking that it's now going to work, but my test gets stuck in the queue as before. I thrash some more, but can't seem to hit on whatever combination it was that released that first print job.
I search on the web and find various discussions of various problems more or less related to mine. One person says it takes 5 minutes for his print jobs to emerge. Maybe this was what happened with me - not anything I did while thrashing, just the amount of time I thrashed. It seems bizarre that it would take 5 minutes. What is it doing all that time? And why does it work after whatever it is doing?
I had gotten an error message about the Epson Status Monitor 3 (why 3?) and some of the discussions mentioned this. I tried disabling it and killing it etc. but it didn't seem to make much difference.
The strange thing is that I could swear I had the printer working from Windows via USB before I switched to Firewire. So why doesn't it work now? Changes to Parallels? Who knows.
Eventually I found "Finally got Epson Photo R800 printer working in XP VM" on the Parallels forums. (An R800 is the narrow-carriage version of my R1800.) It gave instructions on how to share the printer from OS X and then access it using Bonjour on Windows. I had run across references to this before, but they all talked about using a generic PostScript driver on Windows, which is not what I wanted. I needed to use the Epson driver to access the printer's features. But these instructions used the Epson driver.
The instructions were for the printer connected by USB but they mentioned they would likely work with Firewire. Since that's what I really wanted I thought I'd try it. Nope, couldn't get it to work. (no URI for Firewire device?) Back to USB. This time it worked.
The only place where I had problems with the instructions was with choosing the printer type. Bonjour only showed the generic PostScript option. I had to choose "Have Disk" and then find and select my Epson driver.
Several hours later, I think I have the correct, functioning printer setup in both OS X and Windows. And even better, I think I should now be able to use the same method to access the printer from other networked Windows machines.
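For anyone wanting to reproduce the OS X side of this from the command line, the CUPS tools can do roughly what the sharing checkboxes do. This is only a sketch under assumptions: the queue name `Epson_R1800` is hypothetical (yours will differ — `lpstat -p` lists the real names), and it obviously can't run without an actual printer set up:

```shell
# Enable printer sharing in CUPS (roughly what the Sharing preference pane toggles)
cupsctl --share-printers

# Mark the specific queue as shared so it gets advertised via Bonjour
# ("Epson_R1800" is a placeholder queue name)
lpadmin -p Epson_R1800 -o printer-is-shared=true

# Confirm the queue exists and check its status
lpstat -p Epson_R1800
```

On the Windows side you would still add the Bonjour-discovered printer and point it at the vendor driver via "Have Disk", as described above.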
A few days ago someone suggested buying a MacBook for someone else. I said it probably wasn't a good idea because they'd want to run Windows programs. They said "I thought you could do that now?". Yeah, well, you can, but ... it can get ugly.
Monday, September 10, 2007
School
Paul Graham's latest post, News from the Front, was irrationally comforting.
I've often wondered if I should have taken a high school biology teacher's advice and applied to somewhere like MIT. My life obviously would have been very different, but also, obviously, it wouldn't have made me any smarter. Heck, I didn't even finish university, and I don't think that has hurt me any either. (Aside from my father's disappointment, and even he came around.)
On the other hand, I might have met different people by being somewhere more "active" than the middle of nowhere in the Saskatchewan prairies.
I don't have any regrets, but it's interesting to wonder about.