Friday, June 08, 2007
Slow Code Again
Although I agree with Larry's view, at the same time I continue to be horrified by some of the code I come across. There's no question premature optimization is bad, but I have to think that code that blatantly disregards performance is just as bad.
For some time I have been meaning to look at the startup process for our application. There was no particular reason for this - no one had been complaining. But startup speed is one of the first things people encounter, and first impressions can be important. And we had had some complaints, and found some problems, with opening the help being slow, and some of that code is similar.
The first thing I looked at was database queries, since they often dominate speed issues. I turned on query tracing during startup and, to my horror, ended up with 1800 lines of query trace! At first I thought that was 1800 queries, but there are multiple lines per query, so it was actually only about 600 queries. It's a testament to the speed of computers and networks these days that this doesn't even result in a noticeable slowdown. Although I think we did have one customer complain that after restarting their server, when all the users tried to log back in at once, it was slow. I can understand why! Of course, we discounted it at the time, thinking that naturally it would be slow under those conditions.
I have various ideas for reducing these startup queries, but I was able to cut them in half within a few minutes. Most of these queries are reading the menu tree for the application. At the same time it is checking for any menu options that are "hidden" (e.g. not purchased by that customer). To do this, the code was calling a general purpose permissions function which did queries on the permission table. But in this case all we needed to know was whether an option was hidden, which depended only on some configuration data in the code. We had a more specific function that only checked the configuration so switching to call this (didn't even need to change the arguments!) instantly eliminated almost half (about 300) of the queries.
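The fix can be sketched roughly like this (a Python sketch with invented names - the real code is in Suneido, and the details here are hypothetical). The point is that the general-purpose check hits the database once per menu option, while the specific check only consults in-memory configuration:

```python
QUERY_COUNT = 0

def db_query(sql):
    """Stand-in for a real database query; just counts calls."""
    global QUERY_COUNT
    QUERY_COUNT += 1
    return []  # pretend the permission table hides nothing

# Configuration data that lives in the code, not the database.
CONFIG_HIDDEN = {"payroll"}

def hidden_via_permissions(option):
    # General-purpose check: one query per menu option.
    db_query("select * from permissions where option = " + option)
    return option in CONFIG_HIDDEN

def hidden_via_config(option):
    # Specific check: no query at all, same answer in this case.
    return option in CONFIG_HIDDEN

def build_menu(options, check):
    return [opt for opt in options if not check(opt)]

menu = ["invoicing", "payroll", "reports"]
build_menu(menu, hidden_via_permissions)
queries_before = QUERY_COUNT                  # one query per option
build_menu(menu, hidden_via_config)
queries_after = QUERY_COUNT - queries_before  # zero new queries
```

Since both checks give the same answer for this purpose, swapping the function eliminates the queries without changing behavior.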
I don't give this as an example of someone "screwing up". We all make mistakes or overlook things. The lesson that I take from it is that it's not enough to write some code and have it appear to work. You have to assume that there are always hidden problems and that you need to constantly be on the lookout for them. And constantly fighting entropy by fixing and refactoring.
See also my previous post on Slow Code
Bad Documentation
The easiest way to write software documentation is:
Date Field
Enter the date.
Rate Field
Enter the rate.
Needless to say, this is very obviously useless. And it's actually worse than useless, because any useful information is buried deeply in masses of this garbage. No one would actually write this kind of stuff, would they? Yes, they would and do. And in an amazing example of how we can justify anything to ourselves, writers of this kind of documentation actually appear to believe they have produced something useful. (Of course, if someone else had written it, that same person would have no problem recognizing it as useless.)
Of course, it's often thinly disguised. Here are some examples (from real documentation I was given yesterday):
Tab to the Fleet field. Choose the correct Fleet.
Tab to the Abbreviation field and enter the Abbreviation.
In the Type field, choose the Type this record applies to.
I'm sure you get the idea.
The problem is that writing useful documentation is orders of magnitude harder. It requires you to think. You need to decide who the documentation is for, what you hope to achieve with it, what things people need to be told, and, just as importantly, what things they do not need to be told.
I think another factor is that people hate to leave "gaps". Rather than just describing the few fields you actually have something worthwhile to say about, they feel they have to say something about every single field. And it's not just obsessive-compulsive people - almost everyone seems to feel uncomfortable about "leaving out" some fields.
Google Desktop versus Mac Spotlight versus Vista
One thing that frustrated me when I started using Google Desktop Search was that I would type my search, it would correctly identify and highlight the top result, but when I hit Enter it would open my browser and show me the search results there (instead of just running/opening the top result). Eventually I discovered you could change this in the Preferences "Launch Programs by Default" (instead of "Search by Default").
I guess I should mention that I use Desktop Search as much or more to launch programs as to find files. My Windows Start menu has so many programs on it that it is a hassle to use.
Now I have the same hassle on my Mac. Spotlight finds the right result, but when I hit Enter it brings up a search window instead of running the top result. Unfortunately, so far I have not found a setting to change this. It hasn't been too annoying yet because I don't have as much software installed on my Mac so I don't need to use it as much.
As much as I like to dislike Microsoft, they appear to have got this right in Vista. The new search box defaults to running the top result when you hit Enter.
I wonder how Beagle on Linux works in this respect?
PS. I normally have my Google Desktop set to show on my Windows task bar, but today it was missing. I thought it must have crashed or something so I rebooted. But still no search box. I checked whether it was still set to run on startup but that looked ok. When I tried running it manually from the Start menu it displayed in "pop up" mode, but came up so fast it must have already been running. When I checked my preferences I found it was set to not show up. I'm pretty sure I never changed this, so I'm not sure what happened. Maybe some automatic update turned this off?
Tuesday, June 05, 2007
Chapters web site annoyance
Lately I've been ordering from Chapters more often than Amazon because they seem to have more books in stock and deliver them more quickly. (This is in Canada, i.e. amazon.ca.)
But one part of ordering from Chapters really bugs me. Their web site does not understand that the billing address belongs to the credit card. I order books both for work and personally. The billing address for my work credit card is different from the billing address for my personal credit card. Amazon handles this automatically. But Chapters not only doesn't handle this automatically, it makes me type in the address every time I switch between business and personal. They store multiple shipping addresses and multiple credit cards and let me pick from them rather than type them in, but apparently they only store a single billing address.
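In data-model terms it's just a question of where the billing address lives. A rough sketch (this is my own modelling, not anything from either site's actual code):

```python
from dataclasses import dataclass, field

@dataclass
class Address:
    street: str
    city: str

@dataclass
class CreditCard:
    number: str
    billing_address: Address  # the billing address belongs to the card

@dataclass
class Account:
    shipping_addresses: list = field(default_factory=list)
    cards: list = field(default_factory=list)
    # note: no single account-level billing address

home = Address("12 Maple St", "Saskatoon")
office = Address("400 1st Ave", "Saskatoon")
acct = Account()
acct.cards.append(CreditCard("personal card", billing_address=home))
acct.cards.append(CreditCard("work card", billing_address=office))

def billing_address_for(card):
    # Switching cards automatically switches the billing address.
    return card.billing_address
```

With the address attached to the card, picking a different card picks the right billing address with no retyping.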
I guess I could set up a second account but I've got enough different accounts to keep track of!
It scares me to wonder how many similar annoyances our applications have that we're either unaware of or are ignoring.
Stevey's Blog Rants: The Next Big Language
An interesting (and cynically humorous) article about programming languages:
Stevey's Blog Rants: The Next Big Language
Sunday, June 03, 2007
Adobe Lightroom
I continue to be impressed with Adobe Lightroom. Like Tim Bray, although I prefer open source software, I can't help but like Lightroom.
I was also impressed by The Adobe Photoshop Lightroom Book by Martin Evening. Usually I'm not that impressed with books on how to use software. They're either reference manuals or beginner's guides. This book is neither. It gives helpful advice, covers both basics and more advanced features, and is full of examples and annotated screenshots. Lightroom was designed to be "simple to use". But it still has a ton of features, so "simple to use" means lots of non-obvious features - like things you can click on or drag. The book helped me discover these a lot faster than I would have on my own.
One of the reasons I like Lightroom is that there are lots of innovative user interface features. It's not "eye candy" - the UI is very subdued black and grey (after all, what's important is your pictures, not the software). Even if you're not interested in digital photography, I'd recommend spending some time with the trial version just to see how the UI works, although you may not discover all the features in a short test. The UI has already inspired some improvements in Suneido.
Thursday, May 31, 2007
Windows GetSaveFileName Frustration
GetSaveFileName is a Windows API call that displays a standard save file dialog. It has an option to specify a default file name extension that will be added if the user doesn't type one. (lpstrDefExt in the OPENFILENAME struct)
For some reason, this wasn't working. I wasted a bunch of time checking all my code and searching for known problems on the internet with no success.
Then I tried it again and suddenly it worked! But I hadn't changed anything!
Aha! I had changed something - I had typed a different file name. The problem is that if the name you type (with no extension) exists, then it doesn't add the extension. (e.g. I was typing "test" and I had an existing file called "test", with no extension) I guess it assumes that you are "picking" the existing file, even though I was typing it, not picking it from a list.
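As far as I can tell from this experiment, the rule amounts to something like the following - a Python sketch of my guess at the behavior, not anything from the Windows source:

```python
def effective_filename(typed, default_ext, existing_files):
    """My guess at how GetSaveFileName applies lpstrDefExt."""
    if "." in typed:
        return typed  # the user supplied their own extension
    if typed in existing_files:
        return typed  # matches an existing file: no extension added!
    return typed + "." + default_ext

# the gotcha: "test" existed as an extensionless file
effective_filename("test", "txt", {"test"})   # -> "test", not "test.txt"
effective_filename("test2", "txt", {"test"})  # -> "test2.txt"
```

The middle case is the surprise: the existing extensionless file suppresses the default extension even when you typed the name rather than picking it.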
MSDN contains a huge amount of documentation, but all it takes is one little missing detail and you're in trouble.
Monday, May 28, 2007
LINA portable applications
It's not released yet, but LINA looks like a really interesting project that enables you to run the same binaries on Linux, Windows, and Mac with native look and feel.
Sunday, May 13, 2007
Inside the Machine
I'm not much of a hardware person these days, but it was pretty interesting to read about the techniques used to push performance. Apart from bigger caches, I had wondered how modern CPUs used the hundreds of millions of transistors they now contain. (A Core 2 Duo has 291 million transistors, compared to the original Pentium's 3 million.)
Considering the subject, the book is well written and easy to read. If you're at all interested in this area, I'd recommend it.
Google Sketchup
Google Sketchup is a free 3D design tool (there's also a Pro version that costs money). I haven't got a particular use for it, but I thought I'd check it out. Here are the results of my first 10 minutes with it:
You can put your Sketchup models onto Google Earth.

Thursday, May 03, 2007
Threading Horrors
Just when I was starting to think that it might not be too bad to make Suneido multi-threaded...
Quick overview of how processes exit on Windows XP
I like this part:
Note that I just refer to it as the way processes exit on Windows XP rather than saying that it is how process exit is designed. As one of my colleagues put it, "Using the word design to describe this is like using the term swimming pool to refer to a puddle in your garden."
I'm not familiar enough with Linux to know how it compares, but even if it is much "cleaner" I suspect it still has its own "gotchas".
Fields, Controls, & Context
In Suneido, a field has an associated control. For example, if the field is "category" the control might be a pull down list of categories.
For the most part this works well. The problem is that in certain contexts you want a different control or to modify the behavior of the control slightly.
For example, when you are entering new categories it doesn't make any sense to have a control that only lets you choose from existing ones. Actually, you probably want the opposite - to validate that what you enter is not an existing category (i.e. a duplicate).
These are the two main contexts - creating and using.
Once you use a category, possibly as a foreign key, you don't normally want to delete it. But business changes and you may not want to use a category any longer, in which case you don't want it showing up in lists to choose from. We would usually do this by adding an "active/inactive" flag to the categories.
However, when you go to view or modify old data, you don't want it to become invalid because it uses a category that was later marked "inactive".
One solution is to record the date when the category became inactive and so it can be valid for data prior to that date, but invalid after that date. But this means either you make the user enter the date when it became inactive (extra work for the user) or use the current date when they mark it as invalid (but that may not be the right date). The other problem is that if the record where it's used either doesn't have a date, or has several dates, then it becomes hard to apply this. (We use this solution in certain cases where it "makes sense" to users e.g. a termination date on an employee.)
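The date-based rule can be sketched as follows (the names and data are invented; our actual rules live in Suneido code):

```python
from datetime import date

# category -> the date it became inactive (None means still active)
INACTIVE_AS_OF = {"freight": date(2007, 1, 1), "fuel": None}

def valid_for(category, record_date):
    """A category is valid for records dated before it became inactive."""
    if category not in INACTIVE_AS_OF:
        return False  # unknown category
    cutoff = INACTIVE_AS_OF[category]
    return cutoff is None or record_date < cutoff
```

Note the sketch assumes each record has exactly one date to compare against, which is precisely the assumption that breaks down when a record has no date, or several.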
The solution we normally use is to make the inactive categories valid in existing data, but only allow active categories on new data. If there is a list to choose from, it would only show the active categories, on the assumption that you're picking for new data.
This splits the "using" context into "using on existing data" and "using on new data".
But when we come to reports (or queries) there is another problem. If you are printing a report on old data, you want to be able to select currently inactive categories. But at the same time, if you pull down a list of categories on the report options you don't really want to see every category that has ever existed. Most of the time you're printing current data and you only want to choose from the active categories.
Our normal "compromise" on reports is that lists to choose from only show active categories, but the validation will allow you to type in an inactive category.
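Put together, the compromise means the pick list and the validation use different rules. Roughly (with invented names, as a sketch only):

```python
CATEGORIES = {"fuel": True, "freight": False}  # name -> active?

def choices():
    """Pull-down list: show active categories only."""
    return sorted(c for c, active in CATEGORIES.items() if active)

def valid_for_report(entry):
    """Report selections: accept any category that has ever existed."""
    return entry in CATEGORIES

def valid_for_new_data(entry):
    """New data: accept only active categories."""
    return CATEGORIES.get(entry, False)
```

The asymmetry is deliberate: the list optimizes for the common case (current data), while the looser validation keeps old data reachable.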
Another alternative might be to add an option (e.g. a checkbox) to the pull down lists that lets you show/hide inactive categories.
If you use an "inactive as of" date, you still have this problem. You can't use the transaction dates because you're printing a range of dates and different categories will be active for different transactions.
So we now have four contexts:
- creating - no list to choose from, duplicates are invalid
- using on "old" data - inactive values are valid for existing data
- using on "new" data - only active values are valid
- using in report selections - allow entering inactive values
Currently, Suneido associates a field with a control by the field name. We normally make the "default" control one that handles the "using on old/new data" context. For creating and reports we rename the field to get different controls.
A better approach might be to make the context more explicit. You could allow associating multiple controls with a field name, based on the context. Or the control could be aware of its context and adjust its behavior accordingly. (We partially do this to handle the "using on old data" versus "using on new data" contexts.)
I'm curious how other user interface systems handle these issues. It wouldn't be hard to get a copy of e.g. QuickBooks and see how they address these issues (assuming they do). It's not something that I have seen written up in any of the user interface guides that I've read.
Monday, April 30, 2007
Embedding Google My Maps
Here's a useful tool for embedding the maps you create with Google My Maps:
http://www.dr2ooo.com/tools/maps/
Here's an example:
http://sustainableadventure.blogspot.com/2007/04/eagle-creek-paddle.html
The embedded map actually uses the dr2ooo web site, so I'm not sure about the long-term stability. Presumably it's not too hard to do this yourself, but I haven't figured that out yet.
Wednesday, April 25, 2007
Custom Google Maps
Google Maps recently added the ability to put points and routes onto maps and save the results. It's really easy to use. You can even attach photos or videos. (Note: you'll need a Google account; if you use Gmail you already have one, and if not, it's free and easy to register.)
For example, here's one of my regular running routes:
Saskatoon Running Route
Of course, it can be used for a lot of other things - check out some of their featured maps.
Friday, April 20, 2007
Mac + printers
Ever since I got my Mac Mini I've been struggling with the printer issue. I have an Epson 2200 hooked up to a Windows machine. It took a fair bit of research to figure out how to connect to it from OS X, but I finally managed it. It is supposed to be easy, but it looks like a lot of people have problems. But I could only use the Gutenprint (formerly Gimp-Print) drivers which don't support all the features of the printer.
I downloaded the latest Mac OS X drivers from Epson, but I couldn't see how to use them. Finally I found out that you can't use USB drivers on a networked printer. This seems like a strange distinction - on Windows I can use the same drivers whether the printer is connected directly or networked. Maybe it works if the printer is shared from another Mac - I don't have two Macs to try it.
I thought a network print server might do the trick, but from what I could find out, I'd still have problems. It looks like Apple's new AirPort Extreme might handle it a bit better, but it still wouldn't let you run the Epson utilities (ink level, cleaning, etc.). And although they claim it's Windows compatible, I wouldn't be surprised if there were issues.
In the end I physically moved the printer and connected it directly to the Mac. Rather than fight with sharing it from the Mac and somehow connecting from Windows, I just went out and bought a new printer for the Windows machine. (I wanted the large format 2200 on the Mac since that's where I plan to print photo enlargements from.) I bought an Epson R260. (Epsons may or may not be the best, but I'm familiar with them.) It amazes me that a printer that has a resolution of several thousand dpi and produces 1.5 picoliter droplets costs only $120! I realize they make their money on the ink, but it's still amazing price/performance relative to a few years ago. Of course, I'd like the newer Epson R2400 to replace the 2200, but that'll have to wait.
It seems strange that Parallels and VMware can virtualize an entire computer, but for some reason OS X printer drivers are tied to hardware. I'm sure there are "good technical reasons" for this, but it seems pretty crappy to me. The Mac seems to lose to Windows on this front.
Friday, April 13, 2007
Three Stages of Design
I was listening to a podcast about software design and they were quoting Steve Jobs. So this is third or fourth hand and probably garbled from the original.
Stage One is where you are new to a domain and it seems simple and you design a simple solution. But the simplicity is really a lack of understanding so your design, while simple, is not very good.
Stage Two is where you see the complexity of a domain and you end up with a complex design. It might handle lots of things but the complexity makes it hard to learn and use.
Stage Three is where you figure out how to make a simple design that still addresses the complexity of the domain. This is the elegant solution. It doesn't have every conceivable feature, but it handles the important stuff for a majority of users. For example, the iPod. Lots of other music players have more features, but the iPod hits that sweet spot balancing simplicity and features. (100 million buyers testify to that)
Back when my company was doing custom software development for a wide variety of domains, a lot of our products were Stage One. A few progressed to Stage Two. Even now that we're focused on one domain, our product is still definitely Stage Two. Suneido, our development tool, has some Stage Three aspects but doesn't really qualify overall.
Stage Three is hard. And there doesn't seem to be any formula for achieving it.
Thursday, April 12, 2007
Friday, March 30, 2007
ETech 2007 Last Day
We started off with a few interesting keynotes. One on Adobe's new Apollo platform - an alternative desktop runtime for web apps (HTML, CSS, JavaScript, Flash). It looks pretty neat, especially the features for running apps when you're offline (not connected to the internet). But is HTML/CSS/JavaScript the best way to write apps? I'm not sure.
Next, Google gave a presentation on their project to add 1.6 MW of solar power at their headquarters. They also talked about other environmentally friendly practices at Google. Again, it seems Google is trying hard to not be evil despite their huge size.
After the break I went to a session by Andy Kessler on how Moore's law will soon be "invading" medicine - leading to better and cheaper health care. He was an entertaining speaker.
James Duncan's session on JavaScript and Zimki was quite interesting. He talked about some features of JavaScript that I wasn't aware of. Zimki is a JavaScript server and web app framework with some novel features. Fotango offers paid hosting for Zimki, but it will also be released as open source in the next few months.
At lunch I discovered a new coffee shop near the hotel - Brickyard. Although Starbucks is a good default, I like to find local shops especially if they have better coffee! It didn't hurt that it was another beautiful day and I could sit outside in their courtyard and enjoy the sun.
The sessions were thinning out by the afternoon. I went to one on why you should try to design your web app so it could be run as a text adventure (sounds crazy, but it actually made some sense). I hadn't recognized the presenter's name, but he turned out to be the guy who had presented a Rails game (Unroll) at OSCon. He's an interesting character, so I was glad I'd gone to this session, although it ended up being quite short.
My last session was by Forrest Higgs on building your own 3D printer. Commercial 3D printers still cost tens of thousands of dollars to buy and require expensive consumables. You can now build your own for a few hundred dollars. Forrest has his own design, but he also talked about the RepRap project. The goal is not just to make an open source 3D printer design, but one that can replicate (most of) itself. (They use microprocessors, which obviously can't be manufactured by a home machine yet!) He also talked about the implications of widespread grass roots manufacturing capabilities. Thought provoking.
And that was it for ETech 2007. Although I heard a lot of grumbling that it wasn't as good as previous years, I still think it was worthwhile. Lots of new ideas that will help fuel my brain.
I rounded off the day with supper at The Fish Market. I couldn't be bothered to wait for a table in the restaurant (long lineup) so I grabbed a table in the bar with a great view of the water. There was a limited menu in the bar but the fish and chips was the best I'd had in a long time, the waitress was cute and cheerful, and the sunset was beautiful - what more could you ask for!
Wednesday, March 28, 2007
Amazon S3 with cURL?
As I've talked about in previous posts, I've been searching for a good way to access Amazon S3 from Suneido (to use for backing up our customer's databases).
The SmugMug presentation recommended using cURL. We already use cURL for a variety of tasks (such as FTP) so we'd be happy to use this option. But S3 requires requests to be signed with HMAC-SHA1, which I didn't think cURL could do on its own. Maybe you can calculate the signature separately and use cURL just for the transfer. I'll have to look into this.
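A rough sketch of what that might look like: S3's REST API authenticates by HMAC-SHA1 signing a "string to sign" (verb, Content-MD5, Content-Type, date, resource) with your secret key, base64 encoded, and passing the result in an Authorization header. So openssl can do the signing and cURL just does the transfer. The bucket, file name, and credentials below are placeholders, not real values.

```shell
#!/bin/sh
# Placeholders - substitute your real S3 credentials, bucket, and file
ACCESS_KEY="AKIAEXAMPLE"
SECRET_KEY="secretKeyExample"
BUCKET="mybucket"
FILE="backup.db"

DATE=$(date -u +"%a, %d %b %Y %H:%M:%S +0000")
RESOURCE="/$BUCKET/$FILE"

# String to sign: verb, Content-MD5, Content-Type (both empty here),
# date, then the canonicalized resource
STRING_TO_SIGN=$(printf "PUT\n\n\n%s\n%s" "$DATE" "$RESOURCE")

# HMAC-SHA1 with the secret key, base64 encoded - this is the signature
SIGNATURE=$(printf "%s" "$STRING_TO_SIGN" \
  | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | base64)
echo "$SIGNATURE"

# Then cURL only has to do the transfer (commented out here since the
# credentials above are fake):
# curl -T "$FILE" -H "Date: $DATE" \
#      -H "Authorization: AWS $ACCESS_KEY:$SIGNATURE" \
#      "https://s3.amazonaws.com$RESOURCE"
```

If this works, it would slot nicely into our existing cURL-based tooling: Suneido would only need to build the string to sign and shell out, rather than implement the whole transfer itself.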