Tuesday, March 29, 2016

Yak Shaving

"Yak shaving" is a programmer's slang term for the distance between a task's start and completion and the tangential tasks between you and the solution. If you ever wanted to mail a letter, but couldn't find a stamp, and had to drive your car to get the stamp, but also needed to refill the tank with gas, which then let you get to the post office where you could buy a stamp to mail your letter—then you've done some yak shaving. - Zed Shaw
Yak shaving is one of my least favorite and most frustrating parts of software development. I have vague memories of enjoying the chase when I was younger, but my tolerance for it seems to decrease the older I get. Nowadays it just pisses me off.

I haven't worked on suneido.js for a while but I got a pull request from another programmer who's been helping me with it. I merged the pull request without any problems. Then I went to try out the changes.

I get a cryptic JavaScript error. Eventually I figure out this is a module problem. I see that the module setting in my tsconfig.json is commonjs, which is what's needed to run the tests in Node.js, but to run in the browser I need amd. I change it. I love JavaScript modules.

Now I need to recompile my TypeScript. I happen to be using Visual Studio Code, which follows the latest trend in editors to not have any menu options that are discoverable. Instead you need to type commands. I eventually find the keyboard shortcut Ctrl+Shift+B. (After some confusion because I was reading the web site on OS X and it auto-magically only shows you the keyboard shortcuts for the platform you're on.)

Nothing happens. I try tsc (the TypeScript compiler) from the command line but it's not found. Maybe it's just not on my path? I figure out how to see the output window within Code. It's not finding tsc either. (Although it hadn't shown any sign of error until I opened the output window.)

Do I have tsc installed? I'm not sure. Doesn't Code come with TypeScript support? Yes, but does that "support" include the compiler? I find that exact question on StackExchange but it's old and there seems to be some debate over the answer.

I try reinstalling Code. It adds to my path but still doesn't find tsc. (The ever growing path is another pet peeve. And yes, the older I get the longer my list of pet peeves is.)

What the heck was I using last time I worked on this? I'm pretty sure I never deliberately uninstalled TypeScript. I search my computer but all I find are some old versions of TypeScript from Visual Studio (not Code).

I get the bright idea to check my work computer (I'm on my home machine.) I don't find anything there either. Then I remember it's a brand new computer that I set up from scratch. So much for that idea.

I give up and decide to install TypeScript. There are two methods given, via npm (Node.js) or Visual Studio. I'm pretty sure I used Node.js before so that seems to be the way to go.

I try to run npm but it's not found either. Did I lose Node.js as well as TypeScript? No, I find it on disk. I see there's a nodevars.bat file that sets the path. I try that and now I have npm. I bask in the warm glow of accomplishment for a few seconds until I remind myself that running npm was not what I was trying to accomplish.

I run npm install -g typescript. And now I have tsc. But my path didn't change. Maybe I had tsc all along and just needed node on my path? Hard to tell now.

Then I find my tsconfig.json is wrong - it lists .ts files that don't exist because they are plain .js files. Again, how did that work last time around? I fix that and it seems to compile.

And finally, miraculously (but several hours later) my suneido.js playground works!

Wait a minute, the pull request was improvements to the CodeMirror Suneido mode, it had nothing to do with the playground. No problem, I'll just start over ...

Sunday, February 07, 2016

New PC Software

Once I had my new PC running, the next step was to install software. I prefer to set up new computers from scratch rather than use any kind of migration tool because I don't want to bring over a lot of old junk. It takes a little more time but I think it's worth it. And it's always interesting to review what software I'm actually using day to day, and how that changes over time.

My first install is usually Chrome. Once I sign in to my Google account and sync my browser settings I have all my bookmarks and my Gmail ready to go. I use LastPass for my passwords. I also install Firefox but it's not my day to day browser. I used to also install Safari for Windows but Apple is no longer keeping it updated.

Next is Dropbox, which brings down a bunch of my commonly used files. Then Evernote, which gives me access to my notes.

After that it's mostly development tools - Visual Studio Community (the free edition) for C++, MinGW C++, the Java JDK, and Eclipse. For the last few Eclipse installs I've been importing my add-ons from the previous install, but for some reason I couldn't get that to work this time because I couldn't find the right directory to import from. The directories are different since I've been using the Oomph installer. I'm sure I could figure it out, but I only use three add-ons so it was easier just to install them from the Eclipse Marketplace. (The three are Infinitest, EclEmma coverage, and Bytecode Outline.)

I use GitHub Desktop, although both Visual Studio and Eclipse provide their own Git support. (And there's also the command line, of course.)

Although I'm not actively programming in Go these days, I like to have it and LiteIDE installed.

For editors I use SciTE, because it's based on the same editing component (Scintilla) that we use in Suneido, and Visual Studio Code for JavaScript and TypeScript. (Visual Studio Code is not related to Visual Studio. It's a cross-platform application written in TypeScript and packaged with Electron.)

This is all on Windows, but other than the C++ tools I have pretty much exactly the same set of software on my Mac.

Thursday, February 04, 2016

New PC


In many ways PC performance has leveled off. There are minor improvements in CPU and memory speed, but nothing big. However, there has been a significant improvement in performance with the Skylake platform supporting SSDs connected via PCI Express.

Based on a little research, here were my goals:
  • fast Skylake CPU
  • 32 GB DDR4 RAM
  • 512 GB M.2 SSD
  • small case (no external cards or drives)
  • 4K IPS monitor
And here's what I ended up getting:
  • Asus Z170I Pro Gaming mini ITX
  • Intel Core i7-6700K 4.00 GHz
  • Corsair Vengeance LPX 32GB (2x16GB) DDR4 DRAM 2666MHz (PC4-21300)
  • Samsung 950 PRO 512GB PCIe NVMe M.2 internal SSD
  • Fractal Design Node 202 case
  • ASUS PB279Q 27" 4K UHD 3840x2160 IPS
I don't do any gaming, but the Asus "Pro Gaming" board was the only mini ITX motherboard I could find that fit my requirements.

The case was the smallest I could find that would fit this stuff. The frustrating part is that it could be half the size. The empty half of the case is for cards or external drives, but I didn't want either of those. It's a little hard to tell from the picture, but the motherboard is only 7" square, not much bigger than the fan. The power supply is almost as big as the motherboard.

And here's a comparison of my new SSD (top) to the old one (bottom). Higher numbers are better.



The big advantage of SSDs over hard disks is seek time for random IO, but the biggest gains here were for sequential IO. Still, there were respectable improvements across the board. Of course, how that translates into actual overall performance is another question.

I try not to buy new machines too often. At least I know my old one, which is still working fine, will be put to good use by someone else in the office.

I'm not a hardware expert; here's where I got some advice:
Building a PC, Part VIII
Our Brave New World of 4K Displays

Thursday, January 21, 2016

Mac Remote Desktop Resolution

At home I have a Retina iMac with a 27" 5120 x 2880 display. At work I have a Windows machine with a 27" 2560 x 1440 display set at 125% DPI. I use Microsoft Remote Desktop (available through the Apple app store) to access my work machine from home.

I'm not sure how it was working before, but after I upgraded my work machine to Windows 10, everything got smaller. It comes up looking like 100% (although that's actually 200% on the Mac).

Annoyingly, when you are connected through RDP you aren't allowed to change the DPI.

I looked at the settings on my RDP connection but the highest resolution I could choose (other than "native") was 1920 x 1080 which was close, but a little too big.

Poking around, I found that in the Preferences of Microsoft Remote Desktop you can add your own resolutions (by clicking on the '+' at the bottom left). I added 2048 x 1152 (2560 x 1440 divided by 1.25).


Then I changed the settings on the connection to use that, and it's now back to my usual size.


The screen quality with RDP doing the scaling does not seem as good as when Windows is doing the scaling, but at least the size is the same.

From what I saw searching the web, I'm guessing there might be a way to adjust this on the Terminal Server side on my Windows machine, but I didn't find any simple instructions.

If anyone knows a better way to handle this, let me know.

Sunday, January 17, 2016

Native UX or Conway's Law?

organizations which design systems ... are constrained to produce designs
which are copies of the communication structures of these organizations
— M. Conway
A pet peeve of mine is applications that are available on different platforms, but are substantially different on each platform. I realize that I'm probably unusual in working roughly equally on Windows and Mac. But even if most people work on a single platform, why make the application unnecessarily different? Isn't that just more work for support and documentation?

I recently listened to a podcast where an employee mentioned that her focus was on making sure their Windows and Mac versions weren't "just" identical, but matched the native user experience. That sounds like an admirable goal. The problem is, their versions have differences that have nothing to do with native UX.

Obviously standards vary between platforms. Do you call it Settings or Preferences? And it makes sense to match the native conventions. But in general those are minor things.

But changing the layout of the screen? Or the way things are displayed? Or the text on buttons? There really aren't any "standards" for those things. And therefore "native" doesn't justify making them different.

I suspect that "matching the native UX" is often a justification for Conway's law. Different teams are developing the different versions and they do things different ways. Coordinating to make them consistent would take more effort so it doesn't happen, or not completely.

My company contracted out the development of our mobile app. The company we contracted it to believes in "native" (i.e. not portable) apps. They also have a process where different teams develop different versions, so they can be platform experts. The end result: our app is unnecessarily different on different platforms. And any change takes twice as much work. Not to mention having to fix bugs in multiple implementations.

It's sad that after all these years we still don't have a good story for developing portable applications.

Except that's not quite true. We have the web and browsers. And tools like Cordova and Electron that leverage them. And contrary to popular belief, users don't seem to have a lot of problems using web apps that don't have native UX.

Saturday, January 02, 2016

Suneido RackServer Middleware

A few years ago I wrote an HTTP server for Suneido based on the Ruby Rack and Python WSGI designs. We're using it for all our new web servers, although we're still using the old HttpServer in some places.

One of the advantages of Rack-style servers is that they make it easy to compose your web server from independently written middleware. Unfortunately, when I wrote RackServer I didn't actually write any middleware, and my programmers, being unfamiliar with Rack, wrote their RackServers without any middleware. As the saying goes: small pieces, loosely joined. The Rack code we had was in small pieces, but it wasn't loosely joined. The pieces called each other directly, making it difficult, if not impossible, to reuse them separately.

Another advantage of the Rack design is that both the apps and the middleware are easy to test since they just take an environment and return a result. They don't directly do any socket connections.

To review, a Rack app is a function (or in Suneido, anything callable e.g. a class with a CallClass or instance with a Call) that takes an environment object and returns an object with three members - the response code, extra response headers, and the body of the response. (If the result is just a string, the response code is assumed to be 200.)

So a simple RackServer would be:
app = function () { return "hello world" }
RackServer(app: app)
Middleware is similar to an app, but it also needs to know the app (or middleware) that it is "wrapping". Rack passes that as an argument to the constructor.

The simplest "do nothing" middleware is:
mw = class
    {
    New(.app)
        { }
    Call(env)
        {
        return (.app)(env: env)
        }
    }
(Where .app is a shortcut for accepting an app argument and then storing it in an instance variable as with .app = app)

Notice we pass env as a named argument (as RackServer does) so simple apps (like the hello world above) are not required to accept it.

And you could use it with the previous app via:
wrapped_app = mw(app)
RackServer(app: wrapped_app)
A middleware component can do things before or after the app that it wraps. It can alter the request environment before passing it to the app, or it can alter the response returned by the app. It also has the option of handling the request itself and not calling the app that it wraps at all.
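
For example, a logging middleware in the same style could time the wrapped app and print how long the request took. This is just a rough sketch, not the actual RackLog mentioned below, and it assumes Date's MinusSeconds method for the elapsed time:
mw_log = class
    {
    New(.app)
        { }
    Call(env)
        {
        start = Date()
        result = (.app)(env: env)  // call the wrapped app (or middleware)
        Print("request took", Date().MinusSeconds(start), "seconds")  // assuming MinusSeconds gives elapsed seconds
        return result
        }
    }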

Uses of middleware include debugging, logging, caching, authentication, setting a default content length or content type, handling HEAD using GET, and more. A default Rails app uses about 20 Rack middleware components.

It's common to use a number of middleware components chained together. Wrapping them manually is a little awkward:
wrapped_app = mw1(mw2(mw3(mw4(app))))
so I added a "with" argument to RackServer so you could pass a list of middleware:
RackServer(:app, with: [mw1, mw2, mw3, mw4])
(where :app is shorthand for app: app)

Inside RackServer it simply does:
app = Compose(@with)(app)
using the Compose function I wrote recently. In this case what we are composing are the constructors. (In Suneido, calling a class defaults to constructing an instance.)

A router is another kind of middleware. Most middleware is "chained" one to the next, whereas a router looks at the request and calls one of a number of apps. You could do this by chaining, but it's cleaner and more efficient to do it in one step.
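
A bare-bones router in the middleware style might look something like this sketch. It assumes the environment object exposes the request path as env.path and that routes is a list of (pattern, app) pairs; the actual RackRouter listed below may well differ:
mw_router = class
    {
    New(.routes, .default_app)
        { }
    Call(env)
        {
        for route in .routes
            if env.path =~ route[0]  // regular expression match on the path
                return (route[1])(env: env)
        return (.default_app)(env: env)  // nothing matched
        }
    }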

So far I've only written a few simple middleware components:
  • RackLog - logs the request along with the duration of the processing
  • RackDebug - Prints any exceptions along with a stack trace
  • RackContentType - Supplies a default content type based on path extensions using MimeTypes
  • RackRouter - a simple router that matches the path against regular expressions to determine which app to call
But with those examples it should be easy for people to see how to compose their web server from middleware.

Thursday, December 31, 2015

Suneido Functional Compose

I started on one project, found it required another, which then led to code like:

a(b(c(d(...))))

which seemed ugly. I knew that functional programming has a higher order "compose" to handle this more cleanly. (Higher order because it takes functions as arguments and returns another function.)

Suneido isn't specifically a functional language, but it has a bunch of functional methods and functions. I hadn't written Compose yet but it turned out to be quite easy.

function (@fns)
    {
    return {|@args|
        result = (fns[0])(@args)
        for fn in fns[1..]
            result = fn(result)
        result
        }
    }

If you're not familiar with Suneido code, the '@' is used to accept multiple arguments and to expand them again. The {|...| ... } notation is Suneido's way of writing a "block", i.e. a lambda or closure. (Suneido's use of the term "block" and the syntax come from Smalltalk, one of its roots way back when, pre 2000.) Because blocks are used for control flow in Suneido, "return" returns from the containing function. So to return a result from a block, we take advantage of the fact that Suneido by default returns the value of the last statement.

Although in some languages the arguments to compose are listed "outside-in", I chose to have them "inside-out", so Compose(a, b, c) returns a function that does c(b(a(...))), in the same way you would write a *nix pipeline a | b | c, i.e. in the order they will be applied.
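
For example, with a couple of trivial functions (just to illustrate the ordering):
double = function (x) { return x * 2 }
inc = function (x) { return x + 1 }
f = Compose(double, inc)
f(5)  // inc(double(5)) => 11
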

I documented it, like a good programmer, except when I went to add the "see also" section I found a lot of the other functional stuff had never been documented. So I spent a bunch more time documenting some of the that. In the process I almost got sucked into improving the code, but I resisted pushing yet another task onto the stack!