Thursday, December 01, 2016

Mobile Git

I've used the Textastic source code editor iOS app on my phone and tablet for quite a while. I don't do a lot of heavy editing with it, but it's handy when I'm thinking about code and want to look at how something is implemented. (Textastic is also available on the Mac, but I've never tried that version. I have plenty of editors there, and I prefer ones that are also available on Windows.)

One of the limitations was that it wouldn't pull directly from Git. It does pull from Dropbox though, so I ended up keeping a copy of my source code there. But that had a few drawbacks. First, I had to remember to update the Dropbox copy of the code. Second, to update the Textastic version I had to download the entire source, which could be slow depending on the connection.

Recently I discovered the Working Copy Git iOS app which lets you clone Git repos to your phone or tablet. You can even commit changes, although I generally wouldn't want to do that when I couldn't test the changes. It would be ok for documentation. It has its own editor, but even better it uses new iOS features to let you access the files from other editors, like Textastic.

Now, not only do I have easily updated offline access to my source code, I have offline access to my Git history.

One minor disappointment was that neither of these apps has syntax highlighting for TypeScript. I was a bit surprised because TypeScript seems as popular as some of the other languages they include. Textastic does support TextMate bundles so in theory you could add it that way, but it didn't sound easy so I haven't bothered yet. It would be easier (for me!) if they just included it out of the box.

Unfortunately for Android users, both of these apps appear to be iOS only. If you know of good Android alternatives, let us know in the comments.

Monday, September 19, 2016

Progress on suneido.js

A project like suneido.js goes through distinct stages.

The first stage is just thinking and researching the platform. (In this case JavaScript, ES6/2015, TypeScript, node.js etc.) I've been doing that in the background for quite a while.

But eventually you have to start writing some actual code, and that's what I've been doing lately. This can be a frustrating stage because it's three steps forward, two steps back (and that’s on a good day). You pick an area to implement, decide on an approach, and write some code. That often goes smoothly. But then you start on the next area and you realize that the approach you picked for the first area won't work for the next. So you come up with an approach that works for both areas and go back and rewrite the existing code. Rinse and repeat.

Progress seems painfully slow during this stage. Important advances are being made, it’s just that it’s in terms of design decisions, not lines of code. (On the positive side, the Suneido language is, in many ways, quite similar to JavaScript, so the impedance mismatch is much lower than with C++ or Java. Meaning I can map many features quite directly from one to the other.)

There's also learning involved since I'm on a new platform, with a new language and tools. I went through the same thing with jSuneido since I'd never worked in Java before I started that project. It's one thing to read books about languages and platforms. It's a whole 'nother story using them to implement a large project. (I have to say, ES6/2015 features have been a big plus on this project, as has TypeScript.)

Eventually the approach firms up and all of a sudden you can make rapid progress in the code. It almost becomes mechanical, simply translating from jSuneido or cSuneido. Of course, issues still arise as you encounter different corners and edge cases. But mostly they are easily addressed.

It reminds me a bit of search strategies like simulated annealing where you start by making large changes all over, and as you get closer to a solution, you “cool down” and the changes are smaller and smaller as you approach a solution. Of course, it doesn’t mean you’ve found the “best” solution. But hopefully the initial larger explorative jumps covered enough of the search space that you’ve found something reasonably close to optimal.

I’m always surprised by just how many dead ends there are. Of course, when you’re programming the possibilities are infinite, so a lot of them have to be wrong. On the other hand, there is supposed to be intelligent design at work here, which you'd think would avoid so many dead ends. But many things are hard to evaluate before you actually try them. (Which brings to mind Wolfram’s idea of computational irreducibility.)

The reward for this plodding is that seemingly all of a sudden, I have a system that can actually handle real problems, not just limited toy examples. There is still lots to do, but suneido.js can now successfully run some of Suneido's standard library tests. It takes a surprising amount of infrastructure to reach this stage. Even “simple” tests tend to use a lot of language features. This seems like a big leap forward, but I know from implementing jSuneido that it’s really just another small step in a long process.

It's a bit hard to follow the project I'm afraid. The TypeScript code is on GitHub, but the actual transpiler is written in Suneido code and lives on our private version control system. And running the system requires jSuneido (which is used to parse Suneido code to an AST) and cSuneido (for the IDE), plus node.js and TypeScript and other bits and pieces. I really should try to simplify or at least package it so it's easier for other people to access.

I'm glad to have reached this (positive) point, since I’m heading off travelling again, and probably won’t get too much done on this for a while.

PS. Hearing about some paragliding deaths recently I thought (not for the first time), "what if I get killed paragliding?". And my first thought was, "but who would finish suneido.js?". Ah, the mind of a true geek :-)

Friday, September 02, 2016

Old School

As I progress on suneido.js I'm accumulating a number of miscellaneous commands I need to run regularly. So far I've avoided any kind of build tool. I started writing shell scripts (OS X) and batch files (Windows), which worked but was ugly, and led to Git problems with executable permissions.

So I started looking at build tools. The big one these days seems to be Gulp but so far I don't need its fancy streaming and plugins. So I searched for something simpler, and came across a comment that said something to the effect of "if you're from those days, you could use make". They likely meant it in a derogatory way, but it made sense to me. I already have make on Windows and OS X and know how to use it. And it's dead simple to use it to simply run a few commands.

So now I can run "make bundle" or "make test". It's perfect for what I need. The hip kids can shake their heads all they want.
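To give the flavour, a Makefile for this kind of thing can be as simple as the sketch below. (The target names, compiler invocations, and file paths here are illustrative, not my actual ones.)

```makefile
# run the TypeScript compiler, then bundle for the browser
bundle:
	tsc && browserify built/main.js -o built/bundle.js

# compile, then run the tests under Node.js
test:
	tsc && node built/run_tests.js

# neither target produces a file named "bundle" or "test",
# so tell make they are always out of date
.PHONY: bundle test
```

No plugins, no configuration DSL, just commands. Which is all I need right now.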

Thursday, August 25, 2016

Modules Again

Things had been running smoothly in suneido.js with the UMD approach. Until today when I went to run my code in the browser (I hadn’t done this for a little while because I’d been working on transpiling) and I got errors saying one of my modules couldn’t be found. I realized that I had started using a Node.js module in my code, which of course couldn’t be found in the browser. Crap!

I searched the web and most recommendations were to use browserify. It sounded straightforward, and it made sense to combine all my JavaScript into one file. So I installed browserify and switched my tsconfig.json back to commonjs.

But it wouldn’t work. It kept telling me it couldn’t find one of my modules. But as far as I could tell there was nothing different about that particular module or where it was imported.

I finally remembered that the flashy dependency graph feature of Alm had shown I had a circular dependency. Was that a problem with browserify? Searching the web didn’t find anything definitive, but there were some references to it being problematic.

It wasn’t hard to fix the dependency (and I’d been meaning to do it eventually anyway).

But it still didn’t work! Then I realized that the code shown in the Chrome developer tools debugger wasn’t my fixed version. Caching? A little more digging and I found you can right click on the refresh button and pick “Hard Refresh and Reset Caches”. Finally it worked! (Although I wondered if any of my other approaches would have worked if I’d done this???)

It seems like a reasonable solution, other than taking a dependency on yet another tool and adding another build step. (I do have to admit that npm makes it easy to install these tools.)

Friday, August 05, 2016

Alm TypeScript Editor


When I was reading TypeScript Deep Dive (recommended) I noticed a mention of an "alm" TypeScript editor. I'd never heard of it so I figured I'd better check it out.

The developer turned out to be the same as the author of the book - Basarat Ali Syed aka "bas". He also turned out to be the original developer of the atom-typescript plugin that I've been using.

Alm is a new project but it's moving quickly and is already full featured. And there's actually documentation :-) It has features like go to definition, find references, and even rename refactor. It also has some flashier features like dependency graphs and AST views. It has an outline side bar (alm calls it a Semantic View) which is a feature I really like in Eclipse and miss in Visual Studio. (We also have it in Suneido.)

The current "packaging" is a little quirky - you start it from a command prompt and the actually UI runs in Chrome. It would be nice to see it packaged with Electron, like Atom, but that's not critical.

You can use Chrome's Save To Desktop to open it as a "bare" window without the browser chrome, but you still have to start Alm first. No doubt there's a way to automate that. Or you can use "alm -o" which will open it in a tab in Chrome, and then use something like the Chrome extension Open as Popup.

I was interested to see that it was using the CodeMirror JavaScript code editor component which is what we have been using in the suneido.js project. But recently it changed to use the Monaco editor which was written for Visual Studio Code and recently released as a separate open source component. That makes sense because Monaco is written in TypeScript, and TypeScript was the original target language for Visual Studio Code.

Alm leverages the actual TypeScript compiler for source code analysis, which seems like the ideal approach.

I've only used it for a few days and I'm still learning my way around, but it looks like it will be my preferred editor for TypeScript.

Thursday, August 04, 2016

TypeScript Modules

TypeScript will translate ES6 (ES2015) modules into a variety of formats e.g. CommonJS for Node.js or AMD for require.js in browsers.

I found it a pain to have to switch my tsconfig.json back and forth to run my tests in Node.js or to run the actual code in the browser. Inevitably I'd forget to switch and/or recompile and it wouldn't work. I had to check one or the other version of tsconfig.json into Git, but that would confuse anyone who downloaded and tried to run the code.

I could probably set up a build process to generate both versions but I never got around to figuring that out.

I knew some code was written in a way that would work with both CommonJS and AMD and wondered why TypeScript didn't generate that format. Guess what, it does :-) TypeScript calls this output "UMD" or "isomorphic". (See TypeScript Modules under Code Generation)

    "compilerOptions": {
        "module": "umd",

The generated code is slightly larger since it basically contains both versions, but that's not a big deal.

Eventually (I hope) software will support ES6/2015 modules natively and you won't have to translate at all.

For background, see What is AMD, CommonJS, and UMD?

Wednesday, August 03, 2016

TypeScript const enums

In my previous post I mentioned that if you make an enum const then you don't get the run time bidirectional mapping between numbers and names. It turns out that's not quite true.

If you turn on the preserveConstEnums compiler option (set it to true) then TypeScript still emits the mapping object, even though it still in-lines the enum values.

That seems like the best of both worlds. But I couldn't get it to work. If you try to use the mapping, you get:
    import { Tokens } from "./tokens"
    ...
    Tokens[token]

=> A const enum member can only be accessed using a string literal.
I searched online and found some suggestions to type assert to "any", but it didn't work.
    import { Tokens } from "./tokens"
    ...
    (Tokens as any)[token]

=> 'const' enums can only be used in property or index access expressions or the right hand side of an import declaration or export assignment.
So I gave up and didn't use const.

Then I came across a slightly different type assertion that worked!
    import * as tokens from "./tokens"
    ...
    (tokens as any).Tokens[token]

As the saying goes: "All problems in computer science can be solved by another level of indirection."

Note: As is recommended, I'm using the newer "as" form of type assertions instead of the older <type> form.

Thursday, July 14, 2016

Lexer in TypeScript

One of the standard things I do when I'm investigating a new language is to implement Suneido's lexical scanner. Not because this is necessarily the best test of a language. It's more that I'm usually evaluating the language in terms of how it would work to implement Suneido. I realize that's a very biased view :-)

I wasn't planning to do this with TypeScript / JavaScript because suneido.js isn't intended to be an "implementation" of Suneido. It's a transpiler that runs on jSuneido (using the jSuneido lexer and parser) that translates Suneido code to JavaScript. The only manually written TypeScript / JavaScript code is the runtime support library. (And that's partly a bootstrapping process. Once the transpiler is functional much of the support library could be written in Suneido code.)

But as I implement the runtime support library, obviously I need tests. And I'm getting tired of writing the same tests for each version of Suneido. (Or worse, different tests for each version.) That's why I started developing "portable tests" some time ago. So I decided I should implement the portable test framework in TypeScript. At which point I remembered that this required a Suneido lexical scanner.

So I decided to write yet another version. It took about a day, which is about what I expected. And some of that time was still learning TypeScript and JavaScript. I based it mostly on the C# version. The Go version is more recent but Go has more language differences.

Overall, I was pretty happy with how it went. I find the repetition of "this." a little ugly. In Suneido you can abbreviate this.foo as just .foo, which is a lot more compact and less "noise" in the code, while still making it clear it's an instance variable.

I find JavaScript's equality testing somewhat baroque. The recommendation is to use === most of the time, but that won't always work for strings (because of primitive strings and String objects). And then there are the quirks of -0 versus +0, and NaN.

ES6 arrow functions and the functional methods like map are nice. And spread / rest (...) and for-of are good. I only specified types on function parameters and returns, but this worked well and gave me very little grief.

I was pleasantly surprised to find that TypeScript enums automatically create a reverse mapping from number back to name. Although the gotcha is that if you make them const you don't get this because they are inlined. Using a simple numeric enum doesn't let me attach extra information for parsing, but I'm not planning to write a parser so that's not a problem. The other issue is debugging, where all you see is a number.
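To illustrate the reverse mapping (the enum here is a made-up example, not my actual token list):

```typescript
// A plain numeric enum emits a runtime object that maps both ways:
enum Token { Eof, Number, String, Identifier }

console.log(Token.Number);        // 1
console.log(Token[Token.Number]); // "Number" - name lookup, handy for debugging

// With `const enum Token {...}` the values are inlined at compile time,
// no runtime object is emitted, and Token[Token.Number] is a compile error
// (unless the preserveConstEnums option is on).
```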

Removing const from my token enum exposed another issue. I was relying on Atom's auto-compile but it only compiles the file you are editing, not the other files that may depend on it. So I went back to running tsc -w in a separate terminal window. (Open question - does anyone know why tsc lists some of my files as relative paths and some as absolute? It's consistent about which files, except that gradually more are switching to absolute. It's not causing any problems, I just can't figure out why.)

Although it can mean shorter code, I'm not a fan of "truthy" and "falsey" values. It was a source of problems 30 years ago in C. Now I got caught by it with a function that returned a number or undefined. I expected undefined to be "falsey" but I forgot that zero is also "falsey". With Suneido I took a stricter approach that things like if and while only accept true or false and throw an exception for any other type of value.
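Here's a sketch of the trap (find is a hypothetical function, not the actual code that bit me):

```typescript
// A function returning a number or undefined, like the one that caught me:
function find(s: string, ch: string): number | undefined {
    const i = s.indexOf(ch);
    return i >= 0 ? i : undefined;
}

if (!find("abc", "a")) {
    console.log("treated as not found"); // reached! 0 is falsy, same as undefined
}
if (find("abc", "a") !== undefined) {
    console.log("found"); // the explicit test does the right thing
}
```

A match at position 0 is perfectly valid, but truthiness lumps it in with undefined.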

Now that I have a lexical scanner I can implement the portable test framework, which also shouldn't take me too long.

Versions, in order of age. cSuneido and jSuneido are production code and more full featured than the other more experimental versions.

Wednesday, July 13, 2016

Going in Circles

Suneido has a bunch of built-in functions equivalent to its operators, e.g. Add for '+'. These are useful, for example, in functional style programming, e.g. list.Reduce(Add)

I noticed that we didn't have a compare function, i.e. one that would return -1 for less than, 0 for equal, or +1 for greater than. This can be useful in some cases (e.g. in a switch) and avoids needing two comparisons. (Note: I didn't have a need for it, just noticed we didn't have it.)
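A three-way compare is trivial to sketch (this is just an illustration, not Suneido's actual implementation):

```typescript
// Three-way compare: -1 for less than, 0 for equal, +1 for greater than.
function cmp(x: number, y: number): -1 | 0 | 1 {
    return x < y ? -1 : x > y ? 1 : 0;
}

console.log(cmp(1, 2), cmp(5, 5), cmp(9, 3)); // -1 0 1
```

The single result is what makes it handy in a switch, instead of testing less-than and equals separately.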

A few days later I decided to implement it. It was simplest in jSuneido since there was already an internal compare function that I just needed to expose. Whereas cSuneido has the C++ / STL style of using operator < and operator == to implement all the other comparisons. So it needed two comparisons.

Then I started to question why these functions needed to be built-in. Why not just write them in Suneido code in the standard library. The difference in performance would be minor. So I ripped them out of cSuneido and jSuneido and defined them in the standard library. I also updated the documentation.

Then I started to wonder how often these built-in functions are actually used. As you can probably guess, almost never. On top of that I found several places where programmers got confused and thought they needed to use these functions instead of just the operators. (I fixed these.)

So I ended up ripping out the standard library functions I had just written, and the documentation I'd just updated.

In the few places where the functions were used it was easy to replace e.g. Add with function (x, y) { x + y }

I ended up keeping a definition for greater-than since it was used in several places to sort in reverse order (list.Sort! takes an optional comparison function, defaulting to less-than).

So I started with adding a function, and I ended with deleting a bunch instead.

The moral of the story is to add features when you need them. (YAGNI) That certainly applied to adding the compare function, but it also applies to creating all the other ones in the first place.

And when I ported them all to jSuneido it never occurred to me to question whether they should just be removed.

Another factor is "sunk costs". After spending the time to implement the functions, first in cSuneido and jSuneido and then in the standard library, it felt bad to just delete that work. But you shouldn't let that affect decision making.

Part of my thinking was along the lines of, if we have A, B, and D, then we should have C. But that only makes sense if you had a real need for A, B, and D.

Finally, I think the reason I started this whole process was that it was psychologically attractive to do something that I could get a quick and easy sense of accomplishment from. (As opposed to large projects where it's hard to even see progress.) There's nothing wrong with that, but the choice of what to do should still be rational. i.e. pick an easy task from the ones that make sense to do.

Apart from feeling stupid both for the past and present wasted effort, in the end I'm happy to remove some code and move (if only slightly) towards simpler.

Sunday, July 10, 2016

Programming in Typescript

I've been back working on suneido.js recently, which means programming in JavaScript or, my preference, TypeScript. It's been a while so I had to get back up to speed on what tools to use. In case it's useful to anyone, here's what I've settled on (for now).
  • install Node.js (current 6.3 rather than stable since I'm not in production)
  • use npm to install TypeScript (currently at 1.8)
I started using Visual Studio Code, which I quite like, but a few things bugged me so I gave Atom a try and found I liked the editing experience better, especially after installing a few extra packages. One of the things I like about Atom is that it compiles TypeScript automatically when you save a file, whereas (AFAIK) with Code you need to run tsc -w in a separate terminal window.

Atom packages:
  • atom-typescript
  • atom-beautify
  • highlight-line
  • highlight-selected
  • simple-drag-drop-text
  • Sublime-Style-Column-Select
The last three should be part of Atom core if you ask me, but as long as they're available that's fine.

Atom setting:
  • tree-view - hide VCS ignored files
  • autosave enabled
  • show invisibles
  • tab size: 4 (personal preference)
One thing I am still using Code for is debugging. Node's stack traces show the JavaScript line numbers, which is a pain to relate back to the TypeScript code. I found some ways around this, but none of them looked simple. Code's debugger works well.

Previously I had also used WebStorm from JetBrains which has support for TypeScript and debugging. I may use it again, although I like the ease of using simple tools like Atom and Code.

Interestingly, Visual Studio Code is built on top of Electron, which was originally developed for Atom. And recently, Monaco, the editor component of Visual Studio Code has been open sourced.

I'm targeting ES6/2015 partly as future-proofing, and partly just because it's a better language. Most of ES6 is pretty well supported in Node and browsers, and there are tools like Babel to run it under ES5. Exploring ES6 is a useful resource. Strangely, none of the browsers support ES6 modules natively. You can still use them in the TypeScript code but they need to be compiled to AMD for the browser, and CommonJS for Node (e.g. testing).

Although JavaScript has some quirks that I don't care for, overall I quite like it. And with TypeScript for type checking, I don't mind programming in it at all. One thing I like about TypeScript, as opposed to things like CoffeeScript, is that it doesn't obscure the JavaScript.

Thankfully getting going again has gone smoother this time than my last bout of Yak Shaving.

You can follow progress on suneido.js on GitHub. There's still a long way to go!

Tuesday, March 29, 2016

Yak Shaving

"Yak shaving" is a programmer's slang term for the distance between a task's start and completion and the tangential tasks between you and the solution. If you ever wanted to mail a letter, but couldn't find a stamp, and had to drive your car to get the stamp, but also needed to refill the tank with gas, which then let you get to the post office where you could buy a stamp to mail your letter—then you've done some yak shaving. - Zed Shaw
Yak shaving is one of my least favorite and most frustrating parts of software development. I have vague memories of enjoying the chase when I was younger, but my tolerance for it seems to decrease the older I get. Nowadays it just pisses me off.

I haven't worked on suneido.js for a while but I got a pull request from another programmer that's been helping me with it. I merged the pull request without any problems. Then I went to try out the changes.

I get a cryptic JavaScript error. Eventually I figure out this is a module problem. I see that my tsconfig.json is set to commonjs, which is what is needed to run the tests in node.js, but to run in the browser I need amd. I change it. I love JavaScript modules.

Now I need to recompile my TypeScript. I happen to be using Visual Studio Code, which follows the latest trend in editors to not have any menu options that are discoverable. Instead you need to type commands. I eventually find the keyboard shortcut Ctrl+Shift+B. (After some confusion because I was reading the web site on OS X and it auto-magically only shows you the keyboard shortcuts for the platform you're on.)

Nothing happens. I try tsc (the TypeScript compiler) from the command line but it's not found. Maybe it's just not on my path? I figure out how to see the output window within Code. It's not finding tsc either. (Although it hadn't shown any sign of error until I opened the output window.)

Do I have tsc installed? I'm not sure. Doesn't Code come with TypeScript support? Yes, but does that "support" include the compiler? I find that exact question on StackExchange but it's old and there seems to be some debate over the answer.

I try reinstalling Code. It adds to my path but still doesn't find tsc. (The ever growing path is another pet peeve. And yes, the older I get the longer my list of pet peeves is.)

What the heck was I using last time I worked on this? I'm pretty sure I never deliberately uninstalled TypeScript. I search my computer but all I find are some old versions of TypeScript from Visual Studio (not Code).

I get the bright idea to check my work computer (I'm on my home machine.) I don't find anything there either. Then I remember it's a brand new computer that I set up from scratch. So much for that idea.

I give up and decide to install TypeScript. There are two methods given, via npm (Node.js) or Visual Studio. I'm pretty sure I used Node.js before so that seems to be the way to go.

I try to run npm but it's not found either. Did I lose Node.js as well as TypeScript? No, I find it on disk. I see there's a nodevars.bat file that sets the path. I try that and now I have npm. I bask in the warm glow of accomplishment for a few seconds until I remind myself that running npm was not what I was trying to accomplish.

I run npm install -g typescript. And now I have tsc. But my path didn't change. Maybe I had tsc all along and just needed node on my path? Hard to tell now.

Then I find my tsconfig.json is wrong - it lists .ts files that don't exist because they are just .js files. Again, how did that work last time around? I fix that and it seems to compile.

And finally, miraculously (but several hours later) my suneido.js playground works!

Wait a minute, the pull request was improvements to the CodeMirror Suneido mode, it had nothing to do with the playground. No problem, I'll just start over ...

Sunday, February 07, 2016

New PC Software

Once I had my new PC running, the next step was to install software. I prefer to set up new computers from scratch rather than use any kind of migration tool because I don't want to bring over a lot of old junk. It takes a little more time but I think it's worth it. And it's always interesting to review what software I'm actually using day to day, and how that changes over time.

My first install is usually Chrome. Once I sign in to my Google account and sync my browser settings I have all my bookmarks and my Gmail ready to go. I use LastPass for my passwords. I also install Firefox but it's not my day to day browser. I used to also install Safari for Windows but Apple is no longer keeping it updated.

Next is Dropbox which brings down a bunch of my commonly used files. Then Evernote which gives me access to my notes.

After that it's mostly development tools - Visual Studio Community (free) C++, MinGW C++, Java JDK, and Eclipse. The last few Eclipse installs I've been importing my addons from the previous install. But for some reason I couldn't get that to work this time because I couldn't find the right directory to import from. The directories are different since I've been using the Oomph installer. I'm sure I could figure it out, but I only use three addons so it was easier just to install them from the Eclipse Marketplace. (The three are Infinitest, EclEmma coverage, and Bytecode Outline.)

I use GitHub Desktop although both Visual Studio and Eclipse provide their own Git support. (and there's also the command line, of course.)

Although I'm not actively programming in Go these days I like to have it and LiteIDE installed.

For editors I use SciTE, because it's the same editing component as we use in Suneido, and Visual Studio Code for JavaScript and TypeScript. (Visual Studio Code is not related to Visual Studio. It's a cross platform application written in TypeScript and packaged with Electron.)

This is all on Windows, but other than the C++ tools I have pretty much exactly the same set of software on my Mac.

Thursday, February 04, 2016

New PC


In many ways PC performance has leveled off. There are minor improvements in CPU and memory speed but nothing big. But there has been a significant improvement in performance with the Skylake platform supporting SSD connected via PCI Express.

Based on a little research, here were my goals:
  • fast Skylake CPU
  • 32 GB DDR4 RAM
  • 512 GB M.2 SSD
  • small case (no external cards or drives)
  • 4K IPS monitor
And here's what I ended up getting:
  • Asus Z170I Pro Gaming mini ITX
  • Intel Core i7-6700K 4.00 GHz
  • Corsair Vengeance LPX 32GB (2x16GB) DDR4 DRAM 2666MHz (PC4-21300)
  • Samsung 950 PRO - Series 512GB PCIe NVMe - M.2 Internal SSD
  • Fractal Design 202 case
  • ASUS PB279Q 27" 4K/ UHD 3840x2160 IPS
I don't do any gaming, but that was the only mini ITX motherboard I could find that fit my requirements.

The case was the smallest I could find that would fit this stuff. The frustrating part is that it could be half the size. The empty half of the case is for cards or external drives, but I didn't want either of those. It's a little hard to tell from the picture, but the motherboard is only 7" square, not much bigger than the fan. The power supply is almost as big as the motherboard.

And here's a comparison of my new SSD (top) to the old one (bottom). Higher numbers are better.



The big advantage of SSD over hard disks is the seek time for random IO. But the biggest gains here were for sequential IO. Still, some respectable improvements across the board. Of course, how that translates into actual overall performance is another question.

I try not to buy new machines too often. At least I know my old one, which is still working fine, will be put to good use by someone else in the office.

I'm not a hardware expert, here's where I got some advice:
Building a PC, Part VIII
Our Brave New World of 4K Displays

Thursday, January 21, 2016

Mac Remote Desktop Resolution

At home I have a Retina iMac with a 27" 5120 x 2880 display. At work I have a Windows machine with a 27" 2560 x 1440 display set at 125% DPI. I use Microsoft Remote Desktop (available through the Apple app store) to access my work machine from home.

I'm not sure how it was working before, but after I upgraded my work machine to Windows 10, everything got smaller. It comes up looking like 100% (although that's actually 200% on the Mac).

Annoyingly, when you are connected through RDP you aren't allowed to change the DPI.

I looked at the settings on my RDP connection but the highest resolution I could choose (other than "native") was 1920 x 1080 which was close, but a little too big.

Poking around, I found that in the Preferences of Microsoft Remote Desktop you can add your own resolutions (by clicking on the '+' at the bottom left). I added 2048 x 1152 (2560 x 1440 / 1.25).


Then changed the settings on the connection to use that and it's now back to my usual size.


The screen quality with RDP doing the scaling does not seem as good as when Windows is doing the scaling, but at least the size is the same.

From what I saw searching the web, there might be a way to adjust this on the Terminal Server side on my Windows machine, but I didn't find any simple instructions.

If anyone knows a better way to handle this, let me know.

Sunday, January 17, 2016

Native UX or Conway's Law?

organizations which design systems ... are constrained to produce designs
which are copies of the communication structures of these organizations
— M. Conway
A pet peeve of mine is applications that are available on different platforms, but are substantially different on each platform. I realize that I'm probably unusual in working roughly equally on Windows and Mac. But even if most people work on a single platform, why make the application unnecessarily different? Isn't that just more work for support and documentation?

I recently listened to a podcast where an employee mentioned that her focus was on making sure their Windows and Mac versions weren't "just" identical, but matched the native user experience. That sounds like an admirable goal. The problem is, their versions have differences that have nothing to do with native UX.

Obviously standards vary between platforms. Do you call it Settings or Preferences? And it makes sense to match the native conventions. But in general those are minor things.

But changing the layout of the screen? Or the way things are displayed? Or the text on buttons? There really aren't any "standards" for those things. And therefore "native" doesn't justify making them different.

I suspect that "matching the native UX" is often a justification for Conway's law. Different teams are developing the different versions, and they do things in different ways. Coordinating to make them consistent would take more effort, so it doesn't happen, or not completely.

My company contracted out the development of our mobile app. The company we contract to believes in "native" (i.e. not portable) apps. They also have a process where different teams develop different versions, so they can be platform experts. The end result: our app is unnecessarily different on different platforms. And any change takes twice as much work. Not to mention having to fix bugs in multiple implementations.

It's sad that after all these years we still don't have a good story for developing portable applications.

Except that's not quite true. We have the web and browsers. And tools like Cordova and Electron that leverage that. And contrary to popular belief, users don't seem to have much trouble using web apps that don't have native UX.

Saturday, January 02, 2016

Suneido RackServer Middleware

A few years ago I wrote an HTTP server for Suneido based on Ruby Rack and Python WSGI designs. We're using it for all our new web servers although we're still using the old HttpServer in some places.

One of the advantages of Rack style servers is that they make it easy to compose your web server using independently written middleware. Unfortunately, when I wrote RackServer I didn't actually write any middleware and my programmers, being unfamiliar with Rack, wrote their RackServers without any middleware. As the saying goes, small pieces, loosely joined. The Rack code we had was in small pieces, but it wasn't loosely joined. The pieces called each other directly, making it difficult, if not impossible, to reuse separately.

Another advantage of the Rack design is that both the apps and the middleware are easy to test since they just take an environment and return a result. They don't directly do any socket connections.

To review, a Rack app is a function (or in Suneido, anything callable e.g. a class with a CallClass or instance with a Call) that takes an environment object and returns an object with three members - the response code, extra response headers, and the body of the response. (If the result is just a string, the response code is assumed to be 200.)

So a simple RackServer would be:
app = function () { return "hello world" }
RackServer(app: app)
Middleware is similar to an app, but it also needs to know the app (or middleware) that it is "wrapping". Rack passes that as an argument to the constructor.

The simplest "do nothing" middleware is:
mw = class
    {
    New(.app)
        { }
    Call(env)
        {
        return (.app)(env: env)
        }
    }
(Where .app in the parameter list is a shortcut for accepting an app argument and storing it in an instance variable, equivalent to .app = app.)

Notice we name the env argument (as RackServer does) so simple apps (like the above hello world) are not required to accept it.

And you could use it with the previous app via:
wrapped_app = mw(app)
RackServer(app: wrapped_app)
A middleware component can do things before or after the app that it wraps. It can alter the request environment before passing it to the app, or it can alter the response returned by the app. It also has the option of handling the request itself and not calling the app that it wraps at all.

Uses of middleware include debugging, logging, caching, authentication, setting default content length or content type, handling HEAD using GET, and more. A default Rails app uses about 20 Rack middleware components.
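To show the pattern outside Suneido, here is the logging idea sketched in Python, using the same call convention described above (an app takes an environment and returns a response). The names are illustrative, not part of any real Rack or Suneido API:

```python
import time

def make_logger(app, log=print):
    """Middleware sketch: log each request's path and how long the
    wrapped app took to handle it, then pass the response through."""
    def logger(env):
        start = time.time()
        response = app(env)
        log("%s took %.3fs" % (env.get("path", "?"), time.time() - start))
        return response
    return logger

# Wrap a trivial app, matching the "hello world" example above.
hello = lambda env: "hello world"
app = make_logger(hello)
```

Calling app({"path": "/"}) logs the timing and still returns the wrapped app's response, which is exactly the before/after behavior described above.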

It's common to use a number of middleware components chained together. Wrapping them manually is a little awkward:
wrapped_app = mw1(mw2(mw3(mw4(app))))
so I added a "with" argument to RackServer so you could pass a list of middleware:
RackServer(:app, with: [mw1, mw2, mw3, mw4])
(where :app is shorthand for app: app)

Inside RackServer it simply does:
app = Compose(@with)(app)
using the compose function I wrote recently. In this case what we are composing are the constructor functions. (In Suneido calling a class defaults to constructing an instance.)
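A minimal sketch of that composition in Python (compose here is an illustrative stand-in, not Suneido's Compose): each middleware constructor takes an app and returns a wrapped app, so composing the constructors and applying the result to the innermost app produces the same chain as wrapping manually.

```python
from functools import reduce

def compose(*fns):
    """Compose right-to-left: compose(f, g)(x) == f(g(x))."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

# Each "constructor" takes an app and returns a wrapped app,
# so composing them chains the middleware.
def mw(name):
    return lambda app: lambda env: "%s(%s)" % (name, app(env))

app = lambda env: "app"
wrapped = compose(mw("mw1"), mw("mw2"), mw("mw3"))(app)
# wrapped({}) -> "mw1(mw2(mw3(app)))", same as wrapping manually
```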

A router is another kind of middleware. Most middleware is "chained" one to the next whereas routing looks at the request and calls one of a number of apps. You could do this by chaining but it's cleaner and more efficient to do it in one step.
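A router in this style might look like the following Python sketch (illustrative only, not RackRouter itself): match the request path against a list of regular expressions and dispatch to the first app whose pattern matches.

```python
import re

def make_router(routes, not_found=lambda env: (404, {}, "not found")):
    """Router sketch: routes is a list of (pattern, app) pairs; the
    first pattern that matches the request path handles the request."""
    compiled = [(re.compile(pattern), app) for pattern, app in routes]
    def router(env):
        for pattern, app in compiled:
            if pattern.match(env.get("path", "")):
                return app(env)
        return not_found(env)
    return router

router = make_router([
    ("^/hello", lambda env: "hello world"),
    ("^/date", lambda env: "the date"),
])
# router({"path": "/hello"}) -> "hello world"
# router({"path": "/missing"}) -> (404, {}, "not found")
```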

So far I've only written a few simple middleware components:
  • RackLog - logs the request along with the duration of the processing
  • RackDebug - prints any exceptions along with a stack trace
  • RackContentType - supplies a default content type based on path extensions using MimeTypes
  • RackRouter - a simple router that matches the path against regular expressions to determine which app to call
But with those examples it should be easy for people to see how to compose their web server from middleware.