Alexa Skill Hackathon Takeaways

This past weekend we had a hackathon at work, focused on developing Skills for the (relatively) new Amazon Echo. The purpose of the hackathon was to expose us to a technology we hadn't used before and explore what use cases might exist that we could leverage either internally or for customers. We had a great time and built some really fun Skills as we learned the Alexa Skill development tools. We found there were pros and cons, as with anything, and I personally took away a few key lessons from my short time working with the Echo.

I want to take a moment here to point out that I wasn't paid or encouraged by anyone, anyone at all, to share my experience on this blog. Amazon, to my knowledge, had no part in this hackathon; it was an internal event we did just for fun and exploration. I just want to solidify what I learned by revisiting it in a writeup. So let's talk about the Echo!

When you buy the Amazon Echo, you are buying a very nice piece of hardware that connects you to a service named Alexa, which is where the magic happens. You make your voice request, and then Alexa uses Skills (voice applications) to do something useful or fun in response. First of all, I want to say that Alexa is a lot of fun to work with from a creative product standpoint. Working with Alexa's voice API provides a vast landscape of opportunities to be creative in ways that feel fresh and new. Alexa is also a lot of fun to use from the consumer standpoint, for me at least. I've heard many people say they don't know what they would do with an Echo if they had one, but to someone with that complaint, I'd say the cliché that "there's an app for that" holds true here.

Alexa has tons of capabilities out of the box, covering everything from checking the weather, to listening to podcasts, news, and music, to telling jokes, to making purchases on Amazon through your account. The system can integrate with home automation tools, and work in concert with apps on your phone to manage shopping lists, to-do lists, and other handy utility features. Additional Skills can be obtained from the community through the Skill Store to cover all kinds of additional fun and useful scenarios. Alexa has lots to offer.

In case this is reading too much like an advertisement, don't worry. There's plenty for Amazon to work out with the Echo before I would ever buy one at full price. That being the first of my concerns: I find it to be unjustifiably expensive. You're buying a speaker, with a hardcore microphone array, connected to the web via Wi-Fi; that's really all the hardware you get, and it comes in just shy of $180. It is a very nice enclosure, and the speaker itself is exemplary; but without internet, this thing is a brick. The Echo does no work of its own locally. All services are performed by Alexa on Amazon's side and piped back to you over the web. I can go to OK Google, or Siri, or even Cortana for a lot of what Alexa can provide, so $180 feels brutal for the sole perk of having your voice assistant always on. A big downside of the Echo, compared with the voice services provided by Google, Apple, and even Microsoft, is that the Echo is immobile, whereas your phone is always with you. Adding to the cost, some of the handy features (music providers being a prominent one) require subscription fees; and the buy-by-voice feature, while convenient, can feel a little dangerous since Alexa doesn't recognize which voice is yours. (Buying each other socks without permission became a running gag throughout the hackathon.)

Along those lines, Alexa doesn't always understand voice input clearly, so you sometimes find yourself repeating commands to get her to hear you right. I find Google's voice recognition to be a tad more reliable. Alexa also does poorly if you try to give her a command in a room with other people speaking. She can't differentiate among voices, and this is very frustrating at times. Granted, this is an emerging technology, and natural language and voice recognition are very challenging problems for computers to handle, but from a practical, daily usage standpoint, this is often a frustrating shortcoming.

By far, though, my biggest gripe about working with Alexa is as a developer. Every technology has its idiosyncrasies, but I found learning the Alexa Skill developer tools to be a particularly harrowing experience. First of all, you need to use AWS Lambda to host your Skills. AWS Lambda is a cool idea, but developing Alexa Skills on it is a new kind of challenge, and not in any particularly fulfilling way. The tools for building Skills are a little clunky, and the documentation is practically non-existent. There are some nice features, like the ability to test your Skills without going through the time-consuming process of conversing with Alexa repeatedly. However, in order to deploy an Alexa Skill, you need to access two separate dashboards: the Skill's logic is deployed on AWS Lambda through the AWS console, but you have to separately log into the Amazon Developer Console to access the Alexa configuration tools that define things like user interactions and recognized user phrases. Where is this documented? Great question.

The feedback from the system when something goes wrong is minimal. You'll get messages like "Syntax error." That's it. What kind of syntax error? Where? Your only options are to lint the code yourself and hope your mistakes can be caught that way, or to be so good at programming that you never make typos or logic errors. There's not much available in terms of debugging help. By its nature, AWS Lambda also faces the limitation of weak session and state management. This is one reason why some of us at the hackathon resorted to using Alexa as a forwarding service, simply translating voice into API calls to a remote server (EC2 in this case) that hosted more complex service implementations outside of AWS Lambda.
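To give a feel for that forwarding pattern, here's a minimal sketch in Node.js (the language Alexa Skill Lambda functions were typically written in). Everything here is hypothetical for illustration: the host name, the path scheme, and the function names are made up, and a real handler would also need to build a proper Alexa response object.

```javascript
// Minimal sketch of the "forwarding" pattern: the Skill does no real work
// itself, it just maps the recognized intent onto an HTTP request aimed at a
// server we control, where the real service logic lives.
// All names and the host below are hypothetical.
function buildForwardRequest(intentName, slots) {
  return {
    host: 'my-service.example.com',   // stand-in for the EC2 host
    path: '/alexa/' + encodeURIComponent(intentName),
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(slots || {}) // forward the recognized slot values
  };
}

// A Lambda handler would extract the intent from the Alexa event and hand
// these options to an HTTP client (e.g. Node's https module).
var req = buildForwardRequest('GetCoffeeStatus', { machine: 'kitchen' });
console.log(req.path); // '/alexa/GetCoffeeStatus'
```

The appeal of this approach is that the session, state, and debugging story all move to your own server, leaving Lambda as a thin voice-to-API shim.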

As mentioned earlier, the documentation for this set of developer tools is sorely lacking. The official Amazon guide points to a blog post from four months ago that is not only incomplete, but already out of date, with screenshots and instructions that are flat-out incorrect. It was a headache trial-and-erroring our way through the development of our first Alexa Skill and hunting down third-party tutorials for Skill development. We were able to pull it off, but it was a lot harder than it needed to be. It could have been an hour-long process if the documentation had been halfway complete or at least up to date. I would expect sketchy documentation from open source projects made by volunteers, not from a top tech company, on a service they profit from, that other developers are meant to use as part of a business model. Maybe I expect too much, but fair or not, that is my expectation of a for-profit system provided by a sixty-five-billion-dollar tech company.

It really doesn't feel like Amazon is bringing their A-game to the Alexa developer community. If these issues cropped up during an open beta or in a staging environment, that would be one thing, but this is a public-facing, monetized platform. One or two of the features are sub-labeled as beta as of this writing, but the features work fine; the process and interface design are just abysmal, and what documentation exists is coming from home-use amateurs because Amazon isn't providing it. I personally find that to be a weak effort on Amazon's part.

Overall, I do like the Echo and Alexa, and I would like to have one if the price were more reasonable. I find it fun and useful. I would also be excited at the opportunity to do more development on Alexa Skills now that I know the process, but it was needlessly painful to learn the ropes compared with other platforms.

Thanks for reading!
- Steven Kitzes


Precompiled Handlebars - A Ridiculously Clear Tutorial

This is a step-by-step tutorial on getting introductory, hello-world level, precompiled Handlebars up and running.

Just so you know, if you're already fluent in NPM or otherwise want to skip ahead, you can jump straight to the Brass Tacks section below.

It's my first time writing a tutorial like this. I plan to take you through it very slowly, but very clearly, so that beginners with some programming experience, but little to no web or templating experience, can follow along. If this style of tutorial proves helpful to people, I'll follow up with a second tutorial on more in-depth Handlebars features. I encourage you to send any feedback that will improve this tutorial. Now let's dive in!

For those who read my recent post about a "real" side project, I know I promised a more detailed explanation so I'll tack it on at the end here. ;)

First, some context. What you're reading was born partly of frustration: I had a needlessly difficult time figuring this out for myself, and I want it to be easier for others in the future. Therefore, today's post has become three things: a precompiled Handlebars primer, a learning-experience post-mortem, and an experiment in writing effective tutorials for beginners, by a recent beginner. Handlebars isn't hard to learn, and the concepts are approachable; but existing tutorials, StackOverflow Q&As, and even the Handlebars team's own documentation assume you already know web dev and templating, so they give condensed steps meant for experts to hit the ground running.

Not much help for a beginner like myself. Therefore...

Handlebars for Beginners, by a Beginner

I'm going to take you through the steps I had to take to get this project moving, from the perspective of someone with beginner-level knowledge of web development, and zero knowledge of templating. I won't leave any of the steps out or assume you know any of them, and I'll try to cover the minutiae and details that many tutorials gloss over, either because they figure you already know them, or because it doesn't even occur to them that you might not know them. I also don't like tutorials that speak only to Windows users when I'm stuck on Unix for work, or vice versa, so I'll try to cover these cross-pollinations as well.

The only thing I won't explain in detail is installing NPM, the Node Package Manager. You'll need to have that running in order to precompile. It is relatively easy to get NPM going; it ships with Node.js, and installers and binaries are available for Windows, Mac, and even Linux from the official Node.js downloads page.

Just so you know, the installer is intended to automatically add NPM to your PATH variable, but in some cases an error may prevent that. I'm tempted to describe the fix for this, but due to the number of operating systems out there, it's beyond the scope of this tutorial. Just be aware of this if you install NPM and it doesn't work, and it's not immediately obvious why.

Brass Tacks

First, you must install Handlebars for Node.js via NPM. From the command line (any directory is fine), issue the following command. Note for Windows users: think of the '$' symbol as your 'C:\>' or other command line prompt.

$ npm install -g handlebars

Now you should be able to run Handlebars on the command line from any directory. To test, try typing the following, and you should be rewarded with your installed Handlebars version, in my case, 4.0.5:

$ handlebars -v


That is all the infrastructure you need. Now I'll take you through the process of building a few files that you'll need. This is one place in particular where I find other tutorials fail to be clear, so I will do my best. First of all, for clarity I'll assume everything is taking place in the same directory, so make yourself a directory and keep all files there. I encourage you to improve your infrastructure and employ best practices, but later. First let's make sure we understand this technology.

We will end up with a total of 4 files:

  1. We will make a Handlebars template file, with the .handlebars file extension (not strictly required, but consider it so for the purposes of this tutorial)
  2. We will make an HTML file to inject our template into
  3. We will download or link to the Handlebars library provided by the Handlebars team
  4. The Handlebars precompiler will output a JavaScript file which performs our template injection (details to follow, don't get intimidated yet if this sounds foreign or complex)

Some folks will likely argue the best practice is to link in your code to an outside source for your Handlebars library (file 3). That may be so, but to make your life easier, I recommend downloading it and keeping a copy locally for the purposes of this tutorial. You can get it from the official Handlebars web site. The version I got is called handlebars-v4.0.5.js.

File 4 is output by the precompiler, so we won't worry about that yet. Let's start by making a very, very simple template (file 1) to inject into our HTML file (file 2). What you need to know for now is that a Handlebars template is basically made up of a mixture of raw HTML and special Handlebars tags (discussed later). What that means is that we can get started with an extremely simple example template containing just HTML, to make sure we have a grasp on the concepts and that we're doing it right. So here's file 1, in all its young glory:

hello.handlebars File 1
  1. <p>Hello, world!</p>

Yup, that's it for file 1. Precompiling is very simple and straightforward, assuming Handlebars is installed properly. What you need to know for now is that Handlebars takes .handlebars files and turns them into .js files. So let's use Handlebars to turn hello.handlebars (file 1) into hello.js (file 4):

$ handlebars hello.handlebars -f hello.js

... or more generically ...

$ handlebars <input-file>.handlebars -f <output-file>.js

Note that the Handlebars precompiler doesn't give you any real feedback on the command line. Just check that you got a .js file out of the deal, and that it contains some JavaScript in there, including your HTML. At this point, if all has gone well, we should have the following files, all in the same directory:

  • File 1: hello.handlebars Handlebars template file
  • File 2: not done yet
  • File 3: handlebars-v4.0.5.js or version of your choice
  • File 4: hello.js output from file 1

So all we need to do now is put together a very simple HTML file into which our template will be injected. Note that in this case, I'll be including some JavaScript in my HTML file. Normally, it may be wise to separate your JS from your HTML files, but for simplicity I'll keep it all together for this tutorial. Let's start with a very basic HTML file and build our way up from there, and I'll describe the changes at each step.

hello.html File 2
  1. <html>
  2.   <head>
  3.     <meta charset='UTF-8'>
  4.     <title>Handlebars Tutorial</title>
  5.   </head>
  6.   <body>
  7.     <div id='content'></div>
  8.   </body>
  9. </html>

If you're new to HTML, most of this should still be approachable. Just note the <meta> tag, which tells the browser what character encoding the HTML file uses; and the initially empty <div> tag, which warrants more explanation (but for now, don't overthink it; it's a generic container for part of a page's content).

Nothing really special going on here; if you load this in a browser now, it'll just be a blank page. Just take note that <div id='content'></div> is the place in this HTML that I have chosen to inject my Handlebars template, which is why I gave it an ID.

Now I'm going to add in some vanilla JavaScript, whose job is singular: tell the browser not to try doing anything with any scripts until all of them are loaded. (If you want to do this using jQuery or other more advanced methods, I encourage it, but the point of this exercise is to be simple and to-the-point, so for now, we'll stay vanilla.)

hello.html File 2
  1. <html>
  2.   <head>
  3.     <meta charset='UTF-8'>
  4.     <title>Handlebars Tutorial</title>
  5.   </head>
  6.   <body>
  7.     <div id='content'></div>
  8.     <script type='text/javascript'>
  9.       function init() {
  10.         // This only runs after the whole page is loaded
  11.       }
  12.       window.addEventListener('load', init, false);
  13.     </script>
  14.   </body>
  15. </html>

Simple enough to follow, I think. Now, let's add in the scripts themselves, which the browser will load before running our init() function.

hello.html File 2
  1. <html>
  2.   <head>
  3.     <meta charset='UTF-8'>
  4.     <title>Handlebars Tutorial</title>
  5.   </head>
  6.   <body>
  7.     <div id='content'></div>
  8.     <script type='text/javascript' src='handlebars-v4.0.5.js'>
  9.     </script>
  10.     <script type='text/javascript' src='hello.js'>
  11.     </script>
  12.     <script type='text/javascript'>
  13.       function init() {
  14.       }
  15.       window.addEventListener('load', init, false);
  16.     </script>
  17.   </body>
  18. </html>

Magic Happens

We're going to inject our template with just 3 lines of code. You can do it in fewer, but for clarity I'm spreading this over more lines. I'll explain this process line by line following the new code below:

hello.html File 2
  1. <html>
  2.   <head>
  3.     <meta charset='UTF-8'>
  4.     <title>Handlebars Tutorial</title>
  5.   </head>
  6.   <body>
  7.     <div id='content'></div>
  8.     <script type='text/javascript' src='handlebars-v4.0.5.js'>
  9.     </script>
  10.     <script type='text/javascript' src='hello.js'>
  11.     </script>
  12.     <script type='text/javascript'>
  13.       function init() {
  14.         var target = document.getElementById('content');
  15.         var inject = Handlebars.templates['hello'];
  16.         target.innerHTML = inject();
  17.       }
  18.       window.addEventListener('load', init, false);
  19.     </script>
  20.   </body>
  21. </html>

Line 14: This is the easy part. We are just creating a JS variable to represent the <div> we're going to inject our template into.

Line 15: Here we are grabbing our template and putting it in a variable called inject. There are a few things here worth knowing. First of all, the reason we have access to the template at all is that we included the hello.js file we precompiled with Handlebars on the command line (file 4). We have access to the Handlebars variable because it is provided by handlebars-v4.0.5.js (file 3), which we also included. As a convenience, Handlebars allows us to refer to our precompiled script by file name, without needing the extension (e.g. 'hello' instead of 'hello.js'). The last thing that is critical to understand is that inject is now a function, not a string or other variable type. You must therefore use it as a function, and that function returns the rendered template as a string.
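If it helps to see the shape of this, here's a tiny stand-in, runnable on its own, that mimics the pattern. To be clear, this is NOT the real Handlebars runtime; in the real setup, handlebars-v4.0.5.js provides the Handlebars object and the precompiled hello.js registers the actual template function.

```javascript
// Tiny stand-in for illustration only; NOT the real Handlebars runtime.
// It mimics what files 3 and 4 give you: a shared Handlebars object whose
// templates property maps template names to rendering functions.
var Handlebars = { templates: {} };          // role of handlebars-v4.0.5.js

Handlebars.templates['hello'] = function () { // role of precompiled hello.js
  return '<p>Hello, world!</p>';
};

// The tutorial's lookup: inject is a FUNCTION, and calling it yields a string.
var inject = Handlebars.templates['hello'];
console.log(typeof inject); // 'function' -- not a string
console.log(inject());      // '<p>Hello, world!</p>'
```

Once you see that a precompiled template is just a registered function returning HTML, the three lines in init() stop looking magical.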

Line 16: Now that we have our target and our injection function, we can inject our template into the target. We call our template function and assign the string it returns to the innerHTML property of the target. Done! If all has gone well, you should be able to load your HTML and see your simple template rendered into an otherwise blank page: "Hello, world!"


This write-up has given me a great opportunity to review what I learned in teaching myself Handlebars, and I hope this tutorial, written from the perspective of a web templating beginner, for other beginners, was helpful.

Thanks for reading!
- Steven Kitzes


I promised to describe my side project, so I'll take just a moment to do that. (It'll be a quick moment, because that's all I have to spare at this particular moment, but I want to fulfill my promise of sharing my idea.)

In short, it's a user-generated Choose Your Own Adventure game. Many of us enjoyed these books as kids. The basic idea for those of you who are unfamiliar, is that you read a chapter, the book lets you make a decision for what the characters in the story should do, and you flip to different pages based on your choice and get to see different outcomes. I basically want to create a web-based version of this, where users not only get to see different outcomes based on their choices, but also get to contribute their own branches and paths to the story.

Of course, with users able to generate content, you could potentially have dozens of possible paths out from any given chapter (or 'node' or 'snippet', as I call them). In order to combat the potentially overwhelming number of options available, I plan to implement a voting system, similar to the ones used by Reddit or StackOverflow (but without the complexity of decay, though I may add that in later if needed... not likely). Basically, all paths will be available to any user by request, if they want to read and vote on them; but by default, the user will only be shown the four most popular paths.
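As a rough sketch of that default selection rule, here's how it might look in plain JavaScript. The field names ('votes', 'title') and sample data are made up for illustration; the real project may store and rank snippets differently.

```javascript
// Rough sketch of the default path-selection rule: sort a chapter's outgoing
// paths by vote count and show only the most popular few.
// Field names are hypothetical.
function topPaths(paths, limit) {
  return paths
    .slice() // copy so the original list is left untouched
    .sort(function (a, b) { return b.votes - a.votes; })
    .slice(0, limit);
}

var branches = [
  { title: 'Enter the cave',  votes: 12 },
  { title: 'Climb the cliff', votes: 31 },
  { title: 'Turn back',       votes: 2 },
  { title: 'Light a torch',   votes: 18 },
  { title: 'Shout hello',     votes: 7 }
];

// By default the reader sees only the top four most popular paths.
console.log(topPaths(branches, 4).map(function (p) { return p.title; }));
// [ 'Climb the cliff', 'Light a torch', 'Enter the cave', 'Shout hello' ]
```

The "all paths by request" behavior would then just be the unsorted (or fully sorted) list without the limit.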

That's all I can give you for now, not because I don't want to share more, but due to time constraints... gotta jet! I've cobbled together a first-pass functional requirements doc for this project, so if you want to learn more, it's public on my project's GitHub page, gory details and all!


My First "Real" Side Project

Greetings, old friend, and well met, new acquaintance! You have stumbled across my dev blog at a fortuitous moment in my career. You can think of this as a reboot in many ways.

And I promise, if you read (or skip to the bottom of) this characteristically long-winded post, you'll find that it does, as a matter of fact, have a purpose.

At the most shallow level, I'm rebooting my blog. The old posts, I'm leaving them up for posterity. Some seem to have helped people, and I'm glad for that, despite a level of quality I'm not, in retrospect, happy with. Oh, well, as they say; oh, well.

At a deeper dive, I'm finding that I have rebooted myself. What manner of melodrama is this? Part of my self-improvement effort is paying closer attention to the need for conciseness, so I'll describe it thusly: I recently finished my Master's in Comp Sci. I'm psyched about it, in spite of myself; it's been a very trying road. I also recently landed my first real full-time job as a software developer, with a fantastic company that I plan to talk about later, pending a discussion with leadership on the propriety of divulging various levels of trade secrets. In short, during the respite I gave myself between graduation and the first day of work, I realized that a day without career development in some form or another, be it software development, reading, practicing, ideating, etc., was a day that felt in some sense hollow.

Note to self: 'in short'? Who am I kidding? Concision will never feature in this blog. I'm just going to accept that that is 'okay,' and that this is a place not only for sharing knowledge, but also for free expression. So I'll try and just enjoy myself a little writing these up, and hopefully the flavor of my style will stick on the back of your throat in a not wholly unpleasant way.

I digress. The point is, in my spare time, I've come to feel that learning my way toward a better career as a software developer is not only fulfilling and rewarding professionally (and hopefully financially), but also personally. I find that I enjoy writing software. I prefer fun software, such as the near-complete, but technically in-progress, web tool that I started on, and which I shall shamelessly plug posthaste: How Many Drinks In That Drink? I put this little static web app together to help beer snobs track how much alcohol they're really drinking.

This was an incredibly rewarding project for me. Not only because the end result was useful to me personally as a craft beer fan. Not only because others will hopefully find it helpful, possibly in deeply meaningful ways. Not only because it took shape better than expected. But more importantly because I learned a ton about a new software domain and new technologies in building it. This last point was, I have found in spite of myself, probably the most rewarding part of building the tool and the part that has stuck with me the most.

To place this narrative in temporal context, How Many Drinks was made over winter break, basically January of 2016, so yeah, fair to say it's been a while since I worked on a side project; but the experience and the feeling it gave me stuck, and it's been gnawing at me ever since, however gently. Grad school resumed that spring term, delaying my ability to spend time on personal projects, then my month off between school and work was packed, I say packed with travel, and then I fell into this new job, and I fell hard.

Alright, now what manner of melodrama is this? Well, I won't name names because I haven't run the idea of connecting my employer to myself in this blog past my boss just yet; but I will say it's a software consultancy firm with a highly unusual, enthusiastic, and very genuine focus on growing talent internally, rather than burning it out and tossing it on the midden to make space for the next unfortunate soul. I won't get into too many details, but the basic idea is that they take care o' ya in a big way, protecting you from overwork, and making opportunities for you to grow both personally and professionally without taking your life away. Sounds too good to be true, right? Well, I'm not exaggerating in the least when I tell you with honest, genuinely unsarcastic, dead-straight, and seriously intense eye contact that I made it to the final level of on-site interviews at Google, and the interview at my current employer was equally challenging, yet more grueling. Rewards are earned, sometimes, I guess.

That felt a little tangential, but it has a point to it in the emphasis on career development without employee burnout. The basic policy is to keep client work to a sensible level, close to or even occasionally below 40 hours a week, to make space over the span of a reasonable work week schedule to spend on career development. Meaning, you get to work on building technical, business, networking, and other skills on the clock; and it's not only allowed, it's encouraged. I'm trying to think of a more melodramatically eloquent way to put this last, most salient point, but what it basically comes down to is that I'm absolutely psyched to fall into such an opportunity just at the same time I'm learning to really, truly love learning and career development in what I unfortunately have to refer to as a synergistic way. (I know, we all hate the word synergy because it has been so badly misused by middle management for so many years, but this particular synergy is both real, and startlingly pleasant.)

You will soon realize that all of the above has been an elaborate way of leading you to an announcement that I am starting a new project; not only a side project, but a project of passion. I will be blogging about it regularly, discussing what I've learned, what I've achieved, and any pitfalls I've encountered; I set this explicitly for myself as a task. I'll be posting weekly, as the plan has it, both to motivate myself to get work done on it more quickly and also to practice blogging more quickly, frequently, and effectively (and yes, even concisely ... starting next post, as always).

The project itself, which I'll describe in next week's post, is already under way, and it's quite exciting and fun to be making progress on it already (in a sense you may not agree with calling 'progress,' but we'll get to that). In any case, hope to have you along for the ride, and even if you don't care to read up on my trials and tribulations, I hope you'll enjoy the result when it's done!

Thanks for reading!
- Steven Kitzes


Troubleshooting Serial Communications

After two months of troubleshooting, countless gallons of coffee, endless hours of stress, several years off my life, and many square inches of cranial real estate lost to newfound baldness, I finally solved a longstanding, serious blocking bug with a single line of code. I don't know how useful the particulars might be to anyone who stumbles across this posting, but the thought process and methods used to narrow in on a solution might help other fledgling developers by giving them new angles from which to view problems. In any case, I want to document the process for myself, because it has been a very painful ride, and one I'd very much like to think I've learned from.

By way of context, I'm working on enabling two-way communication over serial COM ports using the Windows API. One application, written in C++, needs to communicate with another, written in C#. Both applications existed before this new use case was devised, and both applications were previously proven to successfully communicate over COM ports. However, when the two applications were set up to communicate with each other, the communications failed 100% of the time. I had to figure out why.

It's worth mentioning here that prior to finding myself attached to this project, I'd never been exposed to many of the concepts and technologies involved. I have some limited experience with C++, for instance, but not in a large-scale production environment. I've never worked with serial communications before, and never used the Windows API for C++, which is not as friendly to newcomers as other libraries can sometimes be. Some complexities were also encountered in the realm of threading, which I have some experience with, but not in the Windows environment. A big part of the reason this solution took two months is that I had to learn all of these concepts in parallel in order to start focusing in on the location and nature of the problem.

The first thing I did was to have someone walk me through the existing functionality and where the breakdown was occurring. It was explained and demonstrated to me that the client application, written in C++, was able to send a message to the server application, written in C#. The server reported successful receipt and parsing of the message, and reported a consistent attempt to respond. After that, there would be silence from both applications. The client reported that it was listening for a response from the server, but no response would arrive - or at least, the client did not report receiving it.

So began my adventure. My first move was to look at the client-side application and implement some kind of logging functionality. The existing code, having been whipped up in an apparent panic on a tight deadline, provided no logging; in fact, no output of any kind was visible to the user, and no comments or documentation were given in the code to help those who would later have to maintain or extend the application.

It took me several days just to implement logging, and there are a couple of reasons for this. First, as mentioned above, my experience with C++ was very limited, and learning the ropes on that front is not trivial, especially where file access and streams are concerned. But the bigger stumbling block in this particular case was the unorthodox use of the standard output buffer.

The original author of the client side software had been intercepting all standard output and copying it to a secondary buffer, where it was stored and later passed over a named pipe for use as input to an integrated service. Since there was no documentation of this method of message passing, it took many long, confusing hours and lots of help from more experienced developers for me to come to the understanding that every time I tried to use console output for debugging purposes, it was breaking integrated services because they were not expecting my debug messages to come in over the pipe! So console logging was out, and I had to do any and all logging strictly to disk (or to debug output in Visual Studio, where execution in debug mode was permissible), where the log file could later be read and investigated. Whew...

In any case, I did manage to get a logging feature set up and then it was off to the debugging phase. I set up logging output to report on program status throughout the execution of the client side code, as well as setting breakpoints to investigate the state of suspicious variables. This yielded some curious information.

The first thing I tried was to check the ReadFile(...) call on the client for errors. It turned out that ReadFile(...) was returning error code 0, with a timeout. In other words, no errors were being thrown: the ReadFile(...) call was succeeding, and all arguments passed to it were valid, but nothing was being received before the given timeout elapsed. I tried setting the timeout to infinite, and this resulted in the ReadFile(...) call blocking, as you might expect, indefinitely.

Since no error was being thrown, I assumed that the port was open, and that the client had successfully connected to it. This suspicion was reinforced by the fact that expected data had been verified arriving at the server, as explained above. However, just as a sanity check, I set breakpoints to observe the definition and state of the file handle used by the Windows API to represent and access the COM port itself. I verified, in this way, that the serial port handle carried the same definition and state when listening for a server response, as it did when it was sending a message to the server. As far as the client application was concerned, it was a simple matter of no data coming in over the seemingly valid port.

More sanity checks came over the following weeks. I started building simplified sandbox applications with the dual purpose of learning - so I would feel more comfortable with the Windows API and COM port communications in general - and verifying that the code worked as expected, on both the C++ and C# sides. I built a simple C++ program whose only mission in life was to contact the server with a hardcoded, known-good message, and listen for (and report the receipt of) a known-good response. It worked! This was my first sigh of relief, but it didn't yield any immediate solution.

Keeping my momentum up, I built a simulated server with the same strategy in mind, just to ensure that there wasn't some idiosyncrasy in the existing server code that made it behave oddly. As expected, the simplified, standalone sandbox server I whipped up worked. My sandbox C++ client was able to contact my sandbox C# server over the expected COM port; the C# server responded over the same port; and the C++ client received and reported the server's response. Everything worked! Unfortunately, also as expected, the sandbox server behaved exactly the same as the real server when used in conjunction with the real C++ client.

I felt I was back at square one. I had tried lots of things, and verified little more than the simple fact that everything should have been working! All the C++ code I wrote worked, and all the C# code worked. The ReadFile(...) and WriteFile(...) calls, and even the CreateFile(...) calls, and all parameters passed to these functions, were identical - as far as I could tell. I even went so far as to duplicate the code I was using in the sandbox to build the COM port in the production application. Still, this was to no avail.

Then (this morning) something interesting happened. I had been going back and forth between testing my sandbox apps and their production counterparts, and I realized that after running my sandbox app successfully any number of times, any failed execution of the production client permanently broke the COM port, even for future attempts at running the supposedly known-good sandbox app! This didn't make much sense to me, but I also stumbled across the fact that running a C# version of the sandbox client seemed to repair the COM port. Something was so dramatically different between the COM port setup in the C++ and C# applications that not only did the C# application run more reliably, it actually repaired the damage done by the production client - damage that the sandbox C++ client wasn't able to address on its own.

I did a side-by-side comparison of the code in the C++ and C# applications to see how they set up the COM port (yes, even though I had written both). I saw that the examples I had followed in building the C++ application had only set a few of the DCB parameters during initialization of the COM port. (DCB is a struct in the Windows API that contains all the data needed to manage a COM port.) It only set up the parameters needed to establish a COM port connection under known good conditions. Since this had been working for me under most test conditions, and it hadn't even occurred to me that there were parameters that weren't being set one way or another, I didn't think to look there. And it turned out that yes, there were DCB parameters that I hadn't even known were going uninitialized, because most of the time I didn't need them.

The C# application, on the other hand, had been developed based on a more thorough set of tutorials and examples, and included full initialization of all DCB parameters; most importantly, the DTR (Data Terminal Ready) and the RTS (Request to Send) flags. Setting both of these explicitly to enabled or handshake mode, rather than the default disabled mode, suddenly fixed everything.
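My exact code isn't reproduced here, but the shape of the fix looks something like the following sketch (Windows-only; the baud rate and framing values are illustrative placeholders, not the project's actual settings):

```cpp
#include <windows.h>

// Sketch: fully configure the DCB before calling SetCommState, rather
// than setting only the obvious fields and leaving the rest at their
// zeroed (i.e. disabled) defaults.
bool configurePort(HANDLE hPort) {
    DCB dcb = {0};
    dcb.DCBlength = sizeof(DCB);
    if (!GetCommState(hPort, &dcb)) return false;

    dcb.BaudRate = CBR_9600;     // illustrative values only
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;

    // The culprits: left unset, these default to disabled, and the
    // other end may never hand data back to ReadFile(...).
    dcb.fDtrControl = DTR_CONTROL_ENABLE;   // or DTR_CONTROL_HANDSHAKE
    dcb.fRtsControl = RTS_CONTROL_ENABLE;   // or RTS_CONTROL_HANDSHAKE

    return SetCommState(hPort, &dcb) != 0;
}
```

Two lines of flag assignments, after two months of hunting.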

Now I felt a broad spectrum of mixed emotions. On the one hand, I was elated that a two-month troubleshooting session had finally, finally, finally come to an end. The problem was solved, and all would rejoice. On the other hand, it took me two months to solve the problem, and there was no rejoicing in that, especially given the simplicity of the fix, which wasn't even a one-liner: I only had to add a couple of arguments to an existing function call. It was that simple.

No one else on the project seems terribly upset with me, though. After all, each time I asked for help, more experienced developers did not see the solution either, or have a particular idea of where to focus the search. All of us went on many wild goose chases trying to pin this problem down, and it's a relief to have it behind us. Also on the bright side, though the wild goose chases did not, for the most part, yield anything beyond validation of pre-existing suppositions, they did teach me a lot about new ways to troubleshoot, debug, and test assumptions to ensure that you do know what you think you know.

Thanks for reading!
- Steven Kitzes


Virtualizing Ubuntu with VirtualBox in Windows 8.1

After my homework was done tonight, I decided to have a little fun and experiment with virtual machines and their relationships with their hosts. More on my ultimate goal with this experiment later, but it may suffice to say that the necessary means to the end included running a virtual machine with a GUI and remote desktop access. I chose Ubuntu for this task because it's a free Linux-based OS that includes a simple GUI that I'm familiar with.

I had a harder time than expected getting this all set up, for a few reasons. For one, it wasn't easy finding a comprehensive guide on how to do much of this. I did eventually find a very handy guide to setting up the basic Ubuntu VM in Oracle's VirtualBox here. The guide is straightforward for beginners, but there were still sticking points for me that I want to go over.

The first few basic steps to pulling this off (as listed in the guide linked above) are to get VirtualBox, get a copy of Ubuntu in .iso format (32-bit versions are more reliable and easier to work with for this), and install Ubuntu into a new VM in VirtualBox using the .iso. This is, however, already a departure from my initial expectation.

In the past, I've been running VMs from TurnKeyLinux. The fully provisioned TKL system images I've been using came in the form of appliances, which are extremely handy. VirtualBox allows you to import an appliance in a single operation; you don't even really need to configure anything. It will just run the VM with all the software you wanted provisioned. I most recently downloaded a LAMP stack running on a Debian distro that comes out of the box so conveniently pre-configured that you can access phpMyAdmin on the VM from your host in about five minutes from the time you initiate the appliance download. It's really that easy. But I digress.

In my folly, I assumed an appliance would exist for a simple, clean installation of Ubuntu. What I found instead were dead links, shady sources that I didn't fully trust, and an appliance I tried to install that did run, but would hang on any login attempt. It took me a while before I realized, thanks to the tutorial linked above, that there was another way; that being the .iso installation.

So I did that. Lo and behold, using the handy dandy tutorial, I had Linux running in a VM inside ten minutes. But it looked horrible. Absolutely horrible. The resolution was capped at 640x480, and all the window contents were so badly cut off that the GUI was literally unusable.

This was the best I could get after my initial installation.

To make a long story just-a-little-less-long, an Ubuntu VM needs to have its device drivers and system applications installed after the OS is installed into the VM. This is done via the installation, in the case of VirtualBox, of the VirtualBox Guest Additions. But even armed with this knowledge, it's not easy figuring out how to get the blasted things into your VM.

I found some threads suggesting all kinds of terminal commands, sudo apt-get install blah-de-blah-de-blah. It might have been due to my version of Ubuntu, or other reasons (I never really figured it out), but none of this worked for me. I got errors about missing dependencies; I tried to manually install these, only to find absent dependencies chaining together; several steps along that chain I came to a place where a vague message about existing packages preventing dependency installation prompted me to flip tables and head in search of another solution.

Turns out that the Guest Additions themselves are contained on yet another .iso. I went through another minor crisis trying to source and install the appropriate .iso file all from within the still-ailing VM itself, because that's where it needs to be installed. To my chagrin, it turns out the Guest Additions .iso is already there once you get Ubuntu running in a VirtualBox VM. You just spool up your Ubuntu VM, then in the VM view window you visit the Devices menu and select Insert Guest Additions CD image.... Wow. Well. That was easy, eh?
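For anyone who prefers the terminal once that virtual CD is inserted, the manual route looks roughly like this (treat it as a sketch; mount points and package names can vary by Ubuntu version):

```shell
# Prerequisites for building the kernel modules the Guest Additions need
sudo apt-get update
sudo apt-get install build-essential dkms linux-headers-$(uname -r)

# Mount the Guest Additions CD (after Devices > Insert Guest Additions CD image...)
sudo mkdir -p /mnt/cdrom
sudo mount /dev/cdrom /mnt/cdrom

# Run the installer, then reboot so the new drivers take effect
sudo sh /mnt/cdrom/VBoxLinuxAdditions.run
sudo reboot
```

After the reboot, the VM picks up proper display drivers and the resolution problem goes away.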

Now, for the long-ago promised revelation of my ultimate goal in all of this. Earlier tonight I dropped a line on Facebook about how I could see it being easy to get lost when you have multiple VM and remote desktop views open on the same machine. A friend and I went back and forth about it until I joked about remoting into the host machine from its own guest VM. The glorious result took a couple of hours to achieve, but the laughs and learning were worth it.


A night well spent.

Thanks for reading!!
- Steven Kitzes