2015-03-26

Troubleshooting Serial Communications

After two months of troubleshooting, countless gallons of coffee, endless hours of stress, several years off my life, and many square inches of cranial real estate lost to newfound baldness, I finally solved a longstanding, serious blocking bug with a single line of code. I don't know how useful the particulars might be to anyone who stumbles across this posting, but the thought process and methods used to narrow in on a solution might help other fledgling developers by giving them new angles from which to view problems. In any case, I want to document the process for myself, because it has been a very painful ride, and one I'd very much like to think I've learned from.

By way of context, I'm working on enabling two-way communication over serial COM ports using the Windows API. One application, written in C++, needs to communicate with another, written in C#. Both applications existed before this new use case was devised, and both applications were previously proven to successfully communicate over COM ports. However, when the two applications were set up to communicate with each other, the communications failed 100% of the time. I had to figure out why.

It's worth mentioning here that prior to finding myself attached to this project, I'd never been exposed to many of the concepts and technologies involved. I have some limited experience with C++, for instance, but not in a large-scale production environment. I've never worked with serial communications before, and never used the Windows API for C++, which is not as friendly to newcomers as other libraries can sometimes be. Some complexities were also encountered in the realm of threading, which I have some experience with, but not in the Windows environment. A big part of the reason this solution took two months is that I had to learn all of these concepts in parallel in order to start focusing in on the location and nature of the problem.

The first thing I did was to have someone walk me through the existing functionality and where the breakdown was occurring. It was explained and demonstrated to me that the client application, written in C++, was able to send a message to the server application, written in C#. The server reported successful receipt and parsing of the message, and reported a consistent attempt to respond. After that, there would be silence from both applications. The client reported that it was listening for a response from the server, but no response would arrive - or at least, the client did not report receiving it.

So began my adventure. My first move was to look at the client-side application and implement some kind of logging functionality. The existing code, having been whipped up in an apparent panic on a tight deadline, provided no logging - in fact, no output of any kind was visible to the user, and no comments or documentation were given in the code to help those who would later have to maintain or extend the application.

It took me several days just to implement logging, and there are a couple of reasons for this. First, as mentioned above, my experience with C++ was limited, and learning the ropes on that front is not trivial, especially where file access and streams are concerned. But the bigger stumbling block in this particular case was the unorthodox use of the standard output buffer.

The original author of the client-side software had been intercepting all standard output and copying it to a secondary buffer, where it was stored and later passed over a named pipe for use as input to an integrated service. Since there was no documentation of this method of message passing, it took many long, confusing hours and lots of help from more experienced developers for me to come to the understanding that every time I tried to use console output for debugging purposes, it was breaking integrated services because they were not expecting my debug messages to come in over the pipe! So console logging was out, and any and all logging had to go strictly to a log file on disk that could later be read and investigated (or to debug output in Visual Studio, where execution in debug mode was permissible). Whew...
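For what it's worth, the disk logger itself only needed to be a few lines. This is just a sketch of the shape it took - the file name and function are hypothetical, not the production code:

#include <fstream>
#include <string>

// Hypothetical sketch of a minimal append-only disk logger; writing straight
// to a file sidesteps the intercepted stdout entirely.
void logToDisk(const std::string& message) {
    // Append mode, so logs from successive runs accumulate in one file.
    std::ofstream log("client_debug.log", std::ios::app);
    if (log.is_open()) {
        log << message << '\n';
    }
}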

In any case, I did manage to get a logging feature set up and then it was off to the debugging phase. I set up logging output to report on program status throughout the execution of the client side code, as well as setting breakpoints to investigate the state of suspicious variables. This yielded some curious information.

The first thing I tried was to check the ReadFile(...) call on the client for errors. It turned out that ReadFile(...) was completing with error code 0, followed by a timeout. In other words, no errors were being thrown: the ReadFile(...) call was successful, and all arguments passed to it were valid, but nothing was being received before the given timeout elapsed. I tried setting the timeout to infinite, and this resulted in the ReadFile(...) call blocking, as you might expect, indefinitely.
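For anyone wanting to replicate this kind of check, here's a sketch of the pattern. This is a hedged reconstruction, not the production code, and the buffer size and timeout values are placeholders:

#include <windows.h>
#include <iostream>

// Read from an already-open COM port handle with a finite timeout,
// distinguishing a genuine failure from a "successful" empty timeout.
bool readResponse(HANDLE hPort) {
    COMMTIMEOUTS timeouts = { 0 };
    timeouts.ReadIntervalTimeout = 50;         // max ms between incoming bytes
    timeouts.ReadTotalTimeoutConstant = 2000;  // max ms for the whole read
    SetCommTimeouts(hPort, &timeouts);

    char buffer[256] = { 0 };
    DWORD bytesRead = 0;
    if (!ReadFile(hPort, buffer, sizeof(buffer), &bytesRead, NULL)) {
        // A real error: GetLastError() gives the reason.
        std::cerr << "ReadFile failed, error " << GetLastError() << std::endl;
        return false;
    }
    if (bytesRead == 0) {
        // My situation exactly: success, error code 0, and no data.
        std::cerr << "ReadFile succeeded but timed out with no data" << std::endl;
        return false;
    }
    return true;    // something actually arrived
}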

Since no error was being thrown, I assumed that the port was open and that the client had successfully connected to it. This assumption was reinforced by the fact that expected data had been verified arriving at the server, as explained above. However, just as a sanity check, I set breakpoints to observe the definition and state of the file handle used by the Windows API to represent and access the COM port itself. I verified, in this way, that the serial port handle carried the same definition and state when listening for a server response as it did when sending a message to the server. As far as the client application was concerned, it was a simple matter of no data coming in over a seemingly valid port.
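For reference, the handle in question comes from CreateFile(...), and the sanity check on it looks something like the following sketch. "COM3" is a placeholder for whichever port is actually in use:

#include <windows.h>
#include <iostream>

// Open a COM port via the Windows API and sanity-check the handle.
int main() {
    HANDLE hPort = CreateFileA(
        "\\\\.\\COM3",                  // the \\.\ prefix works for any port number
        GENERIC_READ | GENERIC_WRITE,   // two-way communication
        0,                              // COM ports cannot be shared
        NULL, OPEN_EXISTING, 0, NULL);

    if (hPort == INVALID_HANDLE_VALUE) {
        std::cerr << "CreateFileA failed, error " << GetLastError() << std::endl;
        return 1;
    }
    // In my case, this handle looked equally valid when sending and listening.
    CloseHandle(hPort);
    return 0;
}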

More sanity checks came over the following weeks. I started building simplified sandbox applications with a dual purpose: learning, so I would feel more comfortable with the Windows API and COM port communications in general, and verifying that the code worked as expected on both the C++ and the C# sides. I built a simple C++ program whose only mission in life was to contact the server with a hardcoded, known-good message, then listen for (and report the receipt of) a known-good response. It worked! This was my first sigh of relief, but it didn't yield any immediate solution.

Keeping my momentum up, I built a simulated server with the same strategy in mind, just to ensure that there wasn't some idiosyncrasy in the existing server code that made it behave oddly. As expected, the simplified, standalone sandbox server I whipped up worked. My sandbox C++ client was able to contact my sandbox C# server over the expected COM port; the C# server responded over the same port; and the C++ client received and reported the server's response. Everything worked! Unfortunately, also as expected, the simple sandbox server behaved exactly the same as the real server when used in conjunction with the real C++ client.

I felt I was back at square one. I had tried lots of things, and verified little more than the simple fact that everything should have been working! All the C++ code I wrote worked, and all the C# code worked. The ReadFile(...) and WriteFile(...) calls, even the CreateFile(...) calls, and all parameters passed to these functions were identical - as far as I could tell. I even went so far as to duplicate the sandbox code for building the COM port in the production application. This still availed nothing.

Then (this morning) something interesting happened. I had been going back and forth between testing my sandbox apps and their production counterparts, and I realized that after running my sandbox app successfully any number of times, any failed execution of the production client permanently broke the COM port, even for future attempts at running the supposedly known-good sandbox app! This didn't make much sense to me, but I also stumbled across the fact that running a C# version of the sandbox client seemed to repair the COM port. Something was so dramatically different between the COM port setup in the C++ and C# applications that the C# application not only ran more reliably, it actually repaired the damage done by the production client - damage that the sandbox C++ client wasn't able to address on its own.

I did a side-by-side comparison of the code in the C++ and C# applications to see how they set up the COM port (yes, even though I had written both). I saw that the examples I had followed in building the C++ application set only a few of the DCB parameters during initialization of the COM port. (DCB is a struct in the Windows API that holds all the control settings needed to manage a COM port.) It set up only the parameters needed to establish a COM port connection under known-good conditions. Since this had been working for me under most test conditions, and it hadn't even occurred to me that there were parameters not being set one way or another, I didn't think to look there. And it turned out that yes, there were DCB parameters I hadn't even known were going uninitialized, because most of the time I didn't need them.

The C# application, on the other hand, had been developed based on a more thorough set of tutorials and examples, and included full initialization of all DCB parameters; most importantly, the DTR (Data Terminal Ready) and the RTS (Request to Send) flags. Setting both of these explicitly to enabled or handshake mode, rather than the default disabled mode, suddenly fixed everything.
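For anyone fighting the same fight, the fix looks roughly like this. This is a hedged reconstruction rather than the production code, and the baud rate and framing settings are placeholders; the two flag assignments are the point:

#include <windows.h>

// Fully initialize the DCB for a serial port, including the DTR and RTS
// flags that default to disabled if never set explicitly.
bool configurePort(HANDLE hPort) {
    DCB dcb = { 0 };
    dcb.DCBlength = sizeof(DCB);
    if (!GetCommState(hPort, &dcb))     // start from the port's current state
        return false;

    dcb.BaudRate = CBR_9600;            // placeholder settings; match the server
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;

    // The fix: left unset, these stay disabled, and that was enough to
    // silence the conversation after the first message.
    dcb.fDtrControl = DTR_CONTROL_ENABLE;   // or DTR_CONTROL_HANDSHAKE
    dcb.fRtsControl = RTS_CONTROL_ENABLE;   // or RTS_CONTROL_HANDSHAKE

    return SetCommState(hPort, &dcb) != 0;
}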

Now I felt a broad spectrum of mixed emotions. On the one hand, I was elated that a two-month troubleshooting session had finally, finally, finally come to an end. The problem was solved, and all would rejoice. On the other hand, it took me two months to solve the problem, and there was no rejoicing in that, especially given the simplicity of the fix, which wasn't even a one-liner: I only had to add a couple of arguments to an existing function call. It was that simple.

No one else on the project seems terribly upset with me, though. After all, each time I asked for help, more experienced developers did not see the solution either, or have a particular idea of where to focus the search. All of us went on many wild goose chases trying to pin this problem down, and it's a relief to have it behind us. Also on the bright side, though the wild goose chases did not, for the most part, yield anything beyond validation of pre-existing suppositions, they did teach me a lot about new ways to troubleshoot, debug, and test assumptions to ensure that you do know what you think you know.

Thanks for reading!
- Steven Kitzes

2015-02-22

Virtualizing Ubuntu with VirtualBox in Windows 8.1

After my homework was done tonight, I decided to have a little fun and experiment with virtual machines and their relationships with their hosts. More on my ultimate goal with this experiment later, but suffice it to say that the necessary means to the end included running a virtual machine with some kind of GUI, accessible over remote desktop. I chose Ubuntu for this task because it's a free Linux-based OS that includes a simple GUI I'm familiar with.

I had a harder time than expected getting this all set up, for a few reasons. For one, it wasn't easy finding a comprehensive guide on how to do much of this. I did eventually find a very handy guide to setting up the basic Ubuntu VM in Oracle's VirtualBox here. The guide is straightforward for beginners, but there were still sticking points for me that I want to go over.

The first few basic steps to pulling this off (as listed in the guide linked above) are to get VirtualBox, get a copy of Ubuntu in .iso format (32-bit versions are more reliable and easier to work with for this), and install Ubuntu into a new VM in VirtualBox using the .iso. This is, however, already a departure from my initial expectation.

In the past, I've been running VMs from TurnKeyLinux. The fully provisioned TKL system images I've been using came in the form of appliances, which are extremely handy. VirtualBox allows you to import an appliance in a single operation; you don't even really need to configure anything. It will just run the VM with all the software you wanted provisioned. I most recently downloaded a LAMP stack running on a Debian distro that comes out of the box so conveniently pre-configured that you can access phpMyAdmin on the VM from your host in about five minutes from the time you initiate the appliance download. It's really that easy. But I digress.

In my folly, I assumed an appliance would exist for a simple, clean installation of Ubuntu. What I found instead were dead links, shady sources that I didn't fully trust, and an appliance that did run, but would hang on any login attempt. It took me a while before I realized, thanks to the tutorial linked above, that there was another way; that being the .iso installation.

So I did that. Lo and behold, using the handy dandy tutorial, I had Linux running in a VM inside ten minutes. But it looked horrible. Absolutely horrible. The resolution was capped at 640x480, and all the window contents were so badly cut off that the GUI was literally unusable.

This was the best I could get after my initial installation.


To make a long story just-a-little-less-long, an Ubuntu VM needs to have its device drivers and system applications installed after the OS is installed into the VM. In VirtualBox's case, this is done by installing the VirtualBox Guest Additions. But even armed with this knowledge, it's not easy figuring out how to get the blasted things into your VM.

I found some threads suggesting all kinds of terminal commands, sudo apt-get install blah-de-blah-de-blah. It might be due to my version of Ubuntu or other reasons (I never really figured it out), but none of this worked for me. I got errors about missing dependencies; I tried to manually install these, only to find absent dependencies chaining together; several steps along that chain I came to a place where a vague message about existing packages preventing dependency installation prompted me to flip tables and head in search of another solution.

Turns out that the Guest Additions themselves are contained on yet another .iso. I went through another minor crisis trying to source and install the appropriate .iso file all from within the still-ailing VM itself, because that's where it needs to be installed. To my chagrin, it turns out the Guest Additions .iso is already there once you get Ubuntu running in a VirtualBox VM. You just spool up your Ubuntu VM, then in the VM view window you visit the Devices menu and select Insert Guest Additions CD image.... Wow. Well. That was easy, eh?

Now, for the long-ago promised revelation of my ultimate goal in all of this. Earlier tonight I dropped a line on Facebook about how I could see it being easy to get lost when you have multiple VM and remote desktop views open on the same machine. A friend and I went back and forth about it until I joked about remoting into the host machine from its own guest VM. The glorious result took a couple of hours to achieve, but the laughs and learning were worth it.

Ridiculous.


A night well spent.

Thanks for reading!!
- Steven Kitzes

2015-02-13

A Brief Study of the Independent Set Problem

Prologue

I normally prefer to share posts of a more practical nature, but in celebration of a hard-fought semester fraught with theory, research, and not a whole lot of actual development in the curriculum, I've decided to share some of what I learned even if it isn't directly applicable to the kind of day-to-day development most of us face on the job.

One of my most vicious classes last semester was combinatorial algorithms, a course focusing on algorithm design and analysis with particular attention to time complexity. Needless to say, computers and programming didn't factor into the curriculum at all. This was strictly a math class. Our big assignment for the semester was to pick an existing problem in the math domain, describe it in detail, and analyze a variety of the known approaches to solving it.

I chose the independent set family of problems, and the professor - considered one of the most brutal on campus - wrote me to congratulate me on a flawless paper. So I figured it must not have been too shabby, and I'll pass it on to the masses! Now, let's talk about independent sets!

Introduction: What is an Independent Set?

If you already know about graphs and what an independent set is, you can skip ahead; otherwise, a little background knowledge is needed first. Fair warning: it would be quite helpful to know a little bit about graphs, time complexity, and dynamic programming (or at least recursion). But I'll do my best to make this digestible with a minimum of prerequisite knowledge.

The very least you need to know is that a graph is made of vertices and edges, so that the edges connect the vertices together, like connect-the-dots. You can have a graph with any number of vertices, connected to each other in any number of ways by any number of edges. We call the group of vertices together a set of vertices, and the group of edges is also a set. Any group of items that belongs to a set (but might not include all of the items in that set) is called a subset.

So, if we have a graph, let's call it G, composed of vertex set V and edge set E, an independent set within that graph is a subset of the vertices in G where none of the vertices in the subset are connected by any edge. In other words, an independent set of vertices in a graph is a subset of the graph's vertices with no two vertices adjacent.
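If it helps to see that definition as code, here is a literal translation - my own illustration, not from the paper - using an adjacency matrix to represent the graph:

#include <vector>

// A subset of vertices is independent if no two of its members share an edge.
bool isIndependent(const std::vector<std::vector<bool>>& adj,  // adjacency matrix of G
                   const std::vector<int>& subset) {           // candidate vertex subset
    for (size_t i = 0; i < subset.size(); ++i)
        for (size_t j = i + 1; j < subset.size(); ++j)
            if (adj[subset[i]][subset[j]])
                return false;   // two subset members are adjacent
    return true;
}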

Two important types of independent set we will talk about below include the maximal independent set and the maximum independent set (they are sometimes, but not always, different from each other). We will also discuss what a graph’s independence number is. For the most part, we're only concerned with maximum independent sets and independence numbers, but I want to talk about maximal independent sets because that will help us to understand just what a maximum independent set is.

A maximal independent set in G is a type of independent set that cannot grow any further: adding any vertex from the total vertex set V that wasn't already in the subset would force an adjacency into the subset. Remember, if we have two adjacent nodes in our subset, it's not independent. So a maximal independent set is a subset of the vertices in G that can't have any more of G's vertices added without stopping the subset from being independent.

There are two things I find worth noting about maximal independent sets. First, a given graph might have any number of maximal independent sets. Second, each maximal independent set might have a different total cardinality (number of vertices in the subset). In other words, a single graph might contain multiple maximal independent sets, each of varying size. The largest possible one of these is called the maximum independent set. This is the largest independent set found in a given G.

Note also that a graph can have multiple maximum independent sets, but in this case, all of the maximum independent sets will have the same cardinality.

Finally, whether there is only one maximum independent set, or there are many, we refer to the cardinality of maximum independent sets as the independence number of the graph to which they belong.

We'll use these terms and concepts more below to discuss a variety of problems relating to independent sets - most importantly to my research, the maximum independent set problem and the independent set decision problem.

Details: Problem Formulation and Motivation

As we noted above, the maximum independent set problem takes a graph as input, G = (V, E). The goal is to find a subset of V comprising one maximum independent set on G, which might just be one of several maximum independent sets in G. The solution can then be used to obtain the answer to the independence number problem, which takes the same input (a graph) but seeks, instead of a maximum independent set, the independence number of the graph (the number of vertices in the maximum independent set). All we have to do to get the independence number once we have the maximum independent set is return the cardinality of the maximum independent set, and we're done. Along the same lines, the solution to the maximum independent set problem also comes packaged with the solution to the independent set decision problem, which is looking for a simple Boolean value: true if the graph’s independence number is greater than or equal to some given number, k, and false otherwise.

In other words, we can formally define the independent set decision problem as taking a graph, G = (V, E) and an integer k, and returning a Boolean value, true if k is less than or equal to G’s independence number, or false if k is greater than that independence number. In the simplest terms, we have a number, and we want to know if there is an independent set in some graph that is as big as our number.

Input: a graph G = (V, E), and an integer k
Output: a Boolean true or false value

Believe it or not, these versions of the same problem – maximum independent set, independence number problem, and independent set decision – each has a specific real world application. The maximum independent set and independence number problems in particular have a wide variety of practical uses. The maximum independent set, for instance, is a problem that materializes in visual pattern recognition, molecular biology, and certain scheduling applications; and the independence number problem has applications in molecular stability prediction and network configuration optimization.

Astute readers will notice that I've omitted the very specific subject of the independent set decision problem, which (to refresh our memories) seeks only a Boolean value and no other data. This member of the independent set family of problems is not considered to have any practical "real world" application. That said, it plays a crucial role in the realm of theoretical research. It is considered necessary in order to apply the theory of NP-completeness to problems related to independent sets. That's a topic for a whole other kind of blog post.

Though the topic of NP-Completeness can get pretty hairy, the dominating factor of the time complexity of solving the decision version of this problem is the same as for the independence number problem or the maximum independent set problem. As mentioned above, the decision version of the problem uses the same underlying algorithm as the independence number problem, and simply examines the resulting data to a different end. In short, to get the solution to the decision problem, we reduce the independence number solution to a Boolean value. The independence number problem itself is the same as the maximum independent set problem, with the output reduced to a single value representing the cardinality of a maximum independent set in G. But we're getting a bit beside the point here.

Since practical applications mostly call for the maximum independent set and independence number versions of this problem, this is where the vast majority of interesting research on the topic of independent sets has been done. The complexity of the algorithms to solve all three problems is dominated overwhelmingly by the operations performed in the original maximum independent set problem. The transformations we have to make to produce solutions to the other versions of the problem are pretty trivial (counting cardinality, or comparing a given k value to the independence number to determine a Boolean value). For that reason, we'll focus on the research surrounding the maximum independent set problem.

The Brute Force Approach – Slow but Simple

Now let's talk about how we actually solve this problem. The brute force approach to the maximum independent set problem is very simple in principle. In short, all we need to do is iterate over all possible vertex subsets of G, and then, for each subset, check its vertices for pair-wise adjacency. There are 2^n possible subsets, where n is, of course, the number of vertices in G, yielding an exponential time complexity of O(2^n); the checking of vertices within each subset adds a (relatively insignificant) polynomial factor to the complexity, referred to in this case as poly(n). We consider the polynomial factor insignificant because no polynomial factor will ever dominate the exponential factor of O(2^n), and so it becomes irrelevant for large graphs.

In summary, for each vertex subset S of V, we check all vertices in S for adjacencies. If an adjacency exists, this subset can be thrown out, because the adjacency violates the definition of an independent set. As we iterate over subsets, we simply measure each and track which is the largest. For the maximum independent set problem, we return the independent set that ends up providing the largest number of vertices over all iterations (this being the largest, read maximum, independent set). This ends up taking a total of O(2^n · poly(n)) time. For the independence number problem, instead of returning the largest independent set, we return only the cardinality of the largest independent set, which we can trivially store while iterating over subsets. Finally, in the case of the independent set decision problem, we take the independence number gained via the above brute force algorithm, trivially compare it against our given k, and return the appropriate Boolean value, yielding a worst case that is still O(2^n · poly(n)) time.
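To make that concrete, here is a small, self-contained C++ sketch of my own (not from the paper) that computes the independence number by enumerating subsets as bitmasks, and shows how trivially the decision problem falls out of it:

#include <algorithm>
#include <bitset>
#include <iostream>
#include <vector>

// Brute force: try every subset of V, keep the largest independent one.
// Only practical for small n, since there are 2^n subsets to try.
int bruteForceIndependenceNumber(const std::vector<std::vector<bool>>& adj) {
    int n = (int)adj.size();
    int best = 0;
    for (unsigned mask = 0; mask < (1u << n); ++mask) {   // each mask is one subset of V
        bool independent = true;
        for (int i = 0; i < n && independent; ++i)
            for (int j = i + 1; j < n && independent; ++j)
                if ((mask >> i & 1) && (mask >> j & 1) && adj[i][j])
                    independent = false;                  // adjacency found: throw subset out
        if (independent)
            best = std::max(best, (int)std::bitset<32>(mask).count());
    }
    return best;
}

int main() {
    // A path on 4 vertices, 0-1-2-3; its independence number is 2.
    std::vector<std::vector<bool>> adj = {
        {0, 1, 0, 0},
        {1, 0, 1, 0},
        {0, 1, 0, 1},
        {0, 0, 1, 0}
    };
    int alpha = bruteForceIndependenceNumber(adj);
    std::cout << "independence number: " << alpha << std::endl;

    // The decision problem reduces to one comparison against k:
    int k = 2;
    std::cout << "independent set of size " << k << " exists: "
              << (alpha >= k ? "true" : "false") << std::endl;
    return 0;
}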

Serious Business: Strategy and Analysis of an Improved Algorithm

Before we get too excited, it's only fair to say right up front that we are not going to beat exponential time complexity with any known algorithm for solving this problem. That said, a fair amount of research has been done on it, yielding impressive improvements to the base of exponentiation. This, in turn, has led to vastly improved run times in real world applications that rely on solutions to the independent set family of problems. (We're going to start getting very technical here.)

Following Jeff Erickson’s guide to the optimization of the independence number algorithm, we see that there are a series of steps that can be taken in turn to improve the performance of our algorithm. As a first step toward understanding the complicated means of optimization, Erickson provides a recursive formulation of an algorithm for obtaining the independence number of a graph, G (shown below). Next, he shows that the worst case scenario for that recursive algorithm can be split into subcases, some of which are redundant recursive calls that do not need to be performed in duplicate. Eliminating these duplicate calls, obviously, improves run time complexity. Finally, he shows that this improvement can be repeated on the resultant worst case, splitting that into further subcases, some of which are also redundant; and so on and so forth, continually improving the run time complexity in seemingly small, but ultimately meaningful, increments. To begin, observe Erickson’s most elementary version of the algorithm in its recursive form:

MaximumIndSetSize(G):
    if G = { empty set }
        return 0
    else
        v = any node in G
        withv = 1 + MaximumIndSetSize( G \ N(v) )
        withoutv = MaximumIndSetSize( G \ {v} )
        return max { withv, withoutv }

To clarify the notation here, N(v) refers to the neighborhood of v, meaning the set that includes v and all of its adjacent neighbors, but no other vertices. The backslash symbol refers to set-wise exclusion or "subtraction." For example:
{ a, b, c } \ { b } yields { a, c }.
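For the programmers in the audience, here is a C++ rendering of Erickson's pseudocode. This is my own translation, a sketch that represents the set of vertices still in G as a bitmask, so it's practical only for small graphs:

#include <vector>

// Direct translation of MaximumIndSetSize. The unsigned 'g' is a bitmask of
// the vertices still in G: bit v set means vertex v remains. Good for up to
// 32 vertices, which is plenty for illustration.
int maximumIndSetSize(const std::vector<std::vector<bool>>& adj, unsigned g) {
    if (g == 0)                                   // G = { empty set }
        return 0;

    int v = 0;                                    // v = any node in G:
    while (!(g >> v & 1)) ++v;                    // take the lowest remaining vertex

    unsigned nv = 1u << v;                        // N(v): v plus its remaining neighbors
    for (int u = 0; u < (int)adj.size(); ++u)
        if ((g >> u & 1) && adj[v][u]) nv |= 1u << u;

    int withv = 1 + maximumIndSetSize(adj, g & ~nv);        // G \ N(v)
    int withoutv = maximumIndSetSize(adj, g & ~(1u << v));  // G \ {v}
    return withv > withoutv ? withv : withoutv;             // max { withv, withoutv }
}

// Call with all n vertices present: maximumIndSetSize(adj, (1u << n) - 1)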

As you can see at a glance, the number of recursive calls at each iteration is doubled! This yields the same exponential growth of time complexity we saw in the brute force approach. What’s worse, we see that in the worst case, G \ N(v) = G \ {v}, meaning that v has no neighbors! This means that in both recursive calls at each iteration, our vertex subset is only being reduced in size by 1. The result is the recurrence equation T(n) = 2T(n - 1) + poly(n), yielding a time complexity of O(2^n · poly(n)). T(x) here represents the time required to solve a problem of size x.

At this stage, though no clear improvement has yet been made, we already have the information we need to make our first improvement to the algorithm. As noted above, in the worst case, G \ N(v) = G \ {v}, meaning that v has no neighbors. However, if v has no neighbors, then it is guaranteed to be included in every maximal independent set, including maximum independent sets, because a node that never has neighbors is always independent! As a result, one of the recursive calls, the one assigning the value withoutv, becomes superfluous, because no maximum independent set can exclude v if it never has neighbors in any subset. At the same time (but on the other hand), if v does have at least one neighbor, then G \ N(v) will have at most (n – 2) vertices. That's quite a brain-full, so let me explain it another way, with concrete examples.

Let's say G has 10 nodes total, so n is 10. If our random node v has at least 1 neighbor, then the neighborhood of v has size 2 or greater. And since N(v) has size 2 or greater, G \ N(v) has at most 8 vertices: 10, minus some number that is 2 or greater. This line of reasoning yields a new recursive formula with a dramatically improved run time complexity:

T(n) ≤ O(poly(n)) + max{T(n - 1), [T(n - 1) + T(n - 2)]}

The run time complexity of this improved version of the recursive algorithm is reduced to T(n) ≤ O(1.61803398875^n) after just one subcase division! (That base is the golden ratio, which governs recurrences of the Fibonacci form T(n - 1) + T(n - 2).) That might not seem like much of an improvement, but in a little bit we're going to see some concrete examples of just how big an improvement this can make at execution time. In the meantime, following this line of logic and making further observations, we can immediately improve on our new worst case by splitting that into subcases – some of which are, again, redundant and superfluous.

Note that our new worst case is the case in which both T(n – 1) and T(n – 2) must be calculated. In other words, this is the case in which our recursive calls are doubled, and in which the graphs we pass along are of relatively large size (only 1 or 2 vertices smaller than the graph from which they were derived). In this specific case, v has exactly 1 neighbor; let's call this neighbor w. Specifically because v has exactly 1 neighbor, we will find that either v or w will appear in every maximal independent subset of our original G. Think about it: if w is in a maximal independent set, v simply can't be, because they're neighbors, and two neighbors can't both be in the same independent set. Conversely, if w is not in a maximal independent set, v will be, because its only neighbor is not in the set, which frees v up to join without fear of standing by an adjacent neighbor. Taking this line of reasoning yet another step further, we see that given a maximal independent set containing w, we can remove w and instead include v. Therefore, we can conclude that some maximal independent set will include vertex v (whether this maximal independent set ends up being maximum or not).

Note also that if we know there is a strict pair-wise relationship between v and w (due to the given fact that v has 1 adjacent neighbor), we never have to evaluate the recursive case in which the call MaximumIndSetSize(G \ {v}) needs to be made. Our case evaluating T(n – 1) + T(n – 2) becomes only T(n – 2). Furthermore, expanding on our idea from the last worst-case scenario subcase split, we will see that if the degree of v is 2 or more, then N(v) will contain at least 3 vertices, and therefore G \ N(v) will have at most (n – 3) vertices. In such a case, the max{...} function of our recursive algorithm is expanded:

T(n) ≤ O(poly(n)) + max{
T(n - 1),
T(n - 2),
T(n - 1) + T(n - 3)
}

This second improvement yields a time complexity of T(n) ≤ O(1.46557123188^n), a smaller but still valuable improvement. On the other hand, note that the complexity of the algorithm is growing rapidly here. Not the time complexity, but the algorithmic complexity, as well as the difficulty we find in determining where to make future optimizations. At each step we take toward a faster algorithm, things are getting much more complicated. At this point, the complexity of the algorithm and the steps required to make further improvements go beyond the scope of what I can talk about in a blog post (even one as ridiculously long and complicated as this). However, Erickson does go on to show that further clever observations can be made continuously, yielding even better time complexities. Two further observations of worst-case splitting and problem structure yield time complexities T(n) ≤ O(1.44224957031^n) and T(n) ≤ O(1.3802775691^n), respectively. According to Erickson, the best published time complexities for this problem are solutions quoted by Fomin, who achieved T(n) ≤ O(1.2210^n), and Bourgeois, who managed to achieve an impressive T(n) ≤ O(1.2125^n).
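As a rough illustration (again my own sketch, not Erickson's code), the first two improvements amount to short-circuiting the second recursive call whenever the chosen vertex has degree 0 or 1:

#include <vector>

// Same bitmask representation as before. The changes from the naive version:
// we pick a vertex of maximum remaining degree, and if that degree is 0 or 1,
// we know v belongs to some maximal independent set, so the "without v"
// branch is skipped entirely.
int improvedIndSetSize(const std::vector<std::vector<bool>>& adj, unsigned g) {
    if (g == 0)
        return 0;

    int v = -1, deg = -1;                         // choose v with maximum degree in G
    for (int u = 0; u < (int)adj.size(); ++u) {
        if (!(g >> u & 1)) continue;
        int d = 0;
        for (int w = 0; w < (int)adj.size(); ++w)
            if ((g >> w & 1) && adj[u][w]) ++d;
        if (d > deg) { deg = d; v = u; }
    }

    unsigned nv = 1u << v;                        // N(v): v plus its remaining neighbors
    for (int u = 0; u < (int)adj.size(); ++u)
        if ((g >> u & 1) && adj[v][u]) nv |= 1u << u;

    int withv = 1 + improvedIndSetSize(adj, g & ~nv);
    if (deg <= 1)                                 // degree 0 or 1: no need to branch
        return withv;
    int withoutv = improvedIndSetSize(adj, g & ~(1u << v));
    return withv > withoutv ? withv : withoutv;
}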

Considering we're still ultimately stuck with exponential time, it may seem like a lot of work needs to be done for minuscule returns. After all, 1.221 and 1.2125 are almost the same number, right? On the other hand, certain practical applications demand tremendous effort, and therefore any ounce of efficiency that an improved algorithm can provide becomes valuable. Larson includes as an example in his writings a reference to the prediction of molecular stability in the tetrahedral C100 isomer. For this example, the time complexity T(n) of determining the independence number of this molecule is T(100). Therefore, working out the arithmetic, we see the run time for this example reduced as follows (ignoring poly(n)):

Algorithm                               Time Complexity        Run Time
Brute Force                             O(2^100)               1.26765e30
Best method described by Erickson       O(1.3802775691^100)    9.923e13
Fomin et al                             O(1.221^100)           4.69425e8
State-of-the-art described by Robson    O(1.1889^100)          3.26989e7

With these concrete numbers to reference, we can see clearly that the jump from the brute force method to even a slightly optimized solution is more than worth the effort. The first jump alone results in an improvement to run time of many orders of magnitude! After that, we admittedly begin to see diminishing returns for the effort taken to develop improved algorithms. For instance, I personally suspect the extra effort by Bourgeois to beat Fomin by a mere 0.0085 on the base of exponentiation was of debatable value as anything more than a stepping stone. In my personal opinion, the effort invested in developing Robson's state-of-the-art solution (discussed below) was likewise not worth such a small gain over Fomin's algorithm. At least, not in the case of the C100 stability prediction problem. That said, 100 may be considered a small quantity of vertices in some problem domains, such as large network configuration optimization or scheduling problems. In those cases, it may actually be worth the trouble of researching and developing the absolute best state-of-the-art in terms of fast, efficient algorithms.

Robson's Incredible State of the Art, Conclusions

Research has yielded relatively fast algorithms for solving the independent set family of problems. Some, notably by Bourgeois, are capable of a flabbergasting time complexity as low as T(n) ≤ O(1.0854^n), using strategies of case analysis and reduction rules. However, it's worth noting that algorithms this fast come with severe limitations. For example, that Bourgeois algorithm is limited to graphs with average vertex degree 3. That is a serious limitation! The fastest published algorithm for solving the independent set family of problems on generic graphs (with unbounded vertex degree), also by Bourgeois, runs in only T(n) ≤ O(1.2125^n).

The absolute state of the art for solving the independent set family of problems is actually a computer-generated algorithm. The algorithm is described in an unpublished technical report by Robson, and is claimed to boast a time complexity of only O(1.1889^n). However, the trade-off at this level of acceleration is mind-boggling algorithmic complexity. Just the high level description of the algorithm, excluding the rest of the accompanying report, fills fully 15 pages, which excludes any in-depth discussion of the algorithm from the scope of most academic papers, let alone blogs. I'm not an expert on algorithmic analysis at quite that level, but I'd wager such an algorithm would be excluded even from use in real applications, due to its sheer complexity.

As algorithms grow in efficiency, but also in complexity, it gets tough to make universal recommendations on which to use – or, in the case of the algorithms described above, how deep through the iterations of optimization a project team should go to try and improve the performance of their algorithm. The context of the project would have to be taken into account.

For projects dealing with small graphs, it wouldn't be worth the effort to interpret and implement something like Robson's solution. Something like the example described by Erickson and outlined above would probably be good enough. For projects dealing with graphs of moderate size and predictably small average vertex degrees throughout, but needing accelerated solutions, something like the algorithm developed by Bourgeois would provide a superior middle ground.

Thanks for reading!
- Steven Kitzes