2014-02-22

Theory, Practice, and Balance

When I landed my first software gig almost four years ago, I was a self-taught programmer. Not to say that I didn't have help along the way; I had loads of tutoring and mentoring from friends. Point being, it was my own motivation that kept me going, and my own intuition that determined the direction my learning process would take. I had no formal training, schooling or guidance to keep me on track or ensure that I'd learn any particular skill set or theory in any kind of organized way.

Especially in the beginning, before I was ever employed as a developer, my goal was to get to a point where my learning could become a self-sufficient process. For example, an early task I assigned myself was to learn enough about a programming language to get "hello, world" on the screen so I'd have something to build on from there. I explored IDEs and debuggers so I could figure out what was breaking in my rudimentary programs. I dove into docs, grabbed third-party libraries to help, and set up runtime environments. I don't think many in the community would deny that these are useful skills, maybe even important or essential ones. But essential or not with respect to any kind of career path, I was learning them haphazardly, based on immediate need.

By the time I started at my first job, I had made it to the point where I could jump onto a project and start delivering serviceable work within a few days, but my code wasn't what you might call optimal. It quickly became evident that there were a lot of gaps in my knowledge, skills, and experience that were holding me back. I understood, more or less, how to solve simple problems with code and how to compile and debug; I even stuck to some common best practices in program design that I had picked up along the way. But I sure didn't know anything about math, theory, or architecture. Set theory, data structures, state machines, binary trees, Boolean algebra, recursion; even the supposedly basic stuff like stats, calc, and linear algebra; let alone dynamic programming, approximation, or any of the other heavy theory you stagger through in upper-division computer science coursework. At the time, I just knew how to code, even if the code came out only slightly more organized than a trash can filled with spaghetti, and hey - for that first job, it was enough.

I know from experience that this is a common attitude among self-taught programmers. "Hey, it compiles, and it works for most cases, what do you want?" Well, depending on who you want to work for and what you want to do with your life, there's plenty there to want, especially as the low-hanging fruit of the software development world is phased out of the career curriculum almost entirely in favor of more demanding architectural and mathematical problems.

Now that I'm in the thick of an academic Computer Science program to fill some of the early gaps in my knowledge, I'm immersed in the polar opposite of a practicality-focused paradigm. This semester, for example, I'm taking classes in discrete math, formal proofs, and logic; set and language theory and automata; and operating system theory. So far, over a month into the semester, not a single line of code has been requested or demonstrated by any of my professors. In all honesty, hyperbole aside, a computer has not even been mentioned in the classroom. Where is the code? The practice? The application? Over a year into the curriculum and no one has so much as mentioned a debugger, spoken a word on environments, or given a nod to APIs or anything of the sort. They are completely ignoring the fact that one day, presumably, we'll need to actually apply all this math and theory to something. They are training us all to be mathematicians and PhD candidates.

Both of the above situations - the plight of the inexperienced CompSci grad, and the crude hacking style of the common self-taught developer - probably sound familiar to anyone who has spent significant time on the job as a professional developer. Even with my limited experience and time in the field, I've met both. Employers have complaints about these two types of entry-level job candidates, and I think their points are valid. No one wants to hire a kid who can technically write a program, but can't for the life of him do a good job of it because he has never considered testing, or any kind of process, or software engineering principles, or the fact that someone - maybe even him- or herself - is going to have to maintain that code one day. On the other hand, it's rarely a good idea to hire a math whiz with a CompSci degree who has, ironically, no clue how to open Excel, let alone an IDE. I have had professors who prototype in notepad.exe and teach three-generation-old UI libraries because it's what they know best, because they're too lazy to keep up with the tech, and because it's too easy to fall back on the excuse that, hey, sometimes you have to maintain legacy code. True as that may be, and though it may be a separate issue, it's part of the larger problem. At any rate, what does it tell you about the practical skills of the resulting graduates?

I hear complaints from employers that CompSci grads too often come out of school knowing such-and-such theorem and So-and-So's Law, but with no idea how to use any of it in the workplace; and that self-taught programmers with the drive to succeed have taught themselves how to compile and debug, but haven't the slightest clue how to improve their algorithms - or, heaven forbid, toss in a comment here and there. So, what do we do about it?

I understand that opinions are a dime a dozen and my commentary is a drop in the bucket of sentiment on this topic, but I have lately felt the need to share it anyway, because these issues have impacted me both as a member of the workforce and as a self- and university-taught developer. It seems, frankly, outrageous that more effort isn't being put into a combined emphasis on real-world application and theory. Students have to be given some kind of bigger picture along with hands-on experience, so that they can connect the theory to the application. A few schools do seem to be getting it; I have one friend who graduated from a technical college with boatloads of practical experience, in addition to a detailed understanding of the math and theory on which best practices and problem solving are founded. But this guy is a painfully rare exception. I, for example, would never have learned how to set up my machine and get myself jump-started on a project without my own extracurricular work, industry experience, and attention from concerned personal contacts. Not to say that students shouldn't be doing any extracurricular work, but leaving the critical element of hands-on experience out of the schooling process by policy is counterproductive to the ultimate goal of schooling: preparing students for the real world.

As for the self-taught developer and self-driven learner, the onus is on the community to give a sense of importance to the concepts underlying best practices and solutions to complex computing problems. When someone wet behind the ears comes onto a forum to ask a question, instead of dismissing it as stupid, or shunning them with a condescending LMGTFY GTFO, or telling them to just do their project in an easier language, it is up to those with more experience to guide them toward better solutions and opportunities for self-education. Otherwise, who do we have to blame when our co-workers are writing hacked-up code that we have to fix for them?

Thanks for reading!
- Steven Kitzes
