 Saturday, 05 November 2016
So You've Inherited a Legacy Codebase

During my younger days, I worked for a company that made a habit of strategic acquisition.  They didn't participate in Time Warner style mergers, but periodically they would purchase a smaller competitor or a related product.  And on more than one occasion, I inherited the lead role for assimilating software from one of these organizations.  Lucky me, right?

When I think about how to describe this to someone, a plumbing analogy comes to mind.  Over the years, I have learned enough about plumbing to handle most tasks myself.  And this has exposed me to an irony: discovering a small leak in a fitting plugged by grit or debris.  I find this ironic because two wrongs make a right.  A dirty, leaky fitting reaches a sub-optimal equilibrium, and you spring a leak when you clean it.

Legacy codebases have this issue as well.  You inherit some acquired codebase, fix a tiny bug, and suddenly the defect floodgates open.  And then you realize the perilousness of your situation.

While you might not have come by it in the same way that I did, I imagine you can relate.  At some point or another, just about every developer has been thrust into supporting some creaky codebase.  How should you handle this?

Put Your Outrage in Check

First, take some deep breaths.  Seriously, I mean it.  As software developers, we seem to hate code written by others.  In fact, we seem to hate our own code if we wrote it more than a few months ago.  So when you see the legacy codebase for the first time, you will feel a natural bias toward disgust.

But don't indulge it.  Don't sit there cursing the people that wrote the code, and don't take screenshots to send to the Daily WTF.  Not only will it do you no good, but I'd go so far as to say that this is actively counterproductive.  Deciding that the code offers nothing worth salvaging makes you less inclined to try to understand it.

The people that wrote this code dealt with older languages, older tooling, older frameworks, and generally less knowledge than we have today.  And besides, you don't know what constraints they faced.  Perhaps bosses heaped delivery pressure on them like crazy.  Perhaps someone forced them to convert to writing in a new, unfamiliar language.  Whatever the case may be, you simply didn't walk in their shoes.  So take a breath, assume they did their best, and try to understand what you have under the hood.

Get a Visualization of the Architecture

Once you've settled in mentally for this responsibility, seek to understand quickly.  You won't achieve this by cracking open the code and looking through random source files.  But, beyond that, you also won't achieve it by looking at their architecture documents or folder structures.  Reality gets out of sync with intention, and those things start to lie.  You need to see the big picture, but in a way that lines up with reality.

Look for tools that map dependencies and can generate a visual representation of the codebase.  Plenty of these tools exist, and they can automate the depiction for you.  Find one and employ it.  This will tell you whether the architecture actually resembles the neat diagram you were given.  And, more importantly, it will get you to a broad understanding much more quickly.

Characterize

Once you have the picture you need of the codebase and the right frame of mind, you can start doing things to it.  And the first thing you should do is to start writing characterization tests.

If you have not heard of them before, characterization tests have the purpose of, well, characterizing the codebase.  You don't worry about correct or incorrect behaviors.  Instead, you accept at face value what the code does, and document those behaviors with tests.  You do this because you want to get a safety net in place that tells you when your changes affect inputs and outputs.
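To make that concrete, here is a minimal sketch of a characterization test in C#, assuming xUnit and a hypothetical legacy InvoiceCalculator class.  The expected value comes from running the legacy code and recording what it actually does, not from any spec.

    using Xunit;

    public class InvoiceCalculatorCharacterizationTests
    {
        // We don't assert what the code *should* do; we pin down what it
        // *does*.  The expected value below came from observing the legacy
        // code's actual output.
        [Fact]
        public void CalculateTotal_WithNoLineItems_ReturnsNegativeOne()
        {
            var calculator = new InvoiceCalculator();

            var total = calculator.CalculateTotal(new Invoice());

            // Surprising behavior, perhaps, but something out there may
            // depend on it, so we document it with a test.
            Assert.Equal(-1m, total);
        }
    }

Once dozens of these tests exist, any change that alters an input/output pairing fails the suite immediately, and your safety net has done its job.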

As this XKCD cartoon ably demonstrates, someone will come to depend on the application's production behavior, however problematic.  So with legacy code, you cannot simply decide to improve a behavior and assume your users will thank you.  You need to exercise caution.

But characterization tests do more than just provide a safety net.  As an exercise, they help you develop a deeper understanding of the codebase.  If the architectural visualization gives you a skeleton understanding, this starts to put meat on the bones.

Isolate Problems

With a reliable safety net in place, you can begin making strategic changes to the production code beyond simple break/fix.  I recommend that you start by finding and isolating problematic chunks of code.  In essence, this means identifying sources of technical debt and looking to improve, gradually.

This can mean pockets of global state or extreme complexity that make for risky change.  But it might also mean dependencies on outdated libraries, frameworks, or APIs.  In order to extricate yourself from such messes, you must start to isolate them from business logic and important plumbing code.  Once you have it isolated, fixes will come more easily.
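As a hedged illustration of what that isolation might look like in C#, suppose your business logic calls an outdated third-party client directly (AncientSoapClient here stands in for whatever legacy dependency you face):

    // Business logic depends on this small interface, not on the old library.
    public interface IRateProvider
    {
        decimal GetRate(string currencyCode);
    }

    // The only class in the codebase that touches the legacy dependency.
    // When you eventually replace the library, the change stays confined here.
    public class LegacySoapRateProvider : IRateProvider
    {
        public decimal GetRate(string currencyCode)
        {
            var client = new AncientSoapClient();  // hypothetical outdated library
            return client.FetchRate(currencyCode);
        }
    }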

Evolve Toward Modernity

Once you've isolated problematic areas and archaic dependencies, it certainly seems logical to subsequently eliminate them.  And, I suggest you do just that as a general rule.  Of course, sometimes isolating them gives you enough of a win since it helps you mitigate risk.  But I would consider this the exception and not the rule.  You want to remove problem areas.

I do not say this idly nor do I say it because I have some kind of early adopter drive for the latest and greatest.  Rather, being stuck with old tooling and infrastructure prevents you from taking advantage of modern efficiencies and gains.  When some old library prevents you from upgrading to a more modern language version, you wind up writing more, less efficient code.  Being stuck in the past will cost you money.

The Fate of the Codebase

As you get comfortable and take ownership of the legacy codebase, never stop contemplating its fate.  Clearly, in the beginning, someone decided that the application's value outweighed its liability factor, but that may not always continue to be true.  Keep your finger on the pulse of the codebase, while considering options like migration, retirement, evolution, and major rework.

And, finally, remember that taking over a legacy codebase need not be onerous.  However shocked I initially felt at the state of some of those acquisitions, some of them turned into rewarding projects for me.  You can derive a certain satisfaction from taking over a chaotic situation and gradually steering it toward sanity.  So if you find yourself thrown into this situation, smile, roll up your sleeves, own it, and make the best of it.

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Learn more about how CodeIt.Right can identify technical debt, document it, and gradually improve legacy code.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Saturday, 05 November 2016 10:43:00 (Pacific Standard Time, UTC-08:00)
     Wednesday, 19 October 2016

    The balance among types of feedback drives some weird interpersonal dynamics.  For instance, consider the rather trite (if effective) management technique of the "compliment sandwich."  Managers precede and follow a negative piece of feedback with compliments.  In that fashion, the compliments form the "bun."

    Different people and different groups have their preferences for how to handle this.  While some might bend over backward for diplomacy, others prefer environments where people hurl snipes at one another and simply consider it "passionate debate."  I have no interest in arguing for any particular approach -- only in pointing out the variety.  As it turns out, we humans find this subject thorny.

    To some extent, this complicated situation extends beyond human boundaries and into automated systems.  While we might not take quite the same umbrage as we would with humans, we still get frustrated.  If you doubt this, I challenge you to tell me that you have never yelled at a compiler because you were sure your code had no errors.  I thought so.

    So from this perspective, I can understand the frustration with static analysis feedback.  Often, when you enable a new static analysis engine or linting tool on a codebase, the feedback overwhelms.  28,326 issues in the code can demoralize anyone.  And so the temptation emerges to recoil from this feedback and turn off the tool.

    But should you do this?  I would argue that usually, you should not.  But situations do exist when disabling a static analyzer makes sense.  Today, I'll walk through some examples of times you might suppress such a warning.

    False Positives

    For the first example, I'll present something of a no-brainer.  However, I will also present a caveat to balance things.

    If your static analysis tool presents you with a false positive, then you should suppress that instance of the false positive.  (No sense throwing the baby out with the bathwater by suppressing the entire rule.)  Assuming you have a true false positive, the analysis warning constitutes noise, not signal.  Get rid of it.

    That being said, take care with labeling warnings as false positives.  A false positive means that the tool has indicated a problem and a potential error and gotten it wrong.  A false positive does not mean that you disagree with the warning or don't care.  The tool's wrongness is a good reason to suppress -- you not liking its prognosis falls short of that.
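    In C#, for example, suppressing a single instance might look something like this sketch using the standard SuppressMessage attribute (the rule and justification here are purely illustrative):

        using System.Diagnostics.CodeAnalysis;

        public class ReportScheduler
        {
            // Suppresses one occurrence of one rule -- not the rule itself --
            // and forces you to record why.
            [SuppressMessage("Microsoft.Reliability",
                "CA2000:DisposeObjectsBeforeLosingScope",
                Justification = "The scheduler takes ownership of the timer and disposes it.")]
            public void Start()
            {
                // ...
            }
        }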

    Non-Applicable Code

    For the second kind of instance, I'll use the term "non-applicable code."  This describes code for which you have no interest in static analysis warnings.  While this may sound contradictory to the last point, it differs subtly.

    You do not control all code in your codebase, and not all code demands the same level of scrutiny about the same concepts.  For example, do you have code in your codebase driven by a framework?  Many frameworks force some sort of inheritance scheme on you or the implementation of an interface.  If the name of a method on a third party interface violates a naming convention, you need not be dinged by your tool for simply implementing it.

    In general, you'll find warnings that do not universally apply.  Test projects differ from your production code.  GUI projects differ from data access layer ones.  And NuGet packages or generated code remain entirely outside of your control.  Assuming the decision to use these things happened in the past, turning off the analysis warnings makes sense.
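    For instance, many .NET analysis tools can be told to skip code marked as generated.  A hedged sketch, assuming your code generator doesn't already emit the attribute itself:

        using System.CodeDom.Compiler;

        // Analysis tools that honor this attribute will skip the type,
        // keeping the warning list focused on code you actually control.
        [GeneratedCode("MyHypotheticalGenerator", "1.0")]
        public partial class CustomerDataSet
        {
            // machine-produced members live here
        }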

    Cosmetic Code Counter to Your Team's Standard

    So far, I've talked about the tool making a mistake and the tool getting things right on the wrong code.  This third case presents a thematically similar consideration.  Instead of a mistake or misapplication, though, this involves a misfit.

    Many tools out there flag purely cosmetic concerns.  They'll flag field variables not prepended with underscores or methods with camel casing instead of Pascal casing.  Assuming those jibe with your team's standards, you have no issues.  But if they don't, you have two options: change the tool or change your standard.  Generally speaking, you probably want to err on the side of complying with broad standards.  But if your team is settled on its standard, then turn off those warnings or configure the tool.

    When You're Buried in Warnings

    Speaking of warnings, I'll offer another point that relates to them, but with an entirely different theme.  When your team is buried in warnings, you need to take action.

    Before I talk about turning off warnings, however, consider fixing them en masse.  It may seem daunting, but I suspect you might find yourself surprised at how quickly you can wrangle them down to a manageable number.

    However, if this proves too difficult or time-consuming, consider force ranking the warnings, and (temporarily) turning off all except the top, say, 200.  Make it part of your team's work to eliminate those, and then enable the next 200.  Keep at it until you eliminate the warnings.  And remember, in this case, you're disabling warnings only temporarily.  Don't forget about them.
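    In the Visual Studio world, one way to stage this is with a ruleset file.  A hypothetical sketch -- the specific rule IDs are just examples of what you might rank high or low:

        <?xml version="1.0" encoding="utf-8"?>
        <RuleSet Name="Phase 1: highest-priority rules only" ToolsVersion="14.0">
          <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
            <!-- Keep the rules you ranked at the top enabled... -->
            <Rule Id="CA2000" Action="Warning" />
            <Rule Id="CA1062" Action="Warning" />
            <!-- ...and temporarily silence the rest until the next phase. -->
            <Rule Id="CA1707" Action="None" />
          </Rules>
        </RuleSet>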

    When You Have an Intelligent Disagreement

    Last up comes the most perilous reason for turning off static analysis warnings.  This one also happens to occur most frequently, in my experience.  People turn them off because they know better than the static analysis tool.

    Let's stop for a moment and contemplate this.  Teams of workaday developers blithely conclude that they know their business better than the people whose entire job consists of writing the static analysis tools that generate these warnings.  Really?  Do you like those odds?

    Below the surface, disagreement with the tool often masks resentment at being called "wrong" or "non-compliant."  Turning the warnings off thus becomes a matter of pride or mild laziness.  Don't go this route.

    If you want to ignore warnings because you believe them to be wrong, do research first.  Only allow yourself to turn off warnings when you have a reasoned, intelligent, research-supported argument as to why you should do so.

    When in Doubt, Leave 'em On

    In this post, I have gingerly walked through scenarios in which you may want to turn off static analysis warnings and guidance.  For me, this exercise produces some discomfort because I rarely find this advisable.  My default instinct is thus not to encourage such behavior.

    That said, I cannot deny that you will encounter instances where this makes sense.  But whatever you do, avoid letting this become common or, worse, your default.  If you have the slightest bit of doubt, leave them on.   Put your trust in the vendors of these tools -- they know their business.  And steering you in bad directions is bad for business.

    Learn more about how CodeIt.Right can automate your team standards, make it easy to suppress specific guidance violations, and keep track of them.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Wednesday, 19 October 2016 16:19:00 (Pacific Standard Time, UTC-08:00)
     Tuesday, 11 October 2016

    More years ago than I'd care to admit, I took a software engineering course as part of my graduate CS program.  At the time, I worked a full-time job during the day and did remote classes in the evening.  As a result, I disproportionately valued classes with applicability to my job.  And this class offered plenty of that.

    We scratched the surface of such diverse topics as agile methodologies, automated testing, cost of code ownership, and more.  But I found myself perhaps most interested in the dive we did into refactoring.  The idea of reworking the internal structure of code while preserving inputs and outputs is a surprisingly complex one.
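    A tiny, hypothetical illustration of that idea in C#: an "extract method" refactoring changes the internal structure while every input still maps to the same output.

        using System;

        public class OrderMath
        {
            // Before: validation and calculation tangled together.
            public decimal Total(decimal price, int quantity)
            {
                if (price < 0) throw new ArgumentException("negative price");
                if (quantity < 0) throw new ArgumentException("negative quantity");
                return price * quantity;
            }

            // After extract method: same inputs, same outputs, cleaner structure.
            public decimal TotalRefactored(decimal price, int quantity)
            {
                Validate(price, quantity);
                return price * quantity;
            }

            private static void Validate(decimal price, int quantity)
            {
                if (price < 0) throw new ArgumentException("negative price");
                if (quantity < 0) throw new ArgumentException("negative quantity");
            }
        }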

    Historical Complexity of Refactoring

    At the risk of dating myself, I took this course in the fall of 2006.  While automated refactorings in your IDE now seem commonplace, back then, they were hard.  In fact, the professor considered them difficult enough that he steered a group of mine away from a project implementing some.  In the world of 2006, I suspect he had the right of it.  We steered clear.

    In 2016, implementing automated refactorings still presents a challenge.  But modern tool and IDE vendors can stand on the shoulders of giants, so to speak.  Back then?  Not so much.

    Refactorings present a unique challenge to tool vendors because of the inherent risk.  They can really screw up users' code.  If a mistake happens, the best case scenario is that the resultant code fails to compile, because then, at least, it fails fast.  Worse still is code that compiles cleanly but somehow behaves improperly.  In this situation, a refactoring -- a safe change to code -- becomes a modification to the behavior of production code instead.  Ouch.

    On top of the risk, the implementation of refactoring anywhere beyond the trivial involves heady concepts such as abstract syntax trees.  In other words, it's not for lightweights.  So to recap, refactoring is risky and difficult.  And this is the landscape faced by tool authors.

    I Don't Fix -- I Just Flag

    If you live in the US, you may have seen a commercial that features a funny quip.  If I'm not mistaken, it advertises some sort of fraud prevention service.  (Pardon any slight inaccuracies, as I recount this as best I can from memory.)

    In the ad, bank robbers hold a bank hostage in a rather cliché, dramatic scene.  Off to the side, a woman stands near a security guard, asking him why he didn't do anything to stop it.  "I'm not a robbery prevention service -- I'm a robbery monitoring service.  Oh, by the way, there's a robbery."

    It brings a chuckle, but it also brings an underlying point.  In many situations, monitoring alone can prove woefully ineffective, prompting frustration.  As a former manager and current consultant, I generally advise people that they should only point out problems when they have also prepared proposed solutions.  It can mean the difference between complaining and solving.

    So you can imagine and probably share my frustration at tools that just flag problems and leave it to you to investigate further and fix them.  We feel like the woman standing next to the "robbery monitor," wondering how useful the service is to us.

    Levels of Solution

    Going back to the subject of software development, we see this dynamic in a number of places.  The compiler, the IDE, productivity add-ins, static analysis tools, and linting utilities all offer us warnings to heed.

    Often, that's all we get.  The utility says, "hey, something is wrong here, but you're going to have to figure out what."  I tend to think of that as the basic level of service, or level 0, if you will.

    The next level, level 1, involves at least offering some form of next action.  It might be as simple as offering a help file, inline reading, or a link to more information.  Anything above "this is a problem."

    Level 2 ups the ante by offering a recommendation for what to do next.  "You have a dependency cycle.  You should fix this by looking at these three components and removing one mutual dependency."  It goes beyond giving you a next thing to do and gives you the next thing to do.

    Level 3 rounds out the field by actually performing the action for you (following a prompt, of course).  "You've accidentally hidden a method on the parent class.  Click here to rename or click here to make parent virtual."  That's just an example off the top, of course, but it illustrates the interaction paradigm.  "We've noticed a problem, and you can click here to fix it."
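    To picture that last example, here is roughly what the hidden-method scenario and its "make parent virtual" fix look like in C# (class names hypothetical):

        namespace Before
        {
            public class ReportGenerator
            {
                public void Render() { /* base rendering */ }
            }

            public class PdfReportGenerator : ReportGenerator
            {
                // Hides the base method (compiler warning CS0108); callers
                // holding a ReportGenerator reference never reach this code.
                public void Render() { /* PDF rendering */ }
            }
        }

        namespace After
        {
            public class ReportGenerator
            {
                public virtual void Render() { /* base rendering */ }
            }

            public class PdfReportGenerator : ReportGenerator
            {
                public override void Render() { /* PDF rendering */ }  // proper dispatch
            }
        }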

    Fixes in Your Tooling

    When evaluating your own tools, look to climb as high up this hierarchy as you can.  Favor tools that identify problems, but offer fixes whenever possible.

    A number of such tools exist, including CodeIt.Right.  Using tools like this is a pleasure because they remove the burden of research and implementation from you.  You can still do the research if you want, but at your own leisure.  And researching at your leisure beats researching when you're trying to accomplish something else.

    The other important concern here is that you find trusted tooling to help you with this sort of thing.  After all, you don't want something messing with your source code if it might mess up your source code.  But, assuming you can trust it, such a tool provides an invaluable boost to your effectiveness by automatically resolving your problems and by helping you learn.

    In the year 2016, we have far more tooling available, with a far better track record, than we did in 2006.  Leverage it whenever possible so that you can focus on solving the pressing problems of your day to day work.

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive "we've noticed a problem, and you can click here to fix it" solution.

    Learn more about how CodeIt.Right can automate your team standards and improve code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Tuesday, 11 October 2016 08:41:00 (Pacific Standard Time, UTC-08:00)
     Thursday, 29 September 2016

    In professional contexts, I think that the word "standard" has two distinct flavors.  So when we talk about a "team standard" or a "coding standard," the waters muddy a bit.  In this post, I'm going to make the case for a team standard.  But before I do, I think it important to discuss these flavors that I mention.  And keep in mind that we're not talking dictionary definition as much as the feelings that the word evokes.

    First, consider standard as "common."  To understand what I mean, let's talk cars.  If you go to buy a car, you can have an automatic transmission or a standard transmission.  Standard represents a weird naming choice for this distinction since (1) automatic transmissions dominate (at least in the US) and (2) "manual" or "stick-shift" offer much better descriptions.  But it's called "standard" because of historical context.  Once upon a time, automatic was a new sort of upgrade, so the existing, default option became boringly known as "standard."

    In contrast, consider standard as "discerning."  Most commonly you hear this in the context of having standards.  If some leering, creepy person suggested you go out on a date to a fast food restaurant, you might rejoin with, "ugh, no, I have standards."

    Now, take these common contexts for the word to the software team room.  When someone proposes coding standards, the two flavors make themselves plain in the team members' reactions.  Some like the idea, and think, "it's important to have standards and take pride in our work."  Others hear, "check your creativity at the gate, because around here we write standard, default code."

    What I Mean by Standard

    Now that I've drawn the appropriate distinction, I feel it appropriate to make my case.  When I talk about the importance of a standard, I speak with the second flavor of the word in mind.  I speak about the team looking at its code with a discerning attitude.  Not just any code can make it in here -- we have standards.

    These can take somewhat fluid forms, and I don't mean to be prescriptive.  The sorts of standards that I like to see apply to design principles as much as possible and to cosmetic concerns only when they have to.

    For example, "all non-GUI code should be test driven" and "methods with more than 20 lines should require a conversation to justify them" represent the sort of standards I like my teams to have.  They say, "we believe in TDD" and "we view long methods as code smells," respectively.  In a way, they represent the coding ethos of the group.

    On the other side of the fence lie prescriptions like, "all class fields shall be prepended with underscores" and "all methods shall be camel case."  I consider such concerns cosmetic, since they affect appearance rather than design or runtime behavior.  Cosmetic concerns are not important... unless they are.  If the team struggles to read code and becomes confused because of inconsistency, then such concerns become important.  If the occasional quirk presents no serious readability issues, then prescriptive declarations about it stifle more than they help.

    Having standards for your team's work product does not mean mandating total homogeneity.

    Why Have a Standard at All?

    Since I'm alluding to the potentially stifling effects of a team standard, you might reasonably ask why we should have them at all.  I can assert that I'm interested in the team being discerning, but is it really just about defining defaults?  Fair enough.  I'll make my case.

    First, consider something that I've already mentioned: maintenance.  If the team can easily read code, it can more easily maintain that code.  Logically, then, if the team all writes fairly similar code, they will all have an easier time reading, and thus maintaining that code.  A standard serves to nudge teams in this direction.

    Another important benefit of the team standard revolves around the integrity of the work product.  Many teams' standards incorporate methodology for security, error handling, logging, etc.  Thus the established standard arms team members with ways to ensure that the software behaves properly.

    And finally, well-done standards can help less experienced team members learn their craft.  When such people join the team, they tend to look to established folks for guidance.  Sadly, those people often have the most on their plate and the least time.  The standard can thus serve as teacher by proxy, letting everyone know the team's expectations for good code.

    Forget the Conformity (by Automating)

    So far, all of my rationale follows a fairly happy path.  Adopt a team standard, and reap the rewards: maintainability, better software, learning for newbies.  But equally important is avoiding the dark side of team standards.  Often this dark side takes the form of nitpicking, micromanagement and other petty bits of nastiness.

    Please, please, please remember that a standard should not elevate conformity as a virtue.  It should represent shared values and protection of work product quality.  Therefore, in situations where conformity (uniformity) is justified, you should automate it.  Don't make your collaborative time about telling people where to put spaces and brackets -- program your IDE to do that for you.

    Make Justification Part of the Standard

    Another critical way to remove the authoritarian vibe from the team standard is one that I rarely see.  And that mystifies me a bit because you can do it so easily.  Simply make sure you justify each item contained in the standard.

    "Methods with more than 20 line of code should prompt a conversation," might find a home in your standard.  But why not make it, "methods with more than 20 lines of code should prompt a conversation because studies have demonstrated that defect rate increases more than linearly with lines of code per method?"  Wow, talk about powerful.

    This little addition takes the authoritarian air out of the standard, and it also helps defuse squabbles.  And, best of all, people might just learn something.

    If you start doing this, you might also notice that boilerplate items in a lot of team standards become harder to justify.  "Prepend your class fields with m underscore" becomes "prepend your class fields with m underscore because... wait, why do we do that again?"

    Prune and Always Improve

    When you find yourself trailing off at "because," you have a problem.  Something exists in your team standard that you can't justify.  If no one can justify it, then rip it out.  Seriously, get rid of it.  Items that no one can justify put you in conformity-for-the-sake-of-conformity territory.  And that's when your standard goes from "discerning" to "boring."

    Let this philosophy guide your standard in general.  Revisit it frequently, and audit it for valid justifications.  Sometimes justifications will age out of existence or seem lame in retrospect.  When this happens, do not hesitate to revisit, amend, or cull.  The best team standards are neither boring nor static.  The best team standards reflect the evolving, growing philosophy of the team.

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Learn more about how CodeIt.Right can automate your team standards and improve code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Thursday, 29 September 2016 07:41:00 (Pacific Standard Time, UTC-08:00)
     Tuesday, 20 September 2016

    If you write software, the term "feedback loop" might have made its way into your vocabulary.  It charted a slightly indirect route from its conception into the developer lexicon, though, so let's start with the term's origin.  In general systems terms, a feedback loop is a system that uses its output as one of its inputs.

    Kind of vague, huh?  I'll clarify with an example.  I'm actually writing this post from a hotel room, so I can see the air conditioner from my seat.  Charlotte, North Carolina, my temporary home, boasts some pretty steamy weather this time of year, so I'm giving the machine a workout.  Its LED display reads 70 Fahrenheit, and it's cranking to make that happen.

    When the AC unit hits exactly 70 degrees, as measured by its thermostat, it will take a break.  But as soon as the thermostat starts inching toward 71, it will turn itself back on and start working again.  Such is the Sisyphean struggle of climate control.

    Important for us here, though, is the mechanics of this system.  The AC unit alters the temperature in the room (its output).  But it also uses the temperature in the room as input (if < 71, do nothing, else cool the room).  Climate control in buildings operates via feedback loop.
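    In code, the loop might look like this minimal, hypothetical sketch -- note how the same quantity serves as both output (cooling changes the temperature) and input (the thermostat reads it to decide what to do next):

        public class AirConditioner
        {
            private readonly IThermostat _thermostat;
            private readonly ICompressor _compressor;

            public AirConditioner(IThermostat thermostat, ICompressor compressor)
            {
                _thermostat = thermostat;
                _compressor = compressor;
            }

            public void RunControlLoop()
            {
                while (true)
                {
                    // The temperature the unit produced feeds back in as input.
                    if (_thermostat.ReadTemperature() < 71)
                        _compressor.Stop();   // at target -- take a break
                    else
                        _compressor.Run();    // inching toward 71 -- back to work
                }
            }
        }

        public interface IThermostat { double ReadTemperature(); }
        public interface ICompressor { void Run(); void Stop(); }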

    Appropriating the Term for Software Development

    It takes a bit of a cognitive leap to think of your own tradecraft in terms of feedback loops.  Most likely this happens because you become part of the system.  Most people find it harder to reason about things from within.

    In software development, you complete the loop.  You write code, the compiler builds it, the OS runs it, you observe the result, and decide what to do to the code next.  The output of that system becomes the input to drive the next round.

    If you have heard the term before, you've probably also heard the term "tightening the feedback loop."  Whether or not you've heard it, what people mean by this is reducing the cycle time of the aforementioned system.  People throwing that term around look to streamline the write->build->run->write again process.

    A History of Developer Feedback Loops

    At the risk of sounding like a grizzled old codger, let me digress for a moment to talk about feedback loop history.  Long before my time came the punched card era.  Without belaboring the point, I'll say that this feedback loop would astound you, the modern software developer.

    Programmers would sit at keypunch "kiosks" used to physically perforate forms (one mistake, and you'd start over).  They would then take these forms and have operators turn them into cards, stacks of which they would hold onto.  Next, they'd wait in line to feed these cards into the machines, which acted as a runtime interpreter.  Often, they would have to wait up to 24 hours to see the output of what they had done.

    Can you imagine?  Write a bit of code, then wait for 24 hours to see if it worked.  With a feedback loop this loose, you can bet that checking and re-checking steps received hyper-optimization.


    When I went to college and started my programming career, those days had long passed.  But that doesn't mean my early days didn't involve a good bit of downtime.  I can recall modifying C files in projects I worked on, and then waiting up to an hour for the code to build and run, depending on what I had changed.  xkcd immortalized this issue nearly 10 years ago, in one of its most popular comics.

    Today, you don't see this as much, though you could certainly find some legacy codebases or juggernauts that take a while to build.  Tooling, technique, modern hardware, and architectural approaches all combine to minimize this problem via tighter feedback loops.

    The Worst Feedback Loop

    I have a hypothesis.  I believe that a specific amount of time exists for each person that represents the absolute, least-optimal amount of time for work feedback.  For me, it's about 40 seconds.

    If I make some changes to something and see immediate results, then great.  Beyond immediacy, my impatience kicks in.  I stare at the thing, I tap impatiently, I might even hit it a little, knowing no good will come.  But after about 40 seconds, I simply switch my attention elsewhere.

    Now, if I know the wait time will be longer than 40 seconds, I may develop some plan.  I might pipeline my work or carve out some other tasks with which I can be productive while waiting.  If, for instance, I can get feedback on something every 10 minutes, I'll kick it off, do some household chores, and periodically check on it.

    But, at 40 seconds, it resides in some kind of middle limbo, preventing any semblance of productivity.  I kick it off and check twitter.  40 seconds turns into 5 minutes when someone posts a link to some cool astronomy site.  I check back, forget what I did, and then remember.  I try again and wait 40 seconds.  This time, I look at a Buzzfeed article and waste 10 minutes as that turns into 4 Buzzfeed articles.  I then hate myself.

    The Importance of Tightening

    Why do I offer this story about my most sub-optimal feedback period?  To demonstrate the importance of diligence in tightening the loop.  Wasting a few seconds while waiting hinders you.  But waiting enough seconds to distract you with other things slaughters your productivity.

    With software development, you can get into a state of what I've heard described as "flow."  In a state of flow, the feedback loop creates harmony in what you're doing.  You make adjustments, get quick feedback, feel encouraged and productive, which promotes more concentration, more feedback, and more productivity.  You discover a virtuous circle.

    But just the slightest dropoff in the loop pops that bubble.  And, another dropoff from there (e.g. to 40 seconds for me) can render you borderline-useless.  So much of your professional performance rides on keeping the loop tight.

    Tighten Your Loop Further

    Modern tooling offers so many options for you.  Many IDEs will perform speculative compilation or interpretation as you code, making builds much faster.  GUI components can be rendered as you work, allowing you to see changes in real time as you alter the markup.  Unit tests slice your code into discrete, separately evaluated components, and continuous testing tools provide pass/fail feedback as you type.  Static code analysis tools offer you code review as you work, rather than at a code review days later.  I could go on.

    The general idea here is that you should constantly seek ways to tune your day to day work.  Keep your eyes out for tools that speed up your feedback loop.  Read blogs and go to user groups.  Watch your coworkers for tips and tricks.  Claw, scratch, and grapple your way to shaving time off of your feedback loop.

    We've come a long way from punch cards and sword fights while code compiles.  But, in 10 or 30 years, we'll look back in amazement at how archaic our current techniques seem.  Put yourself at the forefront of that curve, and you'll distinguish yourself as a developer.

    Learn more about how CodeIt.Right can tighten your feedback loop and improve your code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Tuesday, 20 September 2016 07:37:00 (Pacific Standard Time, UTC-08:00)
     Wednesday, 24 August 2016

    In the world of programming, 15 years or so of professional experience makes me a grizzled veteran.  That certainly does not hold for the work force in general, but youth dominates our industry via the absolute explosion of demand for new programmers.  Given the tendency of developers to move around between projects and companies, 15 years have shown me a great deal of variety.

    Perhaps nothing has exemplified this variety more than the code review.  I've participated in code reviews that were grueling, depressing marathons.  On the flip side, I've participated in ones where I learned things that would prove valuable to my career.  And I've seen just about everything in between.

    Our industry has come to accept that peer review works.  In the book Code Complete, author Steve McConnell cites it, in some circumstances, as the single most effective technique for avoiding defects.  And, of course, it helps with knowledge transfer and learning.  But here's the rub -- implemented poorly, it can also do a lot of harm.

    Today, I'd like to make the case for the automated code review.  Let me be clear.  I do not view this as a replacement for any manual code review, but as a supplement and another tool in the tool chest.  But I will say that automated code review carries less risk than its manual counterpart of having negative consequences.

    The Politics

    I mentioned extremely productive code reviews.  For me, this occurred when working on a team with those I considered friends.  I solicited opinions, got earnest feedback, and learned.  It felt like a group of people working to get better, and that seemed to have no downside.

    But I've seen the opposite, too.  I've worked in environments where the air seemed politically charged and competitive.  Code reviews became religious wars, turf battles, and arguments over minutiae.  Morale dipped, and some people went out of their way to find ways not to participate.  Clearly no one would view this as a productive situation.

    With automated code review, no politics exist.  Your review tool is, of course, incapable of playing politics.  It simply carries out its mission on your behalf.  Automating parts of the code review process -- especially something relatively arbitrary such as coding standards compliance -- can give a team many fewer opportunities to posture and bicker.

    Learning May Be Easier

    As an interpersonal activity, code review carries some social risk.  If we make a silly mistake, we worry that our peers will think less of us.  This dynamic is mitigated in environments with a high trust factor, but it exists nonetheless.  In more toxic environments, it dominates.

    Having an automated code review tool creates an opportunity for consequence-free learning.  Just as the tool plays no politics, it offers no judgment.  It just provides feedback, quietly and anonymously.

    Even in teams with a supportive dynamic, shy or nervous folks may prefer this paradigm.  I'd imagine that anyone would, to an extent.  An automated code review tool points out mistakes via a fast feedback loop and offers consequence-free opportunity to correct them and learn.

    Catching Everything

    So far I've discussed ways to cut down on politics and soothe morale, but practical concerns also bear mentioning.  An automated code review tool necessarily lacks the judgment that a human has.  But it has more thoroughness.

    If your team only performs peer review as a check, it will certainly catch mistakes and design problems.  But will it catch all of them?  Or is it possible that you might miss one possible null dereference or an empty catch block?  If you automate the process, then the answer becomes "no, it is not possible."
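    For illustration, consider a hypothetical pair of defects -- exactly the kind a tired human reviewer skims past and an analyzer flags every single time:

        using System;

        public class Customer { public string Name; }

        public class CustomerService
        {
            // Possible null dereference: nothing guarantees customer (or Name)
            // is non-null here, and a reviewer can easily skim past it.
            public string FormatName(Customer customer)
            {
                return customer.Name.ToUpper();
            }

            // Empty catch block: the failure disappears without a trace.
            public void TrySave(Customer customer)
            {
                try
                {
                    SaveToDatabase(customer);
                }
                catch (Exception)
                {
                }
            }

            private void SaveToDatabase(Customer customer) { /* ... */ }
        }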

    For the items in a code review that you can automate, you should, for the sake of thoroughness.

    Saving Resources and Effort

    Human code review requires time and resources.  The team must book a room, coordinate schedules, use a projector (presumably), and assemble in the same location.  Of course, allowing for remote, asynchronous code review mitigates this somewhat, but it can't eliminate the salary dollars spent on the activity.  However you slice it, code review represents an investment.

    In this sense, automating parts of the code review process has a straightforward business component.  Whenever possible and economical, save yourself manual labor through automation.

    When there are code quality and practice checks that can be done automatically, do them automatically.  And it might surprise you to learn just how many such things can be automated.

    Improbable as it may seem, I have sat in code reviews where people argued about whether or not a method would exhibit a certain runtime behavior, given certain inputs.  "Why not write a unit test with those inputs?" I asked.  Nobody benefits from humans reasoning about something that the build, the test suite, the compiler, or a static analysis tool could tell them automatically.
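    A sketch of what I mean, assuming xUnit and a hypothetical VersionParser at the center of such an argument:

        using Xunit;

        public class VersionParserTests
        {
            // Rather than debating the behavior in a conference room, feed the
            // disputed input to the code and let the test settle the question.
            [Fact]
            public void Parse_WithLeadingZero_BehavesLikePlainInput()
            {
                var parser = new VersionParser();

                Assert.Equal(parser.Parse("1.2"), parser.Parse("01.2"));
            }
        }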

    Complementary Approach

    As I've mentioned throughout this post, automated code review and manual code review do not directly compete.  Humans solve some problems better than machines, and vice versa.  To achieve the best of all worlds, you need to create a complementary code review approach.

    First, understand what can be automated, or, at least, develop a good working framework for guessing.  Coding standard compliance, for instance, is a no-brainer from an automation perspective.  You do not need to pay humans to figure out whether variable names are properly cased, so let a review tool do it for you.  You can learn more about the possibilities by simply downloading and trying out review and analysis tools.

    Secondly, socialize the tooling with the team so that they understand the distinction as well.  Encourage them not to waste time making a code review a matter of checking things off of a list.  Instead, manual code review should focus on architectural and practice considerations.  Could this class have fewer responsibilities?  Is the builder pattern a good fit here?  Are we concerned about too many dependencies?

    Finally, I'll offer the advice that you can tune the balance between manual and automated review based on the team's morale.  Do they suffer from code review fatigue?  Have you noticed them sniping a lot?  If so, perhaps lean more heavily on automated review.  Otherwise, use the automated review tools simply to save time on things that can be automated.

    If you're currently not using any automated analysis tools, I cannot overstate how important it is that you check them out.  Our industry built itself entirely on the premise of automating time-consuming manual activities.  We need to eat our own dog food.

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Learn more about how CodeIt.Right can help with automated code review and improve your code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Wednesday, 24 August 2016 14:06:00 (Pacific Standard Time, UTC-08:00)
     

     
         
     