 Thursday, 27 April 2017

Many of us have a natural tendency to let little things pile up.  This gives rise to the notion of the so-called spring cleaning.  The weather turns warm and going outside becomes reasonable, so we take the opportunity to do some kind of deep cleaning.

Of course, this may not apply to you.  Perhaps you keep your house impeccable at all times, or maybe you simply have a cleaning service.  But I'll bet that, in some part of your life or another, you put little things off until they become bigger things.  Your cruft may not involve dusty shelves and pockets of house clutter, but it probably exists somewhere.

Maybe it exists in your professional life in some capacity.  Perhaps you have a string of half written blog posts, or your inbox has more than a thousand messages.  And, if you examine things honestly, you almost certainly have some item that has been skulking around your to-do list for months.  Somewhere, we all have items that could use some tidying, cognitive or physical.

With that in mind, I'd like to talk about your code review process.  Have you been executing it like clockwork for months or years?  Perhaps it has become too much like clockwork.  Turn a critical eye to it, and you might realize elements of it have become stale or superfluous.  So let's take a look at how you can apply a spring cleaning to your code review process.

Beware The Cargo Cult

During World War II, the Allies set up a temporary air base on an island in the Pacific Ocean.  The people living on the island observed the ground controllers waving at inbound planes to help them land.  Supplies then followed.  Not understanding the purpose of this ritual or the mechanics of airplanes, the locals learned that making these motions brought planes with supplies.  So after the Allies left, they mimicked the behavior, hoping for additional resources.  This execution of ritual without understanding earned the designation "cargo cult."

In the world of software development, cargo cult programming involves adding code without understanding what it does.  You added it once, good things happened, so now you always add it.  You can think of this as a special case of programming by coincidence.  And it's something you should avoid.

But cargo cult mentality can crop up in a code review as well.  Do you find your team calling out 'issues' during the review, but, if pressed, nobody could articulate why those are issues?  If so, you have a cargo cult practice, and you should cull it.

Going Over the Same Stuff Repetitively

Let's say that your team performs code review on a regular basis.  Does this involve an ongoing, constant uplift?  In other words, do you find learning spreads among the team, and you collectively sharpen your game and constantly improve?  Or do you find that the team calls out the same old issues again and again?

If every code review involves noticing a method parameter dereference and saying, "you'll get an exception if someone passes in null," then you have stagnation.  Think of this as a team smell.  Why do people keep making the same mistake over and over again?  Why haven't you somehow operationalized a remedy?  And, couldn't someone have automated this?
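To make that concrete, here's the sort of guard that such comments keep asking for -- a minimal sketch, with Order and its Submit method as hypothetical stand-ins:

public void ProcessOrder(Order order)
{
    // Fail fast with an informative exception instead of letting a
    // NullReferenceException surface somewhere deeper in the call.
    if (order == null)
        throw new ArgumentNullException(nameof(order));

    order.Submit();
}

The fix takes seconds.  The interesting question is why a human, rather than a tool, keeps catching it.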

Keep an eye out for this sort of thing.  If you notice it, pause and do some root cause analysis.  Don't just fix the issue itself -- fix it so the issue stops happening.

Inconsistency in Reviews

Another common source of woe arises from inconsistency in the code review process.  Not only does this result in potential issues within the code, but it also threatens to demoralize members of the team.  Imagine attending a review and having someone admonish you to add logging calls to all of your methods.  But then, during the next review, someone gives you a hard time about logging too much.  Enough of that nonsense and team members start updating their resumes rather than their methods.

And inconsistency can mean more than just different review styles from different people (or the same person on different days, varying by mood).  You might find that your team's behavior and suggestions during review have become out of sync with a formal document like the team's coding standard.  Whatever the source, inconsistency creates drag for your team.

Take the opportunity of a metaphorical spring cleaning to address this potential pitfall.  Round up the team members and make sure they all have the same philosophies at code review time.  And then, make sure that unified philosophy lines up with anything documented.

Cut Out the Nitpicking

I've yet to see an organization where interpersonal code review didn't become at least a little political.  That makes sense, of course.  In essence, you're talking about an activity where people get together and offer (hopefully) constructive professional criticism.

Because of the politics, personal code review can degenerate and lead to infighting in numerous ways.  Chief among these, I've found, is excessive nitpicking.  If team members perceive the activity as a never ending string of officious criticism, they start to hate coming to work.

On top of that, people can only internalize so many lessons in a sitting.  After a while, they start to tune out or get tired.  So make the takeaways from the code review count.  Even if they haven't gotten every little thing just so, pick your battles and focus on big things.  And I file this under spring cleaning since it generally requires a concerted mental adjustment and since it will clear some of the cruft out of your review.

Automate, Automate, Automate

I will conclude by offering what I consider the most important item for any code review spring cleaning.  If the other suggestions involved metaphorical shelf dusting and shower scrubbing, think of this one as completely cleaning out an entire room that you had loaded with junk.

So much of the time teams spend in code review seems to trend toward picking at nits.  But even when it involves more substantive considerations, many of these considerations could be automatically detected.  The team wastes precious time peering at the code and playing static analyzer.  Stop this!

Spruce up your review process by automating as much of it as humanly possible.  You should constantly ask yourself if the issue you're discussing could be automatically detected (and fixed).  If you think it could, then do it.  And, as part of your spring cleaning, knock out as many of these as possible.

Save human-centric code review for focus on design considerations, architectural discussions, and big picture issues.  Don't bog yourself down in cruft.  You'll all feel a lot cleaner and happier for it, just as you would after any spring cleaning.

Tools at your disposal

SubMain offers CodeIt.Right, which easily integrates into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Related resources

Learn more about how CodeIt.Right can help you automate code reviews and improve the quality of your code.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Thursday, 27 April 2017 07:10:00 (Pacific Standard Time, UTC-08:00)
 Wednesday, 19 April 2017

Today, I'll do another installment of the CodeIt.Right Rules, Explained series.  This is post number five in the series.  And, as always, I'll start off by citing my two personal rules about static analysis guidance, along with the explanation for them.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

It may seem as though I'm playing rhetorical games here.  After all, I could simply say, "learn the reasoning behind all suggested fixes."  But I want to underscore the decision you face when confronted with static analysis feedback.  In all cases, you must actively choose to ignore the feedback or address it.  And for both options, you need to understand the logic behind the suggestion.

In that spirit, I'm going to offer up explanations for three more CodeIt.Right rules today.

Mark ISerializable Types with "Serializable" Attribute

If you run across this rule, you might do so while writing an exception class.  For example, the following small bit of code in a project of mine triggers it.

public class GithubQueryingException : Exception
{
    public GithubQueryingException(string message, Exception ex) : base(message, ex)
    {
            
    }
}

It seems pretty innocuous, right?  Well, let's take a look at what went wrong.

The rule actually describes its own solution pretty well.  Slap a serializable attribute on this exception class and make the tool happy.  But who cares?  Why does it matter if you don't mark the exception as serializable?

To understand the issue, you need awareness of a concept called "application domains" within the .NET framework.  Going into much detail about this would take us beyond the scope of the post.  But suffice it to say, "application domains provide an isolation boundary for security, reliability, and versioning, and for unloading assemblies."  Think two separate processes running and collaborating.

If some external process will call your code, it won't access and deal with your objects the same way that your own code will.  Instead, it needs to communicate by serializing the object and passing it along as if over some remote service call.  In the case of the exception above, it lacks the attribute marking it explicitly as serializable, in spite of implementing that interface.  So bad things will happen at runtime.  And this warning exists to give you the heads up.

If you'll only ever handle this exception within the same app domain, it won't cause you any heartburn.  But, then again, neither will adding an attribute to your class.
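For reference, a minimal sketch of the remedied class looks like this.  The attribute satisfies the rule, and the protected constructor follows the conventional pattern for rehydrating exception state (SerializationInfo and StreamingContext live in System.Runtime.Serialization):

[Serializable]
public class GithubQueryingException : Exception
{
    public GithubQueryingException(string message, Exception ex) : base(message, ex)
    {
    }

    // Allows the exception to reconstruct its state when it
    // crosses an app domain boundary.
    protected GithubQueryingException(SerializationInfo info, StreamingContext context)
        : base(info, context)
    {
    }
}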

Do Not Handle Non-CLS-Compliant Exceptions

Have you ever written code that looks something like this?

try
{
    DoSomething();
    return true;
}
catch
{
    return false;
}

In essence, you want to take a stab at doing something and return true if it goes well and false if anything goes wrong.  So you write code that looks something like the above.

If you have, you'll run afoul of the CodeIt.Right rule, "do not handle non-CLS-compliant exceptions."  You might find this confusing at first blush, particularly if you code exclusively in C# or Visual Basic.  This would confuse you because you cannot throw exceptions not compliant with the Common Language Specification (CLS).  All exceptions you throw inherit from the Exception class and thus conform.

However, in the case of native code written in, say, C++, you can actually throw non-CLS-compliant exceptions.  And this code will catch them because you've said "catch anything that comes my way."  This earns you a warning.

The CodeIt.Right warning here resembles one telling you not to catch the general exception type.  You want to be intentional about what exceptions you trap, rather than casting an overly wide net.  You can fix this easily enough by specifying the actual exception you anticipate might occur.
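For illustration, suppose DoSomething can plausibly fail with an InvalidOperationException -- a stand-in here for whatever exception you actually anticipate.  The fix narrows the net:

try
{
    DoSomething();
    return true;
}
catch (InvalidOperationException)
{
    // Only the failure mode we anticipated maps to "false."
    return false;
}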

Async Methods Should Return Task or Task<T>

As of .NET Framework 4.5, you can use the async keyword to allow invocation of an asynchronous operation.  For example, imagine that you had a desktop GUI app and you wanted to populate a form with data.  But imagine that acquiring said data involved doing an expensive and time consuming call over a network.

With synchronous programming, the call out to the network would block, meaning that everything else would grind to a halt to wait on the network call... including the GUI's responsiveness.  That makes for a terrible user experience.  Of course, we solved this problem long before the existence of the async keyword.  But we used laborious threading solutions to do that, whereas the async keyword makes this more intuitive.

Roughly speaking, designating a method as "async" indicates that you can dispatch it to conduct its business while you move on to do other things.  To accomplish this, the method synchronously returns something called a Task, which acts as a placeholder and a promise of sorts.  The calling method keeps a reference to the Task and can use it to get at the result of the method, once the asynchronous operation completes.

But that only works if you return a Task or Task<T>.  If, instead, you create a void method and label it asynchronous, you have no means to get at it later and no means to explicitly wait on it.  There's a good chance this isn't what you want to do, and CodeIt.Right lets you know that.  In the case of an event handler, you might actually want to do this, but better safe than sorry.  You can fix the violation by returning a non-parameterized Task rather than declaring the method void.
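As a minimal sketch of the difference, with PersistAsync standing in for some hypothetical awaitable operation:

// Callers can await the returned Task, observe exceptions, and compose it.
public async Task SaveAsync()
{
    await PersistAsync();
}

// Fire-and-forget: callers have nothing to await, and an unhandled
// exception here can tear down the process.  Reserve this shape for event handlers.
public async void Save()
{
    await PersistAsync();
}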

Until Next Time

This post covered some interesting language and framework features.  We looked at the effect of crossing app domain boundaries and what that does to the objects whose structure you can easily take for granted.  Then we went off the beaten path a little by looking at something unexpected that can happen at the intersection of managed and native code.  And, finally, we delved into asynchronous programming a bit.

As we wander through some of these relatively far-reaching concerns, it's nice to see that CodeIt.Right helps us keep track.  A good analysis tool not only helps you catch mistakes, but it also helps you expand your understanding of the language and framework.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Wednesday, 19 April 2017 12:35:00 (Pacific Standard Time, UTC-08:00)
 Wednesday, 12 April 2017

I like variety.  In pursuit of this preference, I spend some time management consulting with enterprise clients and some time volunteering for "office hours" at a startup incubator.  Generally, this amounts to serving as "rent-a-CTO" for startup founders in half hour blocks.  This provides me with the spice of life, I guess.

As disparate as these advice forums might seem, they often share a common theme.  Both in the impressive enterprise buildings and the startup incubator conference rooms, people ask me about offshoring application development.  To go overseas or not to go overseas?  That, quite frequently, is the question (posed to me).

I find this pretty difficult to answer absent additional information.  In any context, people asking this bake two core assumptions into their question.  What they really want to say would sound more like this.  "Will I suffer for the choice to sacrifice quality to save money?"

They assume first that cheaper offshore work means lower quality.  And then they assume that you can trade quality for cost as if adjusting the volume dial in your car.  If only life worked this simply.

What You Know When You Offshore

Before going further, let's back up a bit.  I want to talk about what you actually know when you make the decision to pay overseas firms a lower rate to build software.  But first, let's dispel these assumptions that nobody can really justify.

Understand something unequivocally.  You cannot simply exchange units of "quality" for currency.  If you ask me to build you a web app, and I tell you that I'll do it for $30,000, you can't simply say, "I'll give you $15,000 to build one half as good."  I mean, you could say that.  But you'd be saying something absurd, and you know it.  You can reasonably adjust cost by cutting scope, but not by assuming that half the money buys the same thing at half the quality.

Also, you need to understand that "cheap overseas labor" doesn't necessarily mean lower quality.  Frequently it does, but not always.  And, not even frequently enough that you can just bank on it.

So what do you know when you contract with an inexpensive, overseas provider?  Not a lot, actually.  But you do know that your partner will work with you mainly remotely, across a great deal of distance, and with significant communication obstacles.  You will not collaborate as closely with them as you would with an employee or a local vendor.

The (Non) Locality Conundrum

So you have a limited budget, and you go shopping for offshore app dev.  You go in knowing that you may deal with less skilled developers.  But honestly, most people dramatically overestimate the importance of that concern.

What tends to torpedo these projects lies more in the communication gulf and less in the skill.  You give them wireframes and vague instructions, and they come back with what they think you want.  They explain their deliveries with passable English in emails sent at 2:30 AM your time.  This collaboration proves taxing for both parties, so you both avoid it, for the most part.  You thus mutually collude to raise the stakes with each passing week.

Disaster then strikes at the end.  In a big bang, they deliver what they think you want, and it doesn't fit your expectations.  Or it fits your expectations, but you can't build on top of it.  You may later, using some revisionist history, consider this a matter of "software quality" but that misses the point.

Your problem really lies in the non-locality, both geographically and more philosophically.

When Software Projects Work

Software projects work well with a tight feedback loop.  The entire agile movement rests firmly atop this premise.  Stop shipping software once per year, and start shipping it once per week.  See what the customer/stakeholder thinks and course correct before it's too late.  This helps facilitate success far more than the vague notion of quality.

The locality issue detracts from the willingness to collaborate.  It encourages you to work in silos and save feedback for a later date.  It invites disaster.

To avoid this, you need to figure out a way to remove unknowns from the equation.  You need to know what your partner is doing from week to week.  And you need to know the nature of what they're building.  Have they assembled throwaway, prototype code?  Or do you have the foundation of the future?

Getting a Glimpse

At this point, the courses for enterprises and startups diverge.  The enterprise has legions of software developers and can easily afford to fly to Eastern Europe or Southeast Asia or wherever the work gets done.  They want to leverage economies of scale to save money as a matter of policy.

The startup or small business, on the other hand, lacks these resources.  They can't just ask their legion of developers to review the offshore work more frequently.  And they certainly can't book a few business class tickets over there to check it out for themselves.  They need to get more creative.

In fact, some of the startup founders I counsel have a pretty bleak outlook here.  They have no one in their organization in a position to review code at all.  So they rely on an offshore partner for budget reasons, and they rely on that partner as expert adviser and service provider.  They put all of their eggs in that vendor's basket.  And they come to me asking, "have I made a good choice?"

They need a glimpse into what these offshore folks are doing, and one that they can understand.

Leveraging Automated Code Review

While you can't address the nebulous, subjective concept of "quality" wholesale, you can ascertain properties of code.  And you can even do it without a great deal of technical knowledge, yourself.  You could simply take their source code and run an automated code review on it.

You're probably thinking that this seems a bit reductionist.  Make no mistake -- it's quite reductionist.  But it also beats no feedback at all.

You could approach this by running the review on each incremental delivery.  Ask them to explain instances where it runs afoul of the tool.  Then keep doing it to see if they improve.  Or, you could ask them to incorporate the tool into their own process and make delivering issue-free code a part of the contract.  Neither of these things guarantees a successful result.  But at least it offers you something -- anything -- to help you evaluate the work, short of in-depth knowledge and study yourself.

Recall what I said earlier about how enterprises regard quality.  It's not as much about intrinsic properties, nor is it inversely proportional to cost.  Instead, quality shows itself in the presence of a tight feedback loop and the ability to sustain adding more and more capabilities.  With limited time and knowledge, automated code review gives you a way to tighten that feedback loop and align expectations.  It ensures at least some oversight, and it aligns the work they do with what you might expect from firms that know their business.

Tools at your disposal

SubMain offers CodeIt.Right, which easily integrates into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Related resources

Learn more about how CodeIt.Right can help you automate code reviews and ensure the quality of delivered code.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Wednesday, 12 April 2017 12:06:00 (Pacific Standard Time, UTC-08:00)
 Wednesday, 05 April 2017

I can almost sense the indignation from some of you.  You read the title and then began to seethe a little.  Then you clicked the link to see what kind of sophistry awaited you.  "There is no substitute for peer review."

Relax.  I agree with you.  In fact, I think that any robust review process should include a healthy amount of human and automated review.  And, of course, you also need your test pyramid, integration and deployment strategies, and the whole nine yards.  Having a truly mature software shop takes a great deal of work and involves standing on the shoulders of giants.  So, please, give me a little latitude with the premise of the post.

Today I want to talk about how one could replace manual code review with automated code review only, should the need arise.

Why Would The Need for This Arise?

You might struggle to imagine why this would ever prove necessary.  Those of you with many years logged in the enterprise in particular probably find this puzzling.  But you might find manual code inspection axed from your process for any number of reasons other than, "we've decided we don't value the activity."

First and most egregiously, a team's manager might come along with an eye toward cost savings.  "I need you to spend less time reading code and more time writing it!"  In that case, you'll need to move away from the practice, and going toward automation beats abandoning it altogether.  Of course, if that happens, I also recommend dusting off your resume.  In the first place, you have a penny-wise, pound-foolish manager.  And, secondly, management shouldn't micromanage you at this level.  Figuring out how to deliver good software should be your responsibility.

But let's consider less unfortunate situations.  Perhaps you currently work in a team of 2, and number 2 just handed in her two weeks’ notice.  Even if your organization back-fills your erstwhile teammate, you have some time before the newbie can meaningfully review your code.  Or, perhaps you work for a larger team, but everyone gradually becomes so busy and fragmented in responsibility as not to have the time for much manual peer review.

In my travels, this last case actually happens pretty frequently.  And then you have to choose: abandon the practice altogether, or move toward an automated version.  Pretty easy choice, if you ask me.

First, Take Inventory

Assuming no one has yet forced your hand, pause to take inventory.  What currently happens as part of your review process?  What sorts of feedback do you get?

If your reviews happen in some kind of asynchronous format, then great.  This should prove easy enough to capture since you'll need only to go through your emails or issues list or whatever you use.  Do you have in-person reviews, but chronicle the findings?  Just as good for our purposes here.

But if these reviews happen in more ad hoc fashion, then you have some work to do.  Start documenting the feedback and resultant action items.  After all, in order to create a suitable replacement strategy for an activity, you must first thoroughly understand that activity.

Automate the Automate-able

With your list in place, you can now start figuring out how to replace your expiring manual process.  First up, identify the things you can easily automate that come up during reviews.

This will include cosmetic concerns.  Does your code comply with the team standard?  Does it comply with typical styling for your tech stack?  Have you consistently cased and named things?  If that stuff comes up during your reviews, you should probably automate it anyway and not waste time discussing it.  But, going forward, you will need to automate it.

But you should also look for anything that you can leverage automation to catch.  Do you talk about methods getting too long or about not checking parameters for null before dereferencing?  You can also automate things like that.  How about compliance with non-cosmetic best practices?  Automate that as well with an automated code review tool.

And spend some time researching what you can automate.  Even if no analyzer or review tool catches something out of the box, you can often customize them to catch it (or write your own thing, if needed).

Checks and Balances for Conceptual Items

Now, we move onto the more difficult things.  "This method seems pretty unreadable."  "Couldn't you use the builder pattern here?"  I'm talking here about the sorts of things for which manual code review really shines and serves its purpose.  You'll have a harder time replacing this.  But that doesn't mean you can't do something.

First, I recommend that you audit the review history you've been compiling.  See what comes up the most frequently, and make a list of those things.  And group them conceptually.  If you see a lot of "couldn't you use Builder" and "couldn't you use Factory Method," then generalize to "couldn't you use a design pattern?"

Once you have this list, if nothing else, you can use it as a checklist for yourself each time you commit code.  But you might also see whether you can conceive of some sort of automation.  Or maybe you just resolve to revisit the codebase periodically, with a critical eye toward these sorts of questions.

You need to see if you can replace the human insights of a peer.  Admittedly, this presents a serious challenge.  But get creative and see what you can come up with.

Adjust Your Approach

The final plank I'll mention involves changing the way you approach development and review in general.  For whatever reason, human review of your work has become a scarce resource.  You need to adjust accordingly.

Picking up a good bit of automated review makes up part of this adjustment, as does creating a checklist to apply to yourself.  But you need to go further as well.  Take an approach wherein you look to become more self-sufficient for the littler things and save your scarce access to human reviewers for the truly weighty architectural decisions.  When these come up, enlist the help of someone else in your organization or even the internet.

On top of that, look opportunistically for ways to catch your own mistakes and improve.  Everyone has to learn from their mistakes, but with less margin for error, you need to learn from them and automate their prevention going forward.  Again, automated review helps here, but you'll need to get creative.

Having peer review yanked out from under you undeniably presents a challenge.  Luckily, however, you have more tools than ever at your disposal to pick up the slack.  Make use of them.  When you find yourself in a situation with the peer review safety net restored, you'll be an even better programmer for it.

Tools at your disposal

SubMain offers CodeIt.Right, which easily integrates into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Related resources

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Wednesday, 05 April 2017 07:17:00 (Pacific Standard Time, UTC-08:00)
 Tuesday, 28 March 2017

"You never concatenate strings.  Instead, always use a StringBuilder."

I feel pretty confident that any C# developer that has ever worked in a group has heard this admonition at least once.  This represents one of those bits of developer wisdom that the world expects you to just memorize.  Over the course of your career, these add up.  And once they do, grizzled veterans engage in a sort of comparative jousting for rank.  The internet encourages them and eggs them on.

"How can you call yourself a senior C# developer and not know how to serialize objects to XML?!"

With two evenly matched veterans swinging language swords at one another, this volley may continue for a while.  Eventually, though, one falters and pecking order is established.

Static Analyzers to the Rescue

I must confess.  I tend to do horribly at this sort of thing.  Despite having relatively good memory retention ability in theory, I have a critical Achilles Heel in this regard.  Specifically, I can only retain information that interests me.  And building up a massive arsenal of programming language "how-could-yous" for dueling purposes just doesn't interest me.  It doesn't solve any problem that I have.

And, really, why should it?  Early in my career, I figured out the joy of static analyzers in pretty short order.  Just as the ubiquity of search engines means I don't need to memorize algorithms, the presence of static analyzers saves me from cognitively carrying around giant checklists of programming sins to avoid.  I rejoiced in this discovery.  Suddenly, I could solve interesting problems and trust the equivalent of programmer spell check to take care of the boring stuff.

Oh, don't get me wrong.  After the analyzers slapped me, I internalized the lessons.  But I never bothered to go out of my way to do so.  I learned only in response to an actual, immediate problem.  "I don't like seeing warnings, so let me figure out the issue and subsequently avoid it."

My Coding Provincialism

This general modus operandi caused me to respond predictably when I first encountered the idea of globalization in language.  "Wait, so this helps when?  If someone theoretically deploys code to some other country?  And, then, they might see dates printed in a way that seems strange to them?  Huh."

For many years, this solved no actual problem that I had.  Early in my career, I wrote software that people deployed in the US.  Much of it had no connectivity functionality.  Heck, a lot of it didn't even have a user interface.  Worst case, I might later have to realize that some log file's time stamps happened in Mountain Time or something.

Globalization solved no problem that I had.  So when I heard rumblings about the "best practice," I generally paid no heed.  And, truth be told, nobody suffered.  With the software I wrote for many years, this would have constituted a premature optimization.

But it nevertheless instilled in me a provincialism regarding code.

A Dose of Reality

I've spent my career as a polyglot.  And so at one point, I switched jobs, and it took me from writing Java-based web apps to a desktop app using C# and WPF.  This WPF app happened to have worldwide distribution.  And, when I say worldwide, I mean just about every country in the world.

Suddenly, globalization went from "premature optimization" to "development table stakes."  And the learning curve became steep.  We didn't just need to account for the fact that people might want to see dates where the day, rather than the month, came first.  The GUI needed translation into dozens of languages as a menu setting.  This included languages with text read from right to left.

How did I deal with this?  At the time, I don't recall having the benefit of a static analyzer that helped in this regard.  FxCop may have provided some relief, but I don't recall one way or the other.  Instead, I found myself needing to study and laboriously create mental checklists.  This "best practice" knowledge hoarding suddenly solved an immediate problem.  So, I did it.

CodeIt.Right's Globalization Features

Years have passed since then.  I've had several jobs since then, and, as a solo consultant, I've had dozens of clients and gigs.  I've lost my once encyclopedic knowledge of globalization concerns.  That happened because -- you guessed it -- it no longer solves an immediate problem that I have.

Oh, I'd probably do better with it now than I did in the past.  But I'd still have to re-familiarize myself with the particulars and study up once again in order to get it right, should  the need arise.  Except, these days, I could enlist some help.  CodeIt.Right, installed on my machine, will give me the heads up I didn't have those years ago.  It has a number of globalization concerns built right in.  Specifically, it will remind you about the following concerns.  I'll just list them here, saving detailed explanations for a future "CodeIt.Right Rules, Explained" post.

  • Specify culture info
  • Specify string comparison (for culture)
  • Do not pass literals as localized parameters
  • Normalize strings to uppercase
  • Do not hard code locale specific strings
  • Use ordinal string comparison
  • Specify marshaling for PInvoke string arguments
  • Set locale for data types

That provides an excellent head start on getting savvy with globalization.
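To give a flavor of the first item on that list, consider a minimal sketch of parsing a number with and without an explicit culture (CultureInfo lives in System.Globalization):

// Uses the current thread's culture; in a culture where '.' groups
// thousands, "1234.56" parses as 123456.
double ambiguous = double.Parse("1234.56");

// States the intent explicitly and behaves the same everywhere.
double invariant = double.Parse("1234.56", CultureInfo.InvariantCulture);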

The Takeaway

Throughout the post, I've talked about my tendency not to bother with things that don't solve immediate problems for me.  I realize philosophical differences in approach exist, but I stand by this practice to this day.  And I don't say this only because of time savings and avoiding premature optimization.  Storing up an arsenal of specific "best practices" in your head threatens to entrench you in your ways and to establish an approach of "that's just how you do it."

And yet, not doing this can lead to making rookie mistakes and later repeating them.  But, for me, that's where automated tooling enters the picture.  I understand the globalization problem in theory.  That I have not forgotten.  And I can use a tool like CodeIt.Right to bridge the gap between theory and specifics in short order, creating just-in-time solutions to problems that I have.

So to conclude the post, I would offer the following in takeaway.  Stop memorizing all of the little things you need to check for at the method level in coding. Let tooling do that for you, so that you can keep big picture ideas in your head.  I'd say, "don't lose sight of the forest for the trees," but with tooling, you can see the forest and the trees.

Learn more about how CodeIt.Right can help you automate code reviews, improve your code quality, and ensure your code is globalization ready.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 28 March 2017 07:10:00 (Pacific Standard Time, UTC-08:00)
 Tuesday, 21 March 2017

Today, I'd like to offer a somewhat lighthearted treatment to a serious topic.  I generally find that this tends to offer catharsis to the frustrated.  And the topic of code review tends to lead to lots of frustration.

When talking about code review, I always make sure to offer a specific distinction.  We can divide code reviews into two mutually exclusive buckets: automated and manual.  At first, this distinction might sound strange.  Most readers probably think of code reviews as activities with exclusively human actors.  But I tend to disagree.  Any static analyzer (including the compiler) offers feedback.  And some tools, like CodeIt.Right, specifically regard their suggestions and automated fixes as an automation of the code review process.

I would argue that automated code review should definitely factor into your code review strategy.  It takes the simple things out of the equation and lets the humans involved focus on more complex, nuanced topics.  That said, I want to ignore the idea of automated review for the rest of the post.  Instead, I'll talk exclusively about manual code reviews and, more specifically, where they tend to get ugly.

You should absolutely do manual code reviews.  Full stop.  But you should also know that they can easily go wrong and devolve into useless or even toxic activities.  To make them effective, you need to exercise vigilance with them.  And, toward that end, I'll talk about some manual code review anti-patterns.

The Gauntlet

First up, let's talk about a style of review that probably inspires the most disgust among former participants.  Here, I'm talking about what I call "the gauntlet."

In this style of code review, the person submitting for review comes to a room with a number of self-important, hyper-critical peers.  Of course, they might not view themselves as peers.  Instead, they probably imagine themselves as a panel of judges for some reality show.

From this 'lofty' perch, they attack the reviewee's code with a malevolent glee.  They adopt a derisive tone and administer the third degree.  And, frankly, they crush the spirit of anyone subject to this process, leaving low morale and resentment in their wake.

The Marathon

Next, consider a less awful but still ineffective style of code review.  This one I call "the marathon."  I bet you can predict what I mean by this.

In the marathon code review, the participants sit in some conference room for hours.  It starts out as an enthusiastic enough affair, but as time passes, people's energy wanes.  Nevertheless, it goes on because of an edict that all code needs review and because everyone waited until the 11th hour.  And predictably, people grow careless as time goes on and start to space out.

Marathon code reviews eventually reach a point where you might as well not bother.

The Scattershot Review

Scattershot reviews tend to occur in organizations without much rigor around the code review process.  Perhaps their process does not formally include code review.  Or, maybe, it offers no more specifics than "do it."

With a scattershot review process, the reviewer demonstrates no consistency or predictability in the evaluation.  One day he might suggest eliminating global variables, and on another day, he might advocate for them.  Or, perhaps the variance occurs depending on the reviewer.  Whatever the specifics, you can rest assured you'll never receive the same feedback twice.

This approach to code review can cause some annoyance and resentment.  But morale issues typically take a backseat to simple ineffectiveness and churn in approach.

The Exam

Some of these can certainly coincide.  In fact, some of them will likely coincide.  So it goes with "the exam" and "the gauntlet."  But while the gauntlet focuses mostly on the process of the review, the exam focuses on the outcome.

Exam code reviews occur when the parlance around what happens at the end involves "pass or fail."  If you hear people talking about "failing" a code review, you have an exam on your hands.

Code review should involve a second set of eyes on something to improve it.  For instance, imagine that you wrote a presentation or a whitepaper.  You might ask someone to look it over and proofread it to help you improve it.  If they found a typo, they wouldn't proclaim that you had "failed."  They'd just offer the feedback.

Treating code reviews as exams generally hurts morale and causes the team to lose out on the collaborative dynamic.

The Soliloquy

The review style I call "the soliloquy" involves paying lip service to the entire process.  In literature, characters offer soliloquies when they speak their thoughts aloud regardless of anyone hearing them.  So it goes with code review styles as well.

To understand what I mean, think of times in the past where you've emailed someone and asked them to look at a commit.  Five minutes later, they send back a quick, "looks good."  Did they really review it?  Really?  You have a soliloquy when you find yourself coding into the vacuum like this.

The downside here should be obvious.  If people spare time for only a cursory glance, you aren't really conducting code reviews.

The Alpha Dog

Again, you might find an "alpha dog" in some of these other sorts of reviews.  I'm looking at you, gauntlet and exam.  With an alpha dog code review, you have a situation where a particularly dominant senior developer rules the roost with the team.  In that sense, the title refers both to the person and to the style of review.

In a team with a clear alpha dog, that person rules the codebase with an iron fist.  Thus the code review becomes an exercise in appeasing the alpha dog.  If he is present, this just results in him administering a gauntlet.  But, even absent, the review goes according to what he may or may not like.

This tends to lead team members to a condition known as "learned helplessness," wherein they cease bothering to make decisions without the alpha dog.  Obviously, this stunts their career development, but it also has a pragmatic toll for the team in the short term.  This scales terribly.

The Weeds

Last up, I'll offer a review issue that I'll call "the weeds."  This can happen in the most well meaning of situations, particularly with folks that love their craft.  Simply put, they get "into the weeds."

By this colloquialism, I mean that they get bogged down in details at the expense of the bigger picture.  Obviously, an exacting alpha dog can drag things into the weeds, but so can anyone.  They might wind up in a lengthy digression about some arcane language point, of interest to all parties, but not critical to shipping software.  And typically, this happens with things that you ought to make matters of procedure, or even to address with your automated code reviews.

The biggest issue with a "weeds" code review arises from the poor use of time.  It causes things to get skipped, or else it turns reviews into marathons.

Getting it Right

How to get code reviews right could easily occupy multiple posts.  But I'll close by giving a very broad philosophical outlook on how to approach it.

First of all, make sure that you get clarity up front around code review goals, criteria, and conduct.  This helps to stop anti-patterns wherein the review gets off track or bogged down.  It also prevents soliloquies and somewhat mutes bad behavior.  But, beyond that, look at code reviews as collaborative, voluntary sessions where peers try to improve the general codebase.  Some of those peers may have more or less experience, but everyone's opinion matters, and it's just that -- an opinion for the author to take under advisement.

While you might cringe at the notion that someone less experienced might leave something bad in the codebase, trust me.  The damage you do by allowing these anti-patterns to continue in the name of "getting it right" will be far worse.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 21 March 2017 06:11:00 (Pacific Standard Time, UTC-08:00)
 Tuesday, 14 March 2017

Today, I'll do another installment of the CodeIt.Right Rules, Explained series.  I have now made four such posts in this series.  And, as always, I'll start off by citing my two personal rules about static analysis guidance.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

It may seem as though I'm playing rhetorical games here.  After all, I could simply say, "learn the reasoning behind all suggested fixes."  But I want to underscore the decision you face when confronted with static analysis feedback.  In all cases, you must actively choose to ignore the feedback or address it.  And for both options, you need to understand the logic behind the suggestion.

In that spirit, I'm going to offer up explanations for three more CodeIt.Right rules today.

Type that contains only static members should be sealed

Let's start here with a quick example.  I think this picture will suffice for some number of words, if not necessarily one thousand.

[Screenshot: a tiny "utils" class containing a single static method]
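Since the screenshot doesn't reproduce here, consider a hypothetical reconstruction of that sort of class.  Note that nothing marks it sealed (or static):

public class LinqUtils
{
    // The class contains only static members -- no state and no
    // instance behavior -- which is what trips the rule.
    public static IEnumerable<T> WhereNotNull<T>(IEnumerable<T> source) where T : class
    {
        return source.Where(x => x != null);
    }
}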

Here, I've laid a tiny seed for a Swiss Army Knife, "utils" class.  Presumably, I will continue to dump any method I think might help me with Linq into this class.  But for now, it contains only a single method to make things easy to understand.  (As an aside, I discourage "utils" classes as a practice.  I'm using this example because everyone reading has most assuredly seen one of these things at some point.)

When you run CodeIt.Right analysis on this code, you will find yourself confronted with a design issue.  Specifically, "types that contain only static members should be sealed."

You probably won't have a hard time discerning how to remedy the situation.  Adding the "sealed" modifier to the class will do the trick.  But why does CodeIt.Right object?

The Microsoft guidelines contain a bit more information.  They briefly explain that static analyzers make an inference about your design intent, and that you can better communicate that intent by using the "sealed" keyword.  But let's unpack that a bit.

When you write a class that has nothing but static members, such as a static utils class, you create something with no instantiation logic and no state.  In other words, you could instantiate "a LinqUtils," but you couldn't do anything with it.  Presumably, you do not intend that people use the class in that way.

But what about other ways of interacting with the class, such as via inheritance?  Again, you could create a LinqUtilsChild that inherited from LinqUtils, but to what end?  Polymorphism requires instance members, and none exist here.  The inheriting class would inherit absolutely nothing from its parent, making the inheritance awkward at best.

Thus the intent of the rule.  You can think of it telling you the following.  "You're obviously not planning to let people use inheritance with you, so don't even leave that door open for them to possibly make a mistake."

So when you find yourself confronted with this warning, you have a simple bit of consideration.  Do you intend to have instance behavior?  If so, add that behavior and the warning goes away.  If not, simply mark the class sealed.

Async methods should have async suffix

Next up, let's consider a rule in the naming category.  Specifically, when you name an async method without suffixing "Async" onto its name, you see this warning.  Microsoft declares the convention succinctly in their guidelines.

By convention, you append "Async" to the names of methods that have an async modifier.

So, CodeIt.Right simply tells us that we've run afoul of this convention.  But, again, let's dive into the reasoning behind this rule.
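A hypothetical pair of signatures makes the point, with Customer and _repository as stand-ins:

// Triggers the naming warning.
public async Task<Customer> GetCustomer(int id)
{
    return await _repository.FindAsync(id);
}

// Complies with the convention.
public async Task<Customer> GetCustomerAsync(int id)
{
    return await _repository.FindAsync(id);
}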

When Microsoft introduced this programming paradigm, they did so in a non-breaking release.  This caused something of a conundrum for them because of a perfectly understandable language rule stating that method overloads cannot vary only by a return type.  To take advantage of the new language feature, users would need to offer the new, async methods, and also backward compatibility with existing method calls.  This put them in the position of needing to give the new, async methods different names.  And so Microsoft offered guidance on a convention for doing so.

I'd like to make a call-out here with regard to my two rules at the top of each post.  This convention came about because of expediency and now sticks around for convention's sake.  But it may bother you that you're asked to bake a keyword into the name of a method.  This might trouble you in the same way that a method called "GetCustomerNumberString()" might bother you.  In other words, while I don't advise you go against convention, I will say that not all warnings are created equally.

Always define a global error handler

With this particular advice, we dive into warnings specific to ASP.NET.  When you see this warning, it concerns the Global.asax file.  To understand a bit more about that, you can read this Stack Overflow question.  In short, Global.asax allows you to define responses to "system level" events in a single place.

CodeIt.Right is telling you to define just such a response -- specifically, a handler for the "Application_Error" event.  This event fires whenever an exception bubbles all the way up without being trapped anywhere by your code.  And that's a perfectly reasonable state of affairs -- your code won't trap every possible exception.

CodeIt.Right wants you to define a default behavior on application errors.  This could mean something as simple as redirecting to a page that says, "oops, sorry about that."  Or, it could entail all sorts of robust, diagnostic information.  The important thing is that you define it and that it be consistent.  You certainly don't want to learn from your users what your own application does in response to an error.
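A minimal sketch of such a handler in Global.asax might look like the following, with the logging call and redirect target as placeholders:

protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();

    // Record the failure with whatever logging you already use (placeholder).
    LogError(ex);

    // Clear the error and land the user somewhere consistent and friendly.
    Server.ClearError();
    Response.Redirect("~/Error");
}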

So spend a bit of time defining your global error handling behavior.  By all means, trap and handle exceptions as close to the source as you can.  But always make sure to have a backup plan.

Until Next Time

In this post, I ran the gamut across concerns.  I touched on an object-oriented design concern.  Then, I went into a naming consideration involving async, and, finally, I talked specifically about ASP.NET programming considerations.

I don't have a particular algorithm for the order in which I cover these subjects.  But, I like the way this shook out.  It goes to show you that CodeIt.Right covers a lot of ground, across a lot of different landscapes of the .NET world.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 14 March 2017 07:09:00 (Pacific Standard Time, UTC-08:00)
 Saturday, 11 March 2017

GhostDoc version 5.5 delivers compatibility with VS2017 RTM as well as a number of fixes:

  • VS2017 RTM support
  • GhostDoc is now also available as VSIX for VS2017
  • Documentation Hints no longer visible in the Debug mode
  • Fixed issue wrapping lines within the <value></value> tag
  • In the Offline Activation Preview - the fields are now auto-selected on focus/click for easy copying
  • GhostDoc is no longer adding extra line when re-documenting header in VB
  • GhostDoc is no longer appending generated XML comments to the existing comment when using auto-generated properties in VB

For the complete list of changes, please see What's New in GhostDoc v5

For overview of the v5.0 features, visit Overview of GhostDoc v5.0 Features

Download the new build at http://submain.com/download/ghostdoc/

posted on Saturday, 11 March 2017 17:46:00 (Pacific Standard Time, UTC-08:00)
 Tuesday, 07 March 2017

I have long since cast my lot with the software industry.  But, if I were going to make a commercial to convince others to follow suit, I can imagine what it would look like.  I'd probably feature cool-looking, clear whiteboards, engaged people, and frenetic design of the future.  And a robot or two.  Come help us build the technology of tomorrow.

Of course, you might later accuse me of bait and switch.  You entered a bootcamp, ready to build the technology of tomorrow.  Three years later, you found yourself on safari in a legacy code jungle, trying to wrangle some SharePoint plugin.  Erik, you lied to me.

So, let me inoculate myself against that particular accusation.  With a career in software, you will certainly get to work on some cool things.  But you will also find yourself doing the decidedly less glamorous task of software maintenance.  You may as well prepare yourself for that now.

The Conceptual Difference: Build vs Maintain

From the software developer's perspective, this distinction might evoke various contrasts.  Fun versus boring.  Satisfying versus annoying.  New problem versus solved problem.  My stuff versus that of some guy named Steve that apparently worked here 8 years ago.  You get the idea.

But let's zoom out a bit.  For a broader perspective, consider the difference as it pertains to a business.

Build mode (green field) means a push toward new capability.  Usually, the business will regard construction of this capability as a project with a calculated return on investment (ROI).  To put it more plainly, "we're going to spend $500,000 building this thing that we expect to make/save us $1.5 million by next year."

Maintenance mode, on the other hand, presents the business with a cost center.  They've now made their investment and (at least partially) realized return on it.  The maintenance team just hangs around to prevent backslides.  For instance, should maintenance problems crop up, you may lose customers or efficiency.

Plan of Attack: Build vs Maintain

Because the business regards these activities differently, it will attack them differently.  And, while I can't speak to every conceivable situation, my consulting work has shown me wide variety.  So I can speak to general trends.

In green field mode, the business tends to regard the work as an investment.  So, while management might dislike overruns and unexpected costs, they will tend to tolerate them more.  Commonly, you see a "this will pay off later" mentality.

On the maintenance side of things, you tend to see far less forgiveness.  Certainly, all parties forgive unexpected problems a lot less easily.  They view all of it as a burden.

This difference in attitude translates to the planning as well.  Green field projects justifiably command full time people for the duration of the project.  Maintenance mode tends to get you familiar with the curious term "half of a person."  By this, I mean you hear things like "we're done with the Sigma project, but someone needs to keep the lights on.  That'll be half of Alice."  The business grudgingly allocates part time duty to maintenance tasks.

Why?  Well, maintenance tends to arise out of reactive scenarios.

Reactive Mode and the Value of Automation

Maintenance mode in software will have some planned activities, particularly if it needs scheduled maintenance.  But most maintenance programmers find themselves in a reactive, "wait and see" kind of situation.  They have little to do on the project in question until an outage happens, someone discovers a bug, or a customer requests a new feature.  Then, they spring into action.

Business folks tend to hate this sort of situation.  After all, you need to plan for this stuff, but you might have someone sitting around doing nothing.  It is from this fundamental conundrum that "half people" and "quarter people" arise.  Maintenance programmers usually have other stuff to juggle along with maintaining "Sigma."


Because of this double duty, the business doubles down on pressure to minimize maintenance.  After all, not only does it create cost, but it takes the people away from other, profit-driven things that they could otherwise do.

So how do we, as programmers, and we, as software shops, best deal with this?  We make maintenance as turnkey as possible by automating as much as possible.  Oh, and you should automate this stuff during green field time, when management is willing to invest.  If you tell them it means less maintenance cost, they'll probably bite.

Automate the Test Suite

First up for automation candidates, think of the codebase's test suite.  Hopefully, you've followed my advice and built this during green field mode.  But, if not, it's never too late to start.

Think of how time-consuming a job QA has.  If manually running the software and conducting experiments constitutes the entirety of your test strategy, you'll find yourself hosed at maintenance time.  With "half a person" allocated, no one has time for that.  Without an automated suite, then, testing falls by the wayside, making your changes to a production system even more risky.

You need to automate a robust test suite that lets you know if you have broken anything.  This becomes even more critical when you consider that most maintenance programmers haven't touched the code they modify in a long time, if ever.
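
To make that concrete, here's a minimal sketch of such a test in xUnit.  The InvoiceCalculator class is hypothetical -- a stand-in for whatever logic your system actually contains:

using Xunit;

// Hypothetical production class, standing in for your actual business logic.
public class InvoiceCalculator
{
    private readonly decimal _discountThreshold;
    private readonly decimal _discountRate;

    public InvoiceCalculator(decimal discountThreshold, decimal discountRate)
    {
        _discountThreshold = discountThreshold;
        _discountRate = discountRate;
    }

    public decimal Total(decimal subtotal) =>
        subtotal >= _discountThreshold ? subtotal * (1 - _discountRate) : subtotal;
}

public class InvoiceCalculatorTests
{
    [Fact]
    public void Total_AppliesDiscount_WhenSubtotalMeetsThreshold()
    {
        var calculator = new InvoiceCalculator(discountThreshold: 100m, discountRate: 0.1m);

        var total = calculator.Total(subtotal: 200m);

        // 10% off of 200 is 180.  If a maintenance change breaks this rule,
        // the suite says so in seconds -- no "half a person" required.
        Assert.Equal(180m, total);
    }
}

A build server can run hundreds of tests like this on every commit, which matters enormously when the person making the change hasn't touched the code in question for months.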

Automate Code Reviews

If I were to pick a one-two punch for code quality, that would involve unit tests and code review.  Therefore, just as you should automate your test suite, you should automate your code review as well.

If you think testing goes by the wayside in an under-staffed, cost-center model, you can forget about peer review altogether.  During the course of my travels, I've rarely seen code review continue into maintenance mode, except in regulated industries.

Automated code review tools exist, and they don't require even "half a person."  An automated code review tool serves its role without consuming bandwidth.  And it provides maintenance programmers operating in high-risk scenarios with a modicum of comfort and a safety net.

Automate Production Monitoring

For my last maintenance mode automation tip of the post, I'll suggest that you automate production monitoring capabilities.  This covers a fair bit of ground, so I'll generalize by saying these include anything that keeps your finger on the pulse of your system's production behavior.

You have logging, no doubt, but do you monitor the logs?  Do you keep track of system outages and system load?  If you roll software to production, do you have a system of checks in place to know if something smells fishy?

You want to make the answer to these questions "yes."  And you want to make the answer "yes" without needing to go in and manually check.  Automate various means of monitoring your production software and providing yourself with alerts.  This will reduce maintenance costs across the board.
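
As a tiny sketch of the idea, consider a scheduled health check along these lines.  The endpoint and the alert channel here are assumptions for illustration; many shops use dedicated monitoring tools instead, but the principle stays the same:

using System;
using System.Net.Http;
using System.Threading.Tasks;

// A minimal, hypothetical health check: ping the app, alert on failure.
public static class ProductionMonitor
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task CheckAsync(string healthEndpoint)
    {
        try
        {
            var response = await Client.GetAsync(healthEndpoint);
            if (!response.IsSuccessStatusCode)
                Alert($"Health check returned {(int)response.StatusCode} for {healthEndpoint}");
        }
        catch (HttpRequestException ex)
        {
            Alert($"Health check failed for {healthEndpoint}: {ex.Message}");
        }
    }

    private static void Alert(string message)
    {
        // Stand-in for email, chat webhook, or pager integration.
        Console.Error.WriteLine($"[ALERT {DateTime.UtcNow:O}] {message}");
    }
}

The specific check matters less than the fact that nobody has to remember to run it.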

Automate Anything You Can

I've listed some automation examples that come to mind as the most critical, based on my experience.  But, really, you should automate anything around the maintenance process that you can.

Now, you might think to yourself, "we're programmers, we should automate everything."  Well, that subject could make for a whole post in and of itself, but I'll speak briefly to the distinction.  Build mode usually involves creating something from nothing on a large scale.  While you can automate the scaffolding around this activity, you'll struggle to automate a significant amount of the process.

But that ratio gets much better during maintenance time.  So the cost center nature of maintenance, combined with the higher possible automation percentage, makes it a rich target.  Indeed, I would argue that strategic automation defines the art of maintenance.

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Related resources

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 07 March 2017 09:04:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 28 February 2017

For the last several years, I've made more and more of my living via entrepreneurial pursuits.  I started my career as a software developer and then worked my way along that career path before leaving full-time employment to do my own thing.  These days, I consult, but I also make training content, write books, and offer productized services.

When you start to sell things yourself, you come to appreciate the value of marketing.  As a techie, this feels a little weird to say, but here we are.  When you have something of value to offer, marketing helps you make interested parties aware of your offer: "I think you'd like this and would find it worth your money, if you gave it a shot."

In pursuit of marketing, you can use all manner of techniques.  But today, I'll focus on a subtle one that involves generating a good reputation with those who do buy your products.  I want to talk about making good documentation.

The Marketing Importance of Documentation

This probably seems an odd choice for a marketing discussion.  After all, most of us think of marketing as what we do before a purchase to convince customers to make that purchase.  But repeat business from customer loyalty counts for a lot.  Your loyal customers provide recurring revenue and, if they love their experience, they may evangelize for your brand.

Providing really great documentation makes an incredible difference for your product.  I say this because it can mean the difference between frustration and quick, easy wins for your user base.  And, from a marketing perspective, which do you think makes them more likely to evangelize?  Put yourself in their shoes.  Would you recommend something hard to figure out?

For a product with software developers as end users, software documentation can really go a long way.  And with something like GhostDoc's "build help documentation" feature, you can notch this victory quite easily.  But the fact that you can generate that documentation isn't what I want to talk about today, specifically.

Instead, I want to talk about going the extra mile by customizing it.

Introducing "Conceptual Content"

You can easily generate documentation for your API with the click of a button.  But you can also do a lot more.

GhostDoc Enterprise features something called "Conceptual Content."  Basically, it allows you to customize and add on to what the engine generates using your code and XML doc comments.  This comes in handy in ways limited only by your imagination, but here are some ideas.

  • A welcome page.
  • A support page.
  • A "what's new" page.
  • A EULA/license.
  • Custom branding.

You probably get the idea.  If you already look to provide documentation for your users, you no doubt have some good additional thoughts for what they might value.  Today, I'm going to show you the simplest way to get going with conceptual content so that you can execute on these ideas of yours.

How It Works at a High Level

For GhostDoc to work its documentation-generating magic, it creates a file in your solution directory named after your solution.  For instance, with my ChessTDD solution, it generates a file called "ChessTDD.sln.GhostDoc.xml."  If you crack open this file, you will see settings mirroring the ones you select in Visual Studio when using GhostDoc's "Build Help Documentation."

To get this going, we face the task of basically telling this file about custom content that we will create.  First, close out of Visual Studio, and let's get to work hacking at this thing a bit.  We're going to add a simple, text-based welcome page to the standard help documentation.  To do this, we need the following things.

  • Modifications to the GhostDoc XML file.
  • The addition of a "content" file describing our custom content,
  • Addition of a "content" folder containing the AML files that make up the actual, custom pages.

Let's get started.

The Nuts and Bolts

First, open up the main GhostDoc XML file and look for the "ConceptualContent" section.  It looks like this.

<ConceptualContent>
  <ContentLayout />
  <MediaContent />
</ConceptualContent>

In essence, this says, "no conceptual content here."  We need to change that.  So, replace the empty ContentLayout entry with this (substituting the name of your solution for "ChessTDD" if you want to follow along with your own code instead of my ChessTDD code).

<ContentLayout file="ChessTDD.content" folder="Content\" />

Next up, you need to create the file you just told it about, ChessTDD.content.  This file goes in the same directory as your solution and looks like this.

<?xml version="1.0" encoding="utf-8"?>
<Topics>
  <Topic id="4684cc2f-3179-4871-b7a4-ad69f0f260a3" visible="True" isDefault="true" title="Welcome">
    <HelpKeywords>
      <HelpKeyword index="K" term="Welcome" />
    </HelpKeywords>
  </Topic>
</Topics>

For the ID, I simply generated a GUID using this site.  This ID simply needs to be unique and to match the next file that we'll create.  Next up, create the folder you told ContentLayout about, called "Content."  Then add the file Welcome.aml to that folder, with the following text.

<?xml version="1.0" encoding="utf-8"?>
<topic id="4684cc2f-3179-4871-b7a4-ad69f0f260a3" revisionNumber="1">
  <developerConceptualDocument
    xmlns="http://ddue.schemas.microsoft.com/authoring/2003/5"
    xmlns:xlink="http://www.w3.org/1999/xlink">
    
    <introduction>
      <para>Welcome to our help section!</para>
    </introduction>

  </developerConceptualDocument>
</topic>

Notice that we use the same GUID here as in the content file.  We do this in order to link the two.

Let's Give it a Whirl

With your marked-up GhostDoc XML file, the new content file, and the new Content folder and welcome AML file, you can now re-launch Visual Studio.  Open the solution and navigate through GhostDoc to generate the help documentation CHM file.

blog-custom-content-help-docs

There you have it.  Now you can quickly add a page to the automatically generated help documentation.

Keep in mind that I did the absolute, dead simplest possible thing I could do for demonstration purposes.  You can do much more.  For example:

  • Add images/media to the pages.
  • Add cross-links in there for reference.
  • Add snippets and examples.
  • Build lists and tables.

As I said earlier, you'll no doubt think of all manner of things to please your user base with this documentation.  I suggest getting in there, making it your own, and leaving a nice, personal touch on things for them.  When it comes to providing a good user experience, a little can go a long way.

Related resources


Learn how GhostDoc can help you simplify your XML comments and produce and maintain quality help documentation.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 28 February 2017 09:07:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 21 February 2017

In what has become a series of posts, I have been explaining some CodeIt.Right rules in depth.  As with the last post in the series, I'll start off by citing two rules that I, personally, follow when it comes to static code analysis.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

It may seem as though I'm playing rhetorical games here.  After all, I could simply say, "learn the reasoning behind all suggested fixes."  But I want to underscore the decision you face when confronted with static analysis feedback.  In all cases, you must actively choose to ignore the feedback or address it.  And for both options, you need to understand the logic behind the suggestion.

In that spirit, I'm going to offer up explanations for three more CodeIt.Right rules today.

Use Constants Where Appropriate

First up, let's consider the admonition to "use constants where appropriate."  Consider this code that I lifted from a Github project I worked on once.

blog-codeitright-rules-part3-1

I received this warning on the first two lines of code for this class.  Specifically, CodeIt.Right objects to my usage of static readonly string. If I let CodeIt.Right fix the issue for me, I wind up with the following code.

blog-codeitright-rules-part3-2

Now, CodeIt.Right seems happy.  So, what gives?  Why does this matter?

I'll offer you the release notes of the version where CodeIt.Right introduced this rule.  If you look at the parenthetical next to the rule, you will see "performance."  This preference has something to do with code performance.  So, let's get specific.

When you declare a variable using const or static readonly, think in terms of magic values and their elimination.  For instance, imagine my UserAgentKey value.  Why do you think I declare that the way I did?  I did it to name that string, rather than using it inline as a "magic" string. 

As a maintenance programmer, how frustrating do you find stumbling across lines of code like, "if(x == 299)"?  "What is 299, and why do we care?!"

So you introduce a variable (or, preferably, a constant) to document your intent.  In the made-up hypothetical, you might then have "if(x == MaximumCountBeforeRetry)".  Now you can easily understand what the value means.

Either way of declaring this (constant or static, readonly field) serves the replacement purpose.  In both cases, I replace a magic value with a more readable, named one.  But in the case of static readonly, I replace it with a variable, and in the case of const, I replace it with, well, a const.

From a performance perspective, this matters.  You can think of a declaration of const as simply hard-coding a value, but without the magic.  So, when I switch to const in my declaration, the compiler replaces every usage of UserAgentKey with the string literal "user-agent".  After compilation, you can't tell whether I used a const or just hard-coded it everywhere.

But with a static readonly declaration, it remains a variable, even when you use it like a constant.  It thus incurs the relative overhead penalty of performing a variable lookup at runtime.  For this reason, CodeIt.Right steers you toward considering making this a constant.
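
To make the distinction concrete, here's a minimal sketch.  The names echo the prose above rather than the exact code in the screenshots:

public static class RequestHeaders
{
    // Baked into the IL at compile time: every usage site gets the
    // literal "user-agent", exactly as if you had hard-coded it there.
    public const string UserAgentKey = "user-agent";

    // Remains a field, initialized once at runtime.  Every usage incurs
    // a (small) field lookup, which is what CodeIt.Right flags.
    public static readonly string AcceptKey = "accept";
}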

Parameter Names Should Match Base Declaration

For the next rule, let's return to the Github scraper project from the last example.  I'll show you two snippets of code.  The first comes from an interface definition and the second from a class implementing that interface.  Pay specific attention to the method, GetRepoSearchResults.

blog-codeitright-rules-part3-3

blog-codeitright-rules-part3-4

If you take a look at the parameter names, it probably won't surprise you to see that they do not match.  Therein lies the problem that CodeIt.Right has with my code.  It wants the implementing class to match the interface definition (i.e. the "base").  But why?

In this case, we have a fairly simple answer.  Having different names for the conceptually same method creates confusion. 

Specifically, maintainers will struggle to understand whether you meant to override or overload the method.  In our mind's eye, identical method signatures signal polymorphism, while the same name with different parameters signals an overload.  In a sense, changing the name of a parameter fakes maintenance programmers out.
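
A contrived illustration of the problem (not the actual scraper code) might look like this:

public interface IRepoSearcher
{
    // The "base" declaration calls the parameter searchTerm...
    string GetRepoSearchResults(string searchTerm);
}

public class GithubSearcher : IRepoSearcher
{
    // ...but the implementation renames it to query.  Readers now wonder
    // whether this method differs from the interface's, and callers using
    // named arguments see different names depending on whether they hold
    // an interface or a class reference.
    public string GetRepoSearchResults(string query)
    {
        return "results for " + query;
    }
}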

Do Not Declare Externally Visible Instance Fields

I don't believe we need a screenshot for this one.  Consider the following trivial code snippet.

public class SomeClass
{
    public string _someVariable;
}

This warning says, "don't do that."  More specifically, don't declare an instance field with external (to the type) visibility.  The question is, "why not?"

If you check out the Microsoft guidance on the subject, they explain that the "use of a field should be as an implementation detail."  In other words, they contend that you violate encapsulation by exposing fields.  Instead, they say, you should expose this via a property (which simply offers syntactic sugar over a method).

Instead of continuing with abstract concepts, I'll offer a concrete example.  Imagine that you want to model a family and you declare an integer field called _numberOfChildren. That works fine initially, but eventually you encounter the conceptually weird edge case where someone tries to define a family with -1 children.  With an integer field, you can technically do this, but you want to prevent that from happening.

With clients of your class directly accessing and setting this field, you wind up having to go install this guard logic literally everywhere your clients interact with the field.  But had you hidden the field behind a property, you could simply add logic to the property setter wherein you throw an exception on an attempt to set a negative value.
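
A sketch of the guarded version might look like this, using the hypothetical family model from above:

using System;

public class Family
{
    private int _numberOfChildren;

    public int NumberOfChildren
    {
        get { return _numberOfChildren; }
        set
        {
            // One guard clause in one place, instead of checks scattered
            // everywhere clients could touch a public field.
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value),
                    "A family cannot have a negative number of children.");
            _numberOfChildren = value;
        }
    }
}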

This rule attempts to help you future-proof your code and follow good OO practice.

Until Next Time

Somewhat by coincidence, this post focused heavily on the C# flavor of object-oriented programming.  We looked at constants versus field access, but then focused on polymorphism and encapsulation.

I mention this because I find it interesting to see where static analyzers take you.  Follow along for the rest of the series and, hopefully, you'll learn various useful nuggets about the language you use.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 21 February 2017 09:55:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 14 February 2017

Editorial note: most of the GhostDoc bloopers below are history, but they make for a good laugh. We encourage you to share the GhostDoc goofs you have encountered – they are great entertainment and give us a chance to improve the product.

Some years ago, I was doing work for some client or another.  Honestly, I have no recollection of the specifics, with the exception of a preference for exhaustive commenting.  Every class, every method, every property, and every field.

Of course, I didn't learn this at first.  I didn't even learn it in a reasonable time frame.  Instead, I learned it close to handover time.  And so things got a little desperate.

Enter GhostDoc, My Salvation

Now, depending on your perspective, you might scold me for not diligently commenting all along.  I will offer the explanation that the code had no public component and no intended APIs or extensions.  It also required no "why" types of explanations; this was simple stuff.

The client cited policy.  "We comment everything, and we're taking over this code, so we want you to do the same."  Okie Dokie.

Now, I knew that in a world of code generation and T4 templates, someone must have invented a tool that would generate some sort of comments or another.  At the time, a quick Google search brought me to a saving grace: the free tool GhostDoc.

While it did not allow me to carpet bomb my code with comments in a single click (and understandably so), it did allow me to do it for entire files at a time.  Good enough.  I paid my non-commenting penance by spending an hour or so commenting this way.

And do you know what?  It generated pretty respectable comments.  I recall feeling impressed because I expected empty template comments.  Instead, GhostDoc figured out how to string some verbs and nouns together.

But It's Not All Roses

Let me get specific about capabilities and limitations.  Had I tried to generate documentation for my Chess TDD codebase's MovePiece method, GhostDoc would have treated me to this comment.

/// <summary>
/// Moves the piece.
/// </summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);

    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;

    ReconcileEnPassant(origin, destination, pieceToMove);
}

See what I mean?  I did a pretty good job with clear naming in this method, and GhostDoc rewarded me with a method header that stands on its own.  MovePiece() moves the piece.  Specifically, it moves the origin to the destination.  Not too shabby!

But what about less intuitive naming structures?  Well, sometimes, things can get weird.

Do you know there was a Visual Studio addon that generated comments from method names? I once inherited a project full of these. pic.twitter.com/fuFjZjz4LJ

— Dan Abramov (@dan_abramov) December 15, 2016

Yes, that's right.  The ubiquitous "main" method for a C# application "mains the args."

What Else?

Okay, that ought to furnish a chuckle.  But now that you see the issue, you can probably start to imagine all manner of ways to flummox the poor tool.  (As an aside, you can probably also understand why they want to force you to pay more attention to the generated comments than just saying "generate comments for the whole solution.")

What do you think used to happen back then for a no-op method that you named "DoNothing?"  You'd get "Does the Nothing."  Good work on conjugation, but still missing the intent.  How about an event handler called "AfterInsert?"  That's right.  "Afters the Insert."  And "TaskComplete?"  "Tasks the completed."  Now, here's a tougher one: "OnClose."  You probably weren't expecting "Oncloses the instance."

Of course, other idiomatic programming concerns also befuddled the tool in different ways.

  • ToModel() --> "Toes the model."
  • ToTitleCase() --> "Toes the title case."
  • ToComment() -> "Automatics the comment"
  • ForgetPassword() --> "Forgets password"
  • SignOut() --> "Signs the out"

Historically, you could make GhostDoc generate some pretty hilarious comments.  This means that if you simply generated the comments and paid no further attention, you might look pretty absurd to someone auditing commit history.  Fortunately, my oversight of the generated comments saved me from that fate.

Laugh at the Tool?

So should you laugh at GhostDoc?  Well, sure.  These comments are funny, and everyone ought to have a sense of humor.

But on a deeper level, no, not really.  Natural language processing presents a difficult problem.  Imagine yourself writing code to process class, method, and variable names.  At first, you might think it easy.  But then you'd hit things like "ToString()" and "TaskComplete()."  As a human and programmer, you understand these nuances.  But how do you make your code do the same?

Oh, sure, you can hard-code some overrides for unique situations.  But that will quickly become cumbersome.  As you can see, you have a hard problem on your hands.

What You Can Do

What should you do, then, as a user of GhostDoc?  Well, first and foremost, don't just mindlessly generate comments.  At the bare minimum, do what I did.  Mindlessly generate comments, but also look to make sure that they make sense, correcting any that don't.

But, beyond that, use GhostDoc as a starting point for meaningful communication.  Generate XML doc comments only when doing so to communicate with someone, either via code comments or IntelliSense/help documentation.  And when you do that, understand that GhostDoc simply offers you an intelligent starting point.  The tool does not purport to replace you as a programmer or a communicator of your intent.  It just gets you started.

Improving the Situation

Of course, GhostDoc has improved over the years.  For instance, if I write a ToString() method and generate, it no longer offers me a "toes the string."  Instead, it does this.

/// <summary>
/// Returns a <see cref="System.String" /> that represents this instance.
/// </summary>
/// <returns>A <see cref="System.String" /> that represents this instance.</returns>
public string ToString()
{
    return "I'm a board.";
}

Now, GhostDoc actually does something pretty sophisticated.  It figures out that ToString() has a special place in the language and adapts accordingly, explaining that place with a marked-up comment.

Similarly, if you try to reproduce some of the shenanigans I detailed above, you will receive less obtuse results.  The people working on the GhostDoc codebase are constantly seeking to improve the quality of the generated comments, even if they only serve as a starting point.  But they could always use some help, so here is what I propose.

Luckily, GhostDoc has a Summary Override feature, and you should use it to improve future generated comments whenever you find a new blooper.

Write in or comment with funny or weird things that you can get GhostDoc to produce when you generate documentation.  Seriously.  It'll give you, me, the readership, and the tool authors a good laugh.  But it will also bring deficiencies to their attention, allowing them to address, correct, and improve.  So go ahead.  What are your favorite GhostDoc bloopers?

Please be sure to download the latest version of GhostDoc so you don’t have to deal with the generated comments quoted in this post.

Learn more about how GhostDoc can help simplify your XML comments and produce and maintain quality help documentation.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 14 February 2017 10:01:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, 06 February 2017

If you haven't lived in a techie cave the last 10 years, you've probably noticed JavaScript's rise to prominence.  Actually, forget prominence.  JavaScript has risen to command consideration as today's lingua franca of modern software development.

I find it sort of surreal to contemplate that, given my own backstory.  Years (okay, almost 2 decades) ago, I cut my teeth with C and C++.  From there, I branched out to Java, C#, Visual Basic, PHP, and some others I'm probably forgetting.  Generally speaking, I came of age during the heyday of object oriented programming.

Oh, sure I had awareness of other paradigms.  In college, I had dabbled with (at the time) the esoteric concept of functional programming.  And I supplemented "real" programming work with scripts as needed to get stuff done.  But object-oriented languages gave us the real engine that drove serious work.

JavaScript fell into the "scripting" category for me when I first encountered it, probably around 2001 or 2002.  It and something called VBScript competed for the crown of half-baked, hacky languages for doing weird stuff in the browser.  JavaScript won that battle and cemented itself in my mind as "the thing to do when you want an alert box in the browser."

Even as it has risen to prominence and inspired a generation of developers, I suppose I've never really shed my original baggage with it.  While I conceptually understand its role as "assembly language of the web," I have a hard time not seeing the language that was written in 10 days and named to deliberately confuse people.

GhostDoc, Help, and IntelliSense

I lead with all of this to help you understand the lens through which to read this post.  As a product of the strongly typed, object-oriented wave of programmers, my subconscious still regards JavaScript as something of an afterthought, its dominance notwithstanding.  And so when I see advances in the JavaScript world, I tend to think, "Oh, cool, you can do that with JavaScript too!"

I'll come back to that shortly.  But first, I'd like to remind you of a prominent GhostDoc feature.  Specifically, I'm referring to the "Document This" capability.  With your cursor inside of a method or type, you can use this capability to generate instant, well-formatted, XML doc comments.  Thoroughly documented code sits just a "Ctrl-Shift-D" away.

If you work with C# a lot and generate public APIs and/or help documentation, you will love this capability.  It doesn't absolve you of needing to add specific context.  But it does give you a thorough base upon which to build.  For example, consider this method from my ChessTDD example codebase.

/// <summary>
///  Gets the moves from.
/// </summary>
/// <param name="startingLocation">The starting location.</param>
/// <param name="boardSize">Size of the board.</param>
/// <returns>IEnumerable&lt;BoardCoordinate&gt;.</returns>
public override IEnumerable<BoardCoordinate> GetMovesFrom(
    BoardCoordinate startingLocation,
    int boardSize = Board.DefaultBoardSize)
{
    var oneSquareAwayMoves = GetAllRadialMovesFrom(startingLocation, 1);
    return oneSquareAwayMoves.Where(bc => bc.IsCoordinateValidForBoardSize(boardSize));
}

I took that un-commented method and used GhostDoc to generate this.  Now, I should probably update the summary, but I really don't see any other deficiencies here.  It nails the parameter names, and it even escapes the generic return type for readability.

It Works with JavaScript, Too

Now, as I've said earlier, I don't really play much in the JavaScript world.  I generally focus on application code and server side stuff, delving into the realm of the client side browser only when necessary to get things done.  So you can imagine my reaction to learn that GhostDoc can do this with JavaScript too.  It can produce sophisticated XML doc comments that allow for use in generating help and with IntelliSense.

Actually, if I really want to embarrass myself, I'll confess that my initial reaction was, "JavaScript code has IntelliSense?"  I think maybe I knew that, but I assumed that such a thing would offer little value in a dynamically typed language.  Shows what I know.  But then, once I got over that initial wave of ignorance, I thought, "cool!" at the idea that you could generate documentation for JavaScript.  I then wondered whether well-documented JavaScript was more or less rare than well-documented C#.  That jury has yet to come back, as far as I know.

Let's Take a Look

Let's take a look at what GhostDoc actually does here.  To demonstrate, I created a branch of my ChessTDD project, to which I added an ASP MVC front end.  This plopped some JavaScript right in my lap, in the form of the default files in the "Scripts" folder.  To see this in action, I popped open modernizr-2.6.2.js and found a method upon which to experiment.

blog-ghostdoc-with-javacript-01

ShivMethods sounded... interesting, in a prison yard sort of way.  As a publicly consumable library, it already has documentation.  And, as I learned by playing around, that documentation already gets picked up by IntelliSense.  But I figured I'd see what happened by using GhostDoc anyway, for comparison's sake.

So I lazily put my cursor randomly inside the method and fired away with a Ctrl-Shift-D.  Check it out.

blog-ghostdoc-with-javacript-02

As you can see, GhostDoc puts the comment inside of the method and behaves largely as it would with C#.  In case you're wondering about the stylistic difference, GhostDoc does this because it's the Microsoft-preferred way.  To underscore that, look what happens now when we consume this method with IntelliSense.

blog-ghostdoc-with-javacript-03

As you can see, IntelliSense gives preference to the comments generated by GhostDoc.

The Takeaway

What's my main point here?  I simply wanted to share that you can use GhostDoc for JavaScript just as you can with C# or VB.

Forget about my crotchety ways and biases.  Your JavaScript/client-side code is every bit as important as what you write anywhere else.  You should treat it as such when documenting it, creating help for it, and making sure users know how to consume your libraries.  So make sure you've got Ctrl-Shift-D at the ready when using Visual Studio, regardless of where the code gets executed.

Learn more about how GhostDoc can help simplify your XML comments and produce and maintain quality help documentation.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Monday, 06 February 2017 11:26:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, 30 January 2017

For years, I can remember fighting the good fight for unit testing.  When I started that fight, I understood a simple premise.  We, as programmers, automate things.  So, why not automate testing?

Of all things, a grad school course in software engineering introduced me to the concept back in 2005.  It hooked me immediately, and I began applying the lessons to my work at the time.  A few years and a new job later, I came to a group that had not yet discovered the wonders of automated testing.  No worries, I figured, I can introduce the concept!

Except, it turns out that people stuck in their ways kind of like those ways.  Imagine my surprise to discover that people turned up their noses at the practice.  Over the course of time, I learned to plead my case, both in technical and business terms.  But it often felt like wading upstream against a fast-moving current.

Years later, I have fought that fight over and over again.  In fact, I've produced training materials, courses, videos, blog posts, and books on the subject.  I've brought people around to see the benefits and then subsequently realize those benefits following adoption.  This has brought me satisfaction.

But I don't do this in a vacuum.  The industry as a whole has followed the same trajectory, using the same logic.  I count myself just another advocate among a chorus of voices.  And so our profession has generally come to accept unit testing as a vital tool.

Widespread Acceptance of Automated Regression Tests

In fact, I might go so far as to call acceptance and adoption quite widespread.  This figure only increases if you include shops that totally mean to and will definitely get around to it like sometime in the next six months or something.  In other words, if you count both shops that have adopted the practice and shops that feel as though they should, acceptance figures certainly span a plurality.

Major enterprises bring me in to help them teach their developers to do it.  Still other companies consult and ask questions about it.  Just about everyone wants to understand how to realize the unit testing value proposition of higher quality, more stability, and fewer bugs.

This takes a simple form.  We talk about unit testing and other forms of testing, and sometimes this may blur the lines.  But let's get specific here.  A holistic testing strategy includes tests at a variety of granularities.  These comprise what some call "the test pyramid."  Unit tests address individual components (e.g. classes), while service tests drive at the way the components of your application work together.  GUI tests, the least granular of all, exercise the whole thing.

Taken together, these comprise your regression test suite.  It guards against the category of bugs known as "regressions" -- defects where something that used to work stops working.  For a parallel example in the "real world," think of the warning lights on your car's dashboard.  The "low battery" light comes on because the battery, which used to work, has stopped working.

Benefits of Automated Regression Test Suites

Why do this?  What benefits do automated regression test suites provide?  Well, let's take a look at some.

  • Repeatability and accuracy.  A human running tests over and over again may produce slight variances in the tests.  A machine, not so much.
  • Speed.  As with anything, automation produces a significant speedup over manual execution.
  • Fast feedback.  The automated test suite can tell you much more quickly if you have broken something.
  • Morale.  The fewer times a QA department comes back with "you broke this thing," the fewer opportunities for contentiousness.

I should also mention, as a brief aside, that I don't consider automated test suites to be acceptable substitutes for manual testing.  Rather, I believe the two efforts should work in complementary fashion.  If the automated test suite executes the humdrum tests in the codebase, it frees QA folks up to perform intelligent, exploratory testing.  As Uncle Bob once famously said, "it's wrong to turn humans into machines.  If you can write a script for a test procedure, then you can write a program to execute that procedure."

Automating Code Review

None of this probably comes as much of a shock to you.  If you go out and read tech blogs, you've no doubt encountered the widespread opinion that people should automate regression test suites.  In fact, you probably share that opinion.  So don't you wonder why we don't more frequently apply that logic to other concerns?

Take code review, for instance.  Most organizations do this in entirely manual fashion outside of, perhaps, a so-called "linting" tool.  They mandate automated test coverage and then content themselves with siccing their developers on one another in meetings to gripe over tabs, spaces, and camel casing.

Why not approach code review the same way?  Why not automate the aspects of it that lend themselves to automation, while saving human intervention for more conceptual matters?

Benefits of Automated Code Reviews

In a study by Steve McConnell, referenced in this blog post, "formal code inspections" produced better results for preemptively finding bugs than even automated regression tests.  So it stands to reason that we should invest in code review in the same ways that we invest in regression testing.  And I don't mean simply time spent, but in driving forward with automation and efficiency.

Consider the benefits I listed above for automated tests, and look how they apply to automated code review.

  • Repeatability and accuracy.  Humans will miss instances of substandard code if they feel tired -- machines won't.
  • Speed.  Do you want your code review to take seconds or hours/days?
  • Fast feedback.  Because of the increased speed of the review, the reviewee gets the results immediately after writing the code, for better learning.
  • Morale.  The exact same reasoning applies here.  Having a machine point out your mistakes can save contentiousness.

I think that we'll see a similar trajectory to automating code review that we did with automating test suites.  And, what's more, I think that automated code review will gain steam a lot more quickly and with less resistance.  After all, automating QA activities blazed a trail.

I believe the biggest barrier to adoption, in this case, is the lack of awareness.  People may not believe automating code review is possible.  But I assure you, you can do it.  So keep an eye out for ways to automate this important practice, and get in ahead of the adoption curve.

Related resources

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Monday, 30 January 2017 15:52:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, 23 January 2017

As a teenager, I remember having a passing interest in hacking.  Perhaps this came from watching the movie Sneakers.  Whatever the origin, the fancy passed quickly because I prefer building stuff to breaking other people's stuff.  Therefore, what I know about hacking pretty much stops at understanding terminology and high level concepts.

Consider the term "zero day exploit," for instance.  While I understand what this means, I have never once, in my life, sat on discovery of a software vulnerability for the purpose of using it somehow.  Usually when I discover a bug, I'm trying to deposit a check or something, and I care only about the inconvenience.  But I still understand the term.

"Zero day" refers to the amount of time the software vendor has to prepare for the vulnerability.  You see, the clever hacker gives no warning about the vulnerability before using it.  (This seems like common sense, though perhaps hackers with more derring do like to give them half a day to watch them scramble to release something before the hack takes effect.)  The time between announcement and reality is zero.

Increased Deployment Cadence

Let's co-opt the term "zero day" for a different purpose.  Imagine that we now use it to refer to software deployments.  By "zero day deployment," we thus mean "software deployed without any prior announcement."

blog-are-you-ready-for-zero-day-software-deployment

But why would anyone do this?  Don't you miss out on some great marketing opportunities?  And, more importantly, can you even release software this quickly?  Understanding comes from realizing that software deployment is undergoing a radical shift.

To understand this, think about software release cadences 20 years ago.  In the 90s, Internet Explorer won the first browser war because it managed to beat Netscape's plodding cadence of 3 years between releases.  With major software products, release cadences of a year or two dominated the landscape back then.

But that timeline has shrunk steadily.  For a highly visible example, consider Visual Studio.  In 2002, 2005, and 2008, Microsoft released versions corresponding to those years.  Then the cadence started to shrink with 2010, 2012, and 2013.  Now, the years no longer mark releases, per se, with Microsoft actually releasing major updates on a quarterly basis.

Zero Day Deployments

As much as going from "every 3 years" to "every 3 months" impresses, websites and SaaS vendors have shrunk it to "every day."  Consider Facebook's deployment cadence.  They roll minor updates every business day and major ones every week.

With this cadence, we truly reach zero day deployment.  You never hear Facebook announcing major upcoming releases.  In fact, you never hear Facebook announcing releases, period.  The first the world sees of a given Facebook release is when the release actually happens.  Truly, this means zero day releases.

Oh, don't get me wrong.  Rumors of upcoming features and capabilities circulate, and Facebook certainly has a robust marketing department.  But Facebook and companies with similar deployment approaches have impressively made deployments a non-event.  And others are looking to follow suit, perhaps yours included.

Conceptual Impediments to Zero Day Deployments

If what I just said made you spit your drink at the screen, I understand.  Perhaps your deployment and release process takes so long that the thought of shrinking it to a day made you laugh.  Or perhaps it terrified you.  Either way, I can understand that it may seem quite a leap.

You may conceive of Facebook and other practitioners as so alien to your own situation that you see no path from here to there.  But in reality, they almost certainly do the same things you do as part of your longer process -- just optimized and automated.

Impediments take a variety of forms.  You might have lengthy quality assurance and vetting processes, perhaps that require many iterations between the developers and quality assurance.  You might still be packaging software onto DVDs and shipping it to customers.  Perhaps you run all sorts of checks and analytics on it.  But all will fall under the general heading of requiring manual intervention or consuming a lot of time.

To get to zero day deployments, you need to automate and speed up considerably, and this can seem daunting.

What's Common Today

Some good news exists, though.  The same forces that let the Visual Studio team see such radical improvement push on software shops across the board.  We all have access to helpful technologies.

For instance, the overwhelming majority of organizations now have continuous integration via dedicated build machines.  Software developers commit code, and these machines scoop it up, compile it, and bundle it into a deployable package.  This activity now happens on the order of minutes whereas, in the past, I can remember shops where this was some poor guy's entire job, and he'd spend days on each build.

And, speaking of the CI server, a lot of them run automated test suites as part of what they do.  Most commonly, this means unit tests.  But they might also invoke acceptance tests and even more exotic things like smoke, GUI, and functionality tests.  You can thus accept commits, build the software, run a bunch of tests, and get it ready to deploy.

Of course, you can also automate the actual deployment as well.  It stands to reason that, if your build machine can ball it up into a deliverable, it can deliver that deliverable.  This might be harder with physical media involved, but as more software deliveries happen over networks, more of them get automated.

What We Need Next

With all of that in place, why don't we have more zero day deployments?  What's missing?

Again, discounting the problem of physical media, I'd say quality checks present the biggest issue.  We can compile, run automated tests, and deploy automatically.  But does this guarantee acceptable production behavior?

What about the important element of code reviews?  How do you assure that, even as automated tests pass, the application isn't piling up mountains of technical debt and impeding future deployments?  To get to zero day deployments, we must address these issues.

Don't get me wrong.  Other things matter here as well.  Zero day deployments require robust production checks and sophisticated "oops, that didn't work, rollback!" capabilities.  But I think that nothing will matter more than automated quality checks.

Each time you commit code, you need an intelligent analysis of that code that should fail the build as surely as failing tests if issues crop up.  In a zero day deployment context, you cannot afford best practice violations.  You cannot afford slipping quality, mounting technical debt, and you most certainly cannot afford code rot.  Today's rot in a zero day deployment scenario means tomorrow's inability to deploy that way.

Learn more about how CodeIt.Right can help you automate code reviews, improve your code quality, and reduce technical debt.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Monday, 23 January 2017 08:48:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Thursday, 12 January 2017

A little while back, I started a post series explaining some of the CodeIt.Right rules.  I led into the post with a narrative, which I won't retell.  But I will reiterate the two rules that I follow when it comes to static analysis tooling.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

Because I follow these two rules, I find myself researching every fix suggested to me by my tooling.  And, since I've gone to the trouble of doing so, I'll save you that same trouble by explaining some of those rules today.  Specifically, I'll examine three more CodeIt.Right rules and explain the rationale behind them.

Mark assemblies CLSCompliant

If you develop in .NET, you've no doubt run across this particular warning at some point in your career.  Before we get into the details, let's stop and define the acronyms.  "CLS" stands for "Common Language Specification," so the warning informs you that you need to mark your assemblies "Common Language Specification Compliant" (or non-compliant, if applicable).

Okay, but what does that mean?  Well, you can easily forget that many programming languages besides your language of choice target the .NET runtime.  CLS compliance indicates that any language targeting the runtime can use your assembly.  You can write language-specific code, incompatible with other framework languages.  CLS compliance means you haven't.

Want an example?  Let's say that you write C# code and that you decide to get cute.  You have a class with a "DoStuff" method, and you want to add a slight variation on it.  Because the new method adds improved functionality, you decide to call it "DOSTUFF" in all caps to indicate its awesomeness.  No problem, says the C# compiler.

And yet, if you try to do the same thing in Visual Basic, a case-insensitive language, you will encounter a compiler error.  You have written C# code that VB code cannot use.  Thus you have written non-CLS-compliant code.  The CodeIt.Right rule exists to inform you that you have not specified your assembly's compliance or non-compliance.
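
In code, the scenario just described looks something like this contrived sketch:

public class Worker
{
    public void DoStuff() { }

    // Legal C#, but this differs from DoStuff only by casing.  A
    // case-insensitive language like Visual Basic cannot tell the two
    // apart, so this member breaks CLS compliance.
    public void DOSTUFF() { }
}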

To fix, go specify.  Ideally, go into the project's AssemblyInfo.cs file and add the following to call it a day.

[assembly:CLSCompliant(true)]

But you can also specify non-compliance for the assembly to avoid a warning.  Of course, you can do better by marking the assembly compliant on the whole and then hunting down and flagging non-compliant methods with the attribute.

Specify IFormatProvider

Next up, consider a warning to "specify IFormatProvider."  When you encounter this for the first time, it might leave you scratching your head.  After all, "IFormatProvider" seems a bit... technician-like.  A more newbie-friendly name for this warning might have been, "you have a localization problem."

For example, consider a situation in which some external source supplies a date.  Except, it supplies the date as a string, and you have the task of converting it to a proper DateTime so that you can perform operations on it.  No problem, right?

var properDate = DateTime.Parse(inputString);

That should work, provided provincial concerns do not intervene.  For those of you in the US, "03/02/1995" corresponds to March 2nd, 1995.  Of course, should you live in Iraq, that date string would correspond to February 3rd, 1995.  Oops.

Consider a nightmare scenario wherein you write some code with this parsing mechanism.  Based in the US and with most of your customers in the US, this works for years.  Eventually, though, your sales group starts making inroads elsewhere.  Years after the fact, you wind up with a strange bug in code you haven't touched for years.  Yikes.

By specifying a format provider, you can avoid this scenario.
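
The remedy looks something like the following sketch, which assumes you know the external source always sends US-formatted dates:

using System;
using System.Globalization;

public static class DateParsing
{
    public static DateTime ParseSupplierDate(string inputString)
    {
        // State the assumed format explicitly instead of inheriting
        // whatever culture the host machine happens to run under.
        return DateTime.Parse(inputString, CultureInfo.GetCultureInfo("en-US"));
    }
}

Now "03/02/1995" parses as March 2nd no matter where the code happens to run.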

Nested types should not be visible

Unlike the previous rule, this one's name suffices for description.  If you declare a type within another type (say a class within a class), you should not make the nested type visible outside of the outer type.  So, the following code triggers the warning.

public class Outer
{
    public class Nested
    {

    }
}

To understand the issue here, consider the object oriented principle of encapsulation.  In short, hiding implementation details from outsiders gives you more freedom to vary those details later, at your discretion.  This thinking drives the rote instinct for OOP programmers to declare private fields and expose them via public accessors/mutators/properties.

To some degree, the same reasoning applies here.  If you declare a class or struct inside of another one, then presumably only the containing type needs the nested one.  In that case, why make it public?  On the other hand, if another type does, in fact, need the nested one, why scope it within a parent type and not just the same namespace?

You may have some reason for doing this -- something specific to your code and your implementation.  But understand that this is weird, and will tend to create awkward, hard-to-discover code.  For this reason, your static analysis tool flags your code.
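
Assuming no such reason applies, the two remedies the rule nudges you toward look like this:

public class Outer
{
    // Option 1: only Outer needs this type, so hide it.
    private class Helper
    {
    }
}

// Option 2: other types need it, so it lives at namespace scope
// as a sibling of Outer rather than nested inside it.
public class SharedHelper
{
}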

Until Next Time

As I said last time, you can extract a ton of value from understanding code analysis rules.  This goes beyond just understanding your tooling and accepted best practice.  Specifically, it gets you in the habit of researching and understanding your code and applications at a deep, philosophical level.

In this post alone, we've discussed language interoperability, geographic maintenance concerns, and object oriented design.  You can, all too easily, dismiss analysis rules as perfectionism.  They aren't; they have very real, very important applications.

Stay tuned for more posts in this series, aimed at helping you understand your tooling.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Thursday, 12 January 2017 10:32:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 03 January 2017

Last month, I wrote a post introducing you to T4 templates.  Near the end, I included a mention of GhostDoc's use of T4 templates in automatically generating code comments.  Today, I'd like to expand on that.

To recap very briefly, recall that GhostDoc allows you to generate things like method header comments.  I recommend that, in most cases, you let it do its thing.  It does a good job.  But sometimes, you might have occasion to want to tweak the result.  And you can do that by making use of T4 templates.

Documenting Chess TDD

To demonstrate, let's revisit my trusty toy code base, Chess TDD.  Because I put this code together for instructional purposes and not to release as a product, it has no method header comments for IntelliSense's benefit.  This makes it the perfect candidate for a demonstration.

If I had released this as a library, I'd have started the documentation with the Board class.  Most of the client interaction would happen via Board, so let's document that.  It offers you a constructor and a bunch of semantics around placing and moving pieces.  Let's document the conceptually simple MovePiece method.

public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);

    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;

    ReconcileEnPassant(origin, destination, pieceToMove);
}

To add documentation to this method, I simply right click it and, from the GhostDoc context menu, select "Document This."  Alternatively, I can use the keyboard shortcut Ctrl-Shift-D.  Either option yields the following result.

/// <summary>
/// Moves the piece.
/// </summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);

    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;

    ReconcileEnPassant(origin, destination, pieceToMove);
}

Let's Make a Tiny Tweak

Alright, much better!  If I scrutinize the comment, I can imagine what an IntelliSense-using client will see.  My parameter naming makes this conceptually simple to understand, so the IntelliSense will tell the user that the first parameter represents the origin square and the second parameter the destination.

But let's say that as I look at this, I find myself wanting to pick at a nit.  I don't care for the summary taking up three lines -- I want to condense it to one.  How might I do that?

Well, let's crack open the T4 template for generating a method header.  Recall that you do this in Visual Studio by selecting Tools->GhostDoc->Options, and picking "Rules" from the options pane.

blog-intro-to-t4-templates-part2-1

If you double click on "Method Template", as highlighted above, you will see an "Edit Rule" Window.  The first few lines of code in that window look like this.

<#@ template language="C#" #>
<#  CodeElement codeElement = Context.CurrentCodeElement; #>
/// <summary>
///<# GenerateSummaryText(); #>
/// </summary>
<#    if(codeElement.HasTypeParameters) 
    {
        for(int i = 0; i < codeElement.TypeParameters.Length; i++) 
        { 
            TypeParameter typeParameter = codeElement.TypeParameters[i]; 
#>

Hmmm.  I cannot count myself an expert in T4 templates, per se, but I think I have an idea.  Let's put that call to GenerateSummaryText() inline between the summary tags.  Like this:

<#@ template language="C#" #>
<#  CodeElement codeElement = Context.CurrentCodeElement; #>
/// <summary><# GenerateSummaryText(); #></summary>

That should do it, right?  Let's regenerate the comment and see what it looks like.  This results in the following.

/// <summary>Moves the piece.
/// </summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);

    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;

    ReconcileEnPassant(origin, destination, pieceToMove);
}

Uh, oh.  It made a difference, but somehow we only got halfway there.  Why might that be?

Diving Deeper

To understand, we need to look at the template in a bit more detail.  The template itself has everything on one line, and yet we see a newline in there somehow.  Could GenerateSummaryText cause this, somehow?  Let's scroll down to look at it.  Since this method has a lot of code, here are the first few lines only.

private void GenerateSummaryText()
{
    if(Context.HasExistingTagText("summary"))
    {
        this.WriteLine(Context.GetExistingTagText("summary"));
    }
    else if(IsAsyncMethod())
    {
        this.WriteLine(Context.ExecMacro("$(MethodName.Words.ExceptLast)") + " as an asynchronous operation.");
    }
    else if(IsMainMethod())
    {
        this.WriteLine("Defines the entry point of the application.");        
    }
}

Aha!  Notice that we're calling WriteLine.  What if we did a find and replace to change all of those to just Write?  Let's try.  (For more serious operations like this, you will want to copy the text out of the rule editor and into your favorite text editor, where you have find and replace and other conveniences at your disposal.)
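
If the hypothesis holds, those first few lines of the method would end up looking like this after the replacement:

private void GenerateSummaryText()
{
    if(Context.HasExistingTagText("summary"))
    {
        this.Write(Context.GetExistingTagText("summary"));
    }
    else if(IsAsyncMethod())
    {
        this.Write(Context.ExecMacro("$(MethodName.Words.ExceptLast)") + " as an asynchronous operation.");
    }
    else if(IsMainMethod())
    {
        this.Write("Defines the entry point of the application.");
    }
}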

Once you have replaced all instances of WriteLine with Write in the template, here is the new result.

/// <summary>Moves the piece.</summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);

    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;

    ReconcileEnPassant(origin, destination, pieceToMove);
}

Success!

Validation

As you play with this, you might have noticed a "Validate" button in the rule editor.  Use this liberally!  This button will trigger a parsing of the template and provide you with feedback as to validity.  The last thing you want to do is work in here for many iterations and wind up with no idea what you broke and when.

When working with these templates, think of this as equivalent to compiling.  You wouldn't want to sit for 20 minutes writing code with no feedback as to whether it builds or not.  So don't do it with these templates.

The Power at Your Disposal

I'll wrap here for this particular lesson, but understand that we have barely scratched the surface of what you can do.  In this post, we just changed a bit of the formatting to suit a whim I had.  But you can really dive into ways of reasoning about and documenting the code if you so choose.

Stay tuned for future posts on more advanced tips and tricks with your comment templates.

Learn more about how GhostDoc can help simplify your XML Comments and produce and maintain quality help documentation.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 03 January 2017 10:47:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, 26 December 2016

CodeIt.Right v3.0 is here – the new major version of our automated code review and code quality analysis product. Here are the v3.0 new feature highlights:

  • VS2017 RC integration
  • Official support for VS2015 Update 3 and ASP.NET 5/ASP.NET Core 1.0 solutions
  • Solution filtering by date, source control status and file patterns
  • Summary report view - provides a summary view of the analysis results and metrics, customizable to your needs
  • New Review Code commands – review opened files and review checked out files
  • Improved Profile Editor with advanced rule search and filtering
  • Improved look and feel for Violations Report and Editor violation markers
  • Setting to keep the OnDemand and Instant Review profiles in sync
  • New Jenkins integration plugin
  • Batch correction is now turned off by default
  • Almost every CodeIt.Right action can now be assigned a keyboard shortcut
  • New rules

For the complete and detailed list of the v3.0 changes, see What's New in CodeIt.Right v3.0


Solution Filtering

The solution filtering feature allows you to narrow the code review scope using the following options:

  • Analyze files modified Today/This Week/Last 2 Weeks/This Month – so you can set the relative date once and not have to change the date every day
  • Analyze files modified since specific date
  • Analyze files opened in Visual Studio tabs
  • Analyze files checked out from the source control
  • Analyze only specific files – only include the files that match a list of file patterns like *Core*.cs or Modules\*. See this KB post for file path pattern details and examples.

cir-v3-solution-filtering

New Review Code commands

We have changed the Start Analysis menu to Review Code – it is still the same feature; the new name simply highlights the automated code review nature of the product. We have also added the following Review Code commands:

  • Analyze Open Files menu - analyze only the files opened in Visual Studio tabs
  • Analyze Checked Out Files menu - analyze only the files that are checked out from source control

cir-v3-profile-filter

Improved Profile Editor

The Profile Editor now features:

  • Advanced rule filtering by rule id, title, name, severity, scope, target, and programming language
  • An option to quickly show only active, only inactive, or all rules in the profile
  • Totals for the profile rules - total, active, and filtered
  • Improved support for adding rules with multiple categories


Summary Report

The Summary Report tab provides an overview of the analyzed source code quality. It includes a high-level summary of the current analysis information, filters, violation summary, top N violations, solution info, and metrics. Additionally, it provides a detailed list of violations and excludes.

The report is self-contained – no external dependencies, everything it requires is included within the html file. This makes it very easy to email the report to someone or publish it on the team portal – see example.

cir-v3-summary-report-part

The Summary Report is based on ASP.NET Razor markup within the Summary.cshtml template. This makes it very easy for you to customize it to your needs.

You will find the summary report API documentation in the help file – CodeIt.Right –> Help & Support –> Help –> Summary Report API.

cir-v3-summary-source


How do I try it?

Download the v3.0 at http://submain.com/download/codeit.right/

Feedback is what keeps us going!

Let us know what you think of the new version here - http://submain.com/support/feedback/


Note to the CodeIt.Right v2 users: The v2.x license codes won't work with v3.0. For users with an active Software Assurance subscription, we have sent out the v3.x license codes. If you have not received or have misplaced your new license, you can retrieve it on the My Account page. Users with an expired Software Assurance subscription will need to purchase the new version - currently we are not offering an upgrade path other than the Software Assurance subscription. For information about the upgrade protection, see our Software Assurance and Support - Renewal / Reinstatement Terms

posted on Monday, 26 December 2016 09:12:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 29 November 2016

I've heard tell of a social experiment conducted with monkeys.  It may or may not be apocryphal, but it illustrates an interesting point.  So, here goes.

Primates and Conformity

A group of monkeys inhabited a large enclosure, which included a platform in the middle, accessible by a ladder.  For the experiment, their keepers set a banana on the platform, but with a catch.  Anytime a monkey would climb to the platform, the action would trigger a mechanism that sprayed the entire cage with freezing cold water.

The smarter monkeys quickly figured out the correlation and actively sought to prevent their cohorts from triggering the spray.  Anytime a monkey attempted to climb the ladder, they would stop it and beat it up a bit by way of teaching a lesson.  But the experiment wasn't finished.

Once the behavior had been established, they began swapping out monkeys.  When a newcomer arrived on the scene, he would go for the banana, not knowing the social rules of the cage.  The monkeys would quickly teach him, though.  This continued until they had rotated out all original monkeys.  The monkeys in the cage would beat up the newcomers even though they had never experienced the actual negative consequences.

Now before you think to yourself, "stupid monkeys," ask yourself how much better you'd fare.  This video shows that humans have the same instincts as our primate cousins.

Static Analysis and Conformity

You might find yourself wondering why I told you this story.  What does it have to do with software tooling and static analysis?

Well, I find that teams tend to exhibit two common anti-patterns when it comes to static analysis.  Most prominently, they tune out warnings without due diligence.  After that, I most frequently see them blindly implement the suggestions.

I tend to follow two rules when it comes to my interaction with static analysis tooling.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

You syllogism buffs out there have, no doubt, condensed this to a single rule.  Anytime you encounter a suggested fix you don't understand, go learn about it.

Once you understand it, you can implement the fix or ignore the suggestion with eyes wide open.  In software design/architecture, we deal with few clear cut rules and endless trade-offs.  But you can't speak intelligently about the trade-offs without knowing the theory behind them.

Toward that end, I'd like to facilitate that warning for some CodeIt.Right rules today.  Hopefully this helps you leverage your tooling to its full benefit.

Abstract types should not have public constructors

First up, consider the idea of abstract types with public constructors.

public abstract class Shape
{
    protected ConsoleColor _color;

    public Shape(ConsoleColor color)
    {
        _color = color;
    }
}

public class Square : Shape
{
    public int SideLength { get; set; }
    public Square(ConsoleColor color) : base(color) { }

}

CodeIt.Right will ding you for making the Shape constructor public (or internal -- it wants protected).  But why?

Well, you'll quickly discover that CodeIt.Right has good company in the form of the .NET Framework guidelines and FxCop rules.  But that just shifts the discussion without solving the problem.  Why does everyone seem not to like this code?

First, understand that you cannot instantiate Shape, by design.  The "abstract" designation effectively communicates Shape's incompleteness.  It's more of a template than a finished class in that creating a Shape makes no sense without the added specificity of a derived type, like Square.

So the only way classes outside of the inheritance hierarchy can interact with Shape is indirectly, via Square.  They create Squares, and those Squares decide how to go about interacting with Shape.  Don't believe me?  Try getting around this.  Try creating a Shape in code or try deleting Square's constructor and calling new Square(color).  Neither will compile.

Thus, when you make Shape's constructor public or internal, you invite users of your inheritance hierarchy to do something impossible.  You engage in false advertising and you confuse them.  CodeIt.Right is helping you avoid this mistake.
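
The fix costs almost nothing.  Here is the same Shape, sketched with the constructor made protected -- derived types like Square can still call it, while everyone else gets a compiler error:

public abstract class Shape
{
    protected ConsoleColor _color;

    // Protected: only derived types, such as Square, may invoke this.
    protected Shape(ConsoleColor color)
    {
        _color = color;
    }
}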

Do not catch generic exception types

Next up, let's consider the wisdom, "do not catch generic exception types."  To see what that looks like, consider the following code.

public bool MergeUsers(int user1Id, int user2Id)
{
    try
    {
        var user1 = _userRepo.Get(user1Id);
        var user2 = _userRepo.Get(user2Id);
        user1.MergeWith(user2);
        _userRepo.Save(user1);
        _userRepo.Delete(user2);
        return true;
    }
    catch(Exception ex)
    {
        _logger.Log($"Exception {ex.Message} occurred.");
        return false;
    }
}

Here we have a method that merges two users together, given their IDs.  It accomplishes this by fetching them from some persistence ignorance scheme, invoking a merge operation, saving the merged one and deleting the vestigial one.  Oh, and it wraps the whole thing in a try block, and then logs and returns false should anything fail.

And, by anything, I mean absolutely anything.  Business rules make merge impossible?  Log and return false.  Server out of memory?  Log it and return false.  Server hit by lightning and user data inaccessible?  Log it and return false.

With this approach, you encounter two categories of problem.  First, you fail to reason about or distinguish among the different things that might go wrong.  And, secondly, you risk overstepping what you're equipped to handle here.  Do you really want to handle fatal system exceptions right smack in the heart of the MergeUsers business logic?

You may encounter circumstances where you want to handle everything, but probably not as frequently as you think.  Instead of defaulting to this catch all, go through the exercise of reasoning about what could go wrong here and what you want to handle.
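
To sketch what that reasoning might produce, suppose the repository threw a hypothetical UserNotFoundException for missing IDs and the domain threw a hypothetical MergeViolationException for business rule failures.  You could then handle exactly the cases you understand and let everything else bubble up:

public bool MergeUsers(int user1Id, int user2Id)
{
    try
    {
        var user1 = _userRepo.Get(user1Id);
        var user2 = _userRepo.Get(user2Id);
        user1.MergeWith(user2);
        _userRepo.Save(user1);
        _userRepo.Delete(user2);
        return true;
    }
    catch (UserNotFoundException ex) // hypothetical exception type
    {
        _logger.Log($"Could not find a user to merge: {ex.Message}");
        return false;
    }
    catch (MergeViolationException ex) // hypothetical exception type
    {
        _logger.Log($"Merge not allowed: {ex.Message}");
        return false;
    }
    // Anything else -- out of memory, lightning strike -- bubbles up to
    // something better equipped to deal with it.
}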

Avoid language specific type names in parameters

If you see this violation, you probably have code that resembles the following.  (Though, hopefully, you wouldn't write this actual method.)

public int Add(int xInt, int yInt)
{
    return xInt + yInt;
}

CodeIt.Right does not like the name "int" in the parameters and this reflects a .NET Framework guideline.

Here, we find something a single language developer may not stop to consider.  Specifically, not all languages that target the .NET framework use the same type name conveniences.  You say "int" and a VB developer says "Integer."  So if a VB developer invokes your method from a library, she may find this confusing.

That said, I would like to take this one step further and advise that you avoid baking types into your parameter/variable names in general.  Want to know why?  Let's consider a likely outcome of some project manager coming along and saying, "we want to expand the add method to be able to handle really big numbers."  Oh, well, simple enough!

public long Add(long xInt, long yInt)
{
    return xInt + yInt;
}

You just needed to change the datatypes to long, and voilà!  Everything went perfectly until someone asked you at code review why you have a long called "xInt."  Oops.  You totally didn't even think about the variable names.  You'll be more careful next time.  Well, I'd advise avoiding "next time" completely by getting out of this naming habit.  The IDE can tell you the type of a variable -- don't encode it into the name redundantly.
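
Had the parameters carried type-agnostic names from the start, the project manager's request would have produced no awkward code review moment -- a sketch:

public long Add(long x, long y)
{
    // No type baked into the names, so widening from int to long
    // touched nothing but the signature.
    return x + y;
}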

Until Next Time

As I said in the introductory part of the post, I believe huge value exists in understanding code analysis rules.  You make better decisions, have better conversations, and get more mileage out of the tooling.  In general, this understanding makes you a better developer.  So I plan to continue with these explanatory posts from time to time.  Stay tuned!

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 29 November 2016 09:55:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 22 November 2016

Today, I'd like to tackle a subject that inspires ambivalence in me.  Specifically, I mean the subject of automated text generation (including a common, specific flavor: code generation).

If you haven't encountered this before, consider a common example.  When you file->new->(console) project, Visual Studio generates a Program.cs file.  This file contains standard includes, a program class, and a public static void method called "Main."  Conceptually, you just triggered text (and code) generation.

Many schemes exist for doing this.  Really, you just need a templating scheme and some kind of processing engine to make it happen.  Think of ASP.NET MVC, for instance.  You write markup sprinkled with interpreted variables (i.e. Razor), and the view engine processes that and spits out pure HTML to return as the response.  PHP and other server side scripting constructs operate this way and so do code/text generators.

However, I'd like to narrow the focus to a specific case: T4 templates.  You can use this powerful construct to generate all manner of text.  But use discretion, because you can also use this powerful construct to make a huge mess.  I wrote a post about the potential perils some years back, but suffice it to say that you should take care not to automate and speed up copy and paste programming.  Make sure your case for use makes sense.

The Very Basics

With the obligatory disclaimer out of the way, let's get down to brass tacks.  I'll offer a lightning fast getting started primer.

Open some kind of playpen project in Visual Studio, and add a new item.  You can find the item in question under the "General" heading as "Text Template."

blog-intro-to-t4-templates-part1-1

Give it a name.  For instance, I called mine "sample" while writing this post.  Once you do that, you will see it show up in the root directory of your project as Sample.tt.  Here is the text that it contains.

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>

Save this file.  When you do so, Visual Studio will prompt you with a message about potentially harming your computer, so something must be happening behind the scenes, right?  Indeed, something has happened.  You have generated the output of the T4 generation process.  And you can see it by expanding the caret next to your Sample.tt file as shown here.

blog-intro-to-t4-templates-part1-2

If you open the Sample.txt file, however, you will find it empty.  That's because we haven't done anything interesting yet.  Add a new line with the text "hello world" to the bottom of the Sample.tt file and then save.  (And feel free to get rid of that message about harming your computer by opting out, if you want).  You will now see a new Sample.txt file containing the words "hello world."
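
For reference, the whole of Sample.tt would now look something like this:

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>
hello world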

Beyond the Trivial

While you might find it satisfying to get going, what we've done so far could be accomplished with file copy.  Let's take advantage of T4 templating in earnest.  First up, observe what happens when you change the output extension.  Make it something like .blah and observe that saving results in Sample.blah.  As you can see, there's more going on than simple text duplication.  But let's do something more interesting.

Update your Sample.tt file to contain the following text and then click save.

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>
<#
for(int i = 0; i < 10; i++)
    WriteLine($"Hello World {i}");
#>

When you open Sample.txt, you will see the following.

Hello World 0
Hello World 1
Hello World 2
Hello World 3
Hello World 4
Hello World 5
Hello World 6
Hello World 7
Hello World 8
Hello World 9

Pretty neat, huh?  You've used the <# #> tokens to surround first class C# that you can use to generate text.  I imagine you can see the potential here.

Oh, and what happens when you type malformed C#?  Remove the semicolon and see for yourself.  Yes, Visual Studio offers you feedback about bad T4 template files.

Use Cases

I'll stop here with the T4 tutorial.  After all, I aimed only to provide an introduction.  And I think that part of any true introduction involves explaining where and how the subject might prove useful to readers.  So where do people reasonably use these things?

Perhaps the most common usage scenario pertains to ORMs and the so-called impedance mismatch problem.  People create code generation schemes that examine databases and spit out source code that matches with them.  This approach spares the significant performance hit of some kind of runtime scheme for figuring this out, but without forcing tedious typing on dev teams.  Entity Framework makes use of T4 templates.

I have seen other uses as well, however.  Perhaps your organization puts involved XML configuration files into any new projects and you want to generate these without copy and paste.  Or, perhaps you need to replace an expensive reflection/runtime scheme for performance reasons.  Maybe you have a good bit of layering boilerplate and object mapping to do.  Really, the sky is the limit here, but always bear in mind the caveat that I offered at the beginning of this post.  Take care not to let code/text generation be a crutch for cranking out anti-patterns more rapidly.

The GhostDoc Use Case

I will close by offering a tie-in with the GhostDoc offering as the final use case.  If you use GhostDoc to generate comments for methods and types in your codebase, you should know that you can customize the default generations using T4 templates.  (As an aside, I consider this a perfect use case for templating -- a software vendor offering a product to developers that assists them with writing code.)

If you open GhostDoc's options pane and navigate to "Rules" you will see the following screen.  Double clicking any of the templates will give you the option to edit them, customizing as you see fit.

blog-intro-to-t4-templates-part1-3

You can thus do simple things, like adding some copyright boilerplate, for instance.  Or you could really dive into the weeds of the commenting engine to customize to your heart's content (be careful here, though).  You can exert a great deal of control.
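
For the simple end of that spectrum, here is a sketch of what copyright boilerplate might look like, prepended to the opening lines of the method header template (the tag choice and company name are placeholders of my own):

<#@ template language="C#" #>
<#  CodeElement codeElement = Context.CurrentCodeElement; #>
/// <copyright>Copyright (c) Example Corp. All rights reserved.</copyright>
/// <summary>
///<# GenerateSummaryText(); #>
/// </summary>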

T4 templates offer you power and can make your life easier when used judiciously.  They're definitely a tool worth having in your tool belt.  And, if you make use of GhostDoc, this is doubly true.

Learn more about how GhostDoc can help simplify your XML Comments and produce and maintain quality help documentation.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 22 November 2016 09:23:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, 21 November 2016

Version 5.4 of GhostDoc is a maintenance update for v5.0 users:

  • VS2017 RC integration
  • New menu items - Getting Started Tutorial and Tutorials and Resources
  • (Pro) (Ent) Edit buttons in Options - Solution Ignore List and Options - Spelling Ignore List
  • (Pro) (Ent) Test button in Options - Solution Ignore List
  • (Ent) GhostDoc now shows an error message when the Conceptual Content path in the solution configuration file is invalid
  • Fixed a PathTooLongException when generating a preview or building the help file for C++ projects
  • (Ent) Updated ClassLibrary1.zip, moved all conceptual content files inside the project in GhostDoc Enterprise\Samples\Conceptual Content\
  • Improved documenting ReadOnly auto-properties in VB
  • Resolved an issue with re-documenting a type at the top of a source code file in VB
  • Resolved an issue with generating a preview of the tag for generics in VB

For the complete list of changes, please see What's New in GhostDoc v5

For overview of the v5.0 features, visit Overview of GhostDoc v5.0 Features

Download the new build at http://submain.com/download/ghostdoc/

posted on Monday, 21 November 2016 09:15:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Thursday, 17 November 2016

We have just made available the Release Candidate of CodeIt.Right v3.0. Here are the new feature highlights:

  • VS2017 RC integration
  • Solution filtering by date, source control status and file patterns
  • Summary report view (announced as the Dashboard in the Beta preview) - provides a summary view of the analysis results and metrics, customizable to your needs

These features were announced as part of our recent v3 Beta:

  • Official support for VS2015 Update 2 and ASP.NET 5/ASP.NET Core 1.0 solutions
  • New Review Code commands:
    • only opened files
    • only checked out files
    • only files modified after specific date
  • Improved Profile Editor with advanced rule search and filtering
  • Improved look and feel for Violations Report and Editor violation markers
  • New rules
  • Setting to keep the OnDemand and Instant Review profiles in sync
  • New Jenkins integration plugin
  • Batch correction is now turned off by default
  • Almost every CodeIt.Right action can now be assigned a keyboard shortcut

For the Beta changes and screenshots, please see Overview of CodeIt.Right v3.0 Beta Features

For the complete and detailed list of the v3.0 changes, see What's New in CodeIt.Right v3.0

To give the v3.0 Release Candidate a try, download it here - http://submain.com/download/codeit.right/beta/


Solution Filtering

In addition to the solution filtering by files modified since a specific date, open files, and checked out files available in the Beta, we are introducing a few more options:

  • Analyze files modified Today/This Week/Last 2 Weeks/This Month – so you can set the relative date once and not have to change the date every day
  • Analyze only specific files – only include the files that match a list of file patterns like *Core*.cs or Modules\*. See this KB post for file path pattern details and examples.

cir-v3-solution-filtering

Summary Report

The Summary Report tab provides an overview of the analyzed source code quality. It includes a high-level summary of the current analysis information, filters, violation summary, top N violations, solution info, and metrics. Additionally, it provides a detailed list of violations and excludes.

The report is self-contained – no external dependencies, everything it requires is included within the html file. This makes it very easy to email the report to someone or publish it on the team portal – see example.

cir-v3-summary-report-part

The Summary Report is based on ASP.NET Razor markup within the Summary.cshtml template. This makes it very easy for you to customize it to your needs.

You will find the summary report API documentation in the help file – CodeIt.Right –> Help & Support –> Help –> Summary Report API.

cir-v3-summary-source


Feedback

We would love to hear your feedback on the new features! Please email it to us at support@submain.com or post in the CodeIt.Right Forum.

posted on Thursday, 17 November 2016 08:55:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Wednesday, 16 November 2016

We are looking for your input and we're willing to bribe you for answering one very simple question: What are your biggest code documentation challenges right now?

The survey is super-quick and we're offering a $20 discount code for your time (good with any new SubMain product license purchase) that you will automatically receive once you complete the survey as our thank you.

Take the Survey

We'd also appreciate it if you'd help us out by tweeting about this using the link Share on Twitter or otherwise letting folks know we're interested in their code documentation challenges.

Thanks for your help!

posted on Wednesday, 16 November 2016 09:23:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Saturday, 05 November 2016
blog-so-you’ve-inherited-a-legacy-codebase

During my younger days, I worked for a company that made a habit of strategic acquisition.  They didn't participate in Time Warner style mergers, but periodically they would purchase a smaller competitor or a related product.  And on more than one occasion, I inherited the lead role in assimilating software from one of these organizations.  Lucky me, right?

If I think in terms of how to describe this to someone, a plumbing analogy comes to mind.  Over the years, I have learned enough about plumbing to handle most tasks myself.  And this has exposed me to the irony of discovering a small leak in a fitting plugged by grit or debris.  I find this ironic because two wrongs make a right.  A dirty, leaky fitting reaches sub-optimal equilibrium, and you spring a leak when you clean it.

Legacy codebases have this issue as well.  You inherit some acquired codebase, fix a tiny bug, and suddenly the defect floodgates open.  And then you realize the perilousness of your situation.

While you might not have come by it in the same way that I did, I imagine you can relate.  At some point or another, just about every developer has been thrust into supporting some creaky codebase.  How should you handle this?

Put Your Outrage in Check

First, take some deep breaths.  Seriously, I mean it.  As software developers, we seem to hate code written by others.  In fact, we seem to hate our own code if we wrote it more than a few months ago.  So when you see the legacy codebase for the first time, you will feel a natural bias toward disgust.

But don't indulge it.  Don't sit there cursing the people that wrote the code, and don't take screenshots to send to the Daily WTF.  Not only will it do you no good, but I'd go so far as to say that this is actively counterproductive.  Deciding that the code offers nothing worth salvaging makes you less inclined to try to understand it.

The people that wrote this code dealt with older languages, older tooling, older frameworks, and generally less knowledge than we have today.  And besides, you don't know what constraints they faced.  Perhaps bosses heaped delivery pressure on them like crazy.  Perhaps someone forced them to convert to writing in a new, unfamiliar language.  Whatever the case may be, you simply didn't walk in their shoes.  So take a breath, assume they did their best, and try to understand what you have under the hood.

Get a Visualization of the Architecture

Once you've settled in mentally for this responsibility, seek to understand quickly.  You won't achieve this by cracking open the code and looking through random source files.  But, beyond that, you also won't achieve it by looking at their architecture documents or folder structures.  Reality gets out of sync with intention, and those things start to lie.  You need to see the big picture, but in a way that lines up with reality.

Look for tools that map dependencies and can generate a visual of the codebase.  Plenty of these tools exist for you and can automate visual depictions.  Find one and employ it.  This will tell you whether the architecture resembles the neat diagram given to you or not.  And, more importantly, it will get you to a broad understanding much more quickly.

Characterize

Once you have the picture you need of the codebase and the right frame of mind, you can start doing things to it.  And the first thing you should do is to start writing characterization tests.

If you have not heard of them before, characterization tests have the purpose of, well, characterizing the codebase.  You don't worry about correct or incorrect behaviors.  Instead, you accept at face value what the code does, and document those behaviors with tests.  You do this because you want to get a safety net in place that tells you when your changes affect inputs and outputs.

As this XKCD cartoon ably demonstrates, someone will come to depend on the application's production behavior, however problematic.  So with legacy code, you cannot simply decide to improve a behavior and assume your users will thank you.  You need to exercise caution.

But characterization tests do more than just provide a safety net.  As an exercise, they help you develop a deeper understanding of the codebase.  If the architectural visualization gives you a skeleton understanding, this starts to put meat on the bones.
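
To make that concrete, here is a minimal sketch of a characterization test, using xUnit and an entirely hypothetical legacy class:

using Xunit;

// Stands in for whatever creaky class you actually inherited.
public class PriceCalculator
{
    public decimal CalculateTotal(decimal unitPrice, int quantity)
    {
        // Legacy quirk: zero quantity yields a sentinel value instead of zero.
        return quantity == 0 ? -1m : unitPrice * quantity;
    }
}

public class PriceCalculatorCharacterizationTests
{
    [Fact]
    public void CalculateTotal_WithZeroQuantity_ReturnsNegativeOne()
    {
        // We might wish this returned 0, but the test documents what the
        // code actually does, because someone may depend on that behavior.
        var calculator = new PriceCalculator();

        var total = calculator.CalculateTotal(unitPrice: 9.99m, quantity: 0);

        Assert.Equal(-1m, total);
    }
}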

Isolate Problems

With a reliable safety net in place, you can begin making strategic changes to the production code beyond simple break/fix.  I recommend that you start by finding and isolating problematic chunks of code.  In essence, this means identifying sources of technical debt and looking to improve, gradually.

This can mean pockets of global state or extreme complexity that make for risky change.  But it might also mean dependencies on outdated libraries, frameworks, or APIs.  In order to extricate yourself from such messes, you must start to isolate them from business logic and important plumbing code.  Once you have it isolated, fixes will come more easily.

Evolve Toward Modernity

Once you've isolated problematic areas and archaic dependencies, it certainly seems logical to subsequently eliminate them.  And, I suggest you do just that as a general rule.  Of course, sometimes isolating them gives you enough of a win since it helps you mitigate risk.  But I would consider this the exception and not the rule.  You want to remove problem areas.

I do not say this idly nor do I say it because I have some kind of early adopter drive for the latest and greatest.  Rather, being stuck with old tooling and infrastructure prevents you from taking advantage of modern efficiencies and gains.  When some old library prevents you from upgrading to a more modern language version, you wind up writing more, less efficient code.  Being stuck in the past will cost you money.

The Fate of the Codebase

As you get comfortable and take ownership of the legacy codebase, never stop contemplating its fate.  Clearly, in the beginning, someone decided that the application's value outweighed its liability factor, but that may not always continue to be true.  Keep your finger on the pulse of the codebase, while considering options like migration, retirement, evolution, and major rework.

And, finally, remember that taking over a legacy codebase need not be onerous.  As initially shocked as I found myself by the state of some of those acquisitions, some of them turned into rewarding projects for me.  You can derive a certain satisfaction from taking over a chaotic situation and gradually steering it toward sanity.  So if you find yourself thrown into this situation, smile, roll up your sleeves, own it, and make the best of it.

Related resources

Tools at your disposal

SubMain offers CodeIt.Right, which easily integrates into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Learn more about how CodeIt.Right can identify technical debt, document it, and gradually improve legacy code.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Saturday, 05 November 2016 10:43:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 25 October 2016

If you spend enough years writing software, sooner or later, your chosen vocation will force you into reverse engineering.  Some weird API method with an inscrutable name will stymie you.  And you'll have to plug in random inputs and examine the outputs to figure out what it does.

blog-elements-of-helpful-code-documentation

Clearly, this wastes your time.  Even if you enjoy the detective work, you can't argue that an employer or client would view this as efficient.  Library and API code should not require you to launch a mystery investigation to determine what it does.

Instead, such code should come with appropriate documentation.  This documentation should move your focus from wondering what the code does to contemplating how best to leverage it.  It should make your life easier.

But what constitutes appropriate documentation?  What particular characteristics does it have?  In this post, I'd like to lay out some elements of helpful code documentation.

Elements of Style

Before moving on to what the documentation should contain, I will speak first about its stylistic properties.  After all, poorly written documentation can tank understanding, even if it theoretically contains everything it should.  If you're going to write it, make it good.

Now don't get me wrong -- I'm not suggesting you should invest enough time to make it a literary masterpiece.  Instead, focus on three primary characteristics of good writing: clarity, correctness, and precision.  You want to make sure that readers understand exactly what you're talking about.  And, obviously, you cannot get anything wrong.

The importance of this goes beyond just the particular method in question.  It affects your entire credibility with your userbase.  If you confuse them with ambiguity or, worse, get something wrong, they will start to mistrust you.  The documentation becomes useless to them and your reputation suffers.

Examples

Once you've gotten your house in order with stylistic concerns in the documentation, you can decide on what to include.  First up, I cannot overstate the importance of including examples.

Whether you find yourself documenting a class, a method, a web service call, or anything else, provide examples.  Show the users the code in action and let them apply their pattern matching and deduction skills.  In case you hadn't noticed, programmers tend to have these in spades.

Empathize with the users of your code.  When you find yourself reading manuals and documentation, don't you look for examples?  Don't you prefer to grab them and tweak them to suit your current situation?  So do the readers of your documentation.  Oblige them. (See <example />)
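
In C# XML comments, that might look something like this sketch, borrowing the MovePiece method from the Chess TDD examples featured elsewhere on this blog (the BoardCoordinate constructor arguments are an assumption of mine, purely for illustration):

/// <summary>Moves the piece.</summary>
/// <example>
/// This example shows how a client might call the method:
/// <code>
/// board.MovePiece(new BoardCoordinate(1, 1), new BoardCoordinate(1, 2));
/// </code>
/// </example>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)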

Conditions

Next up, I'll talk about the general consideration of "conditions."  By this, I mean three basic types of conditions: preconditions, postconditions, and invariants.

Let me define these in broad terms so that you understand what I mean.  Respectively, preconditions, postconditions, and invariants are things that must be true before your code executes, things that must be true after it executes, and things that must remain true throughout.

Documenting this information for your users saves them trial and error misery.  If you leave this out, they may have to discover for themselves that the method won't accept a null parameter or that it never returns a positive number.  Spare them that trial and error experimentation and make this clear.  By telling them explicitly, you help them determine up front whether this code suits their purpose or not. (See <remarks /> and <note />)
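
Continuing the MovePiece sketch, a precondition spelled out with those tags might read like this (the wording is mine; the precondition itself comes from the method's coordinate verification):

/// <summary>Moves the piece.</summary>
/// <remarks>
/// <note>Precondition: both coordinates must refer to squares that actually
/// exist on the board.  The method throws rather than accept anything else.</note>
/// </remarks>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)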

Related Elements

Moving out from core principles a bit, let's talk about some important meta-information.  People don't always peruse your documentation in "lookup" mode, wanting help about a code element whose name they already know.  Instead, sometimes they will 'surf' the documentation, brainstorming the best way to tackle a problem.

For instance, imagine that you want to design some behavior around a collection type.  Familiar with List, you look that up, but then maybe you poke around to see what inherits from the same base or implements the same interface.  By doing this, you hope to find the perfect collection type to suit your needs.

Make this sort of thing easy on readers of your documentation by offering a concept of "related" elements.  Listing OOP classes in the same hierarchy represents just one example of what you might do.  You can also list all elements with a similar behavior or a similar name.  You will have to determine for yourself what related elements make sense based on context.  Just make sure to include them, though. (See <seealso /> )

Pitfalls and Gotchas

Last, I'll mention an oft-overlooked property of documentation.  Most commonly, you might see this when looking at the documentation for some API call.  Often, it takes the form of "exceptions thrown" or "possible error codes."

But I'd like to generalize further here to "pitfalls and gotchas."  Listing out error codes and exceptions is great because it lets users know what to expect when things go off the rails.  But these aren't the only ways that things can go wrong, nor are they the only things of which users should be aware.

Take care to list anything out here that might violate the principle of least surprise or that could trip people up.  This might include things like, "common ways users misuse this method" or "if you get output X, check that you set Y correctly."  You can usually populate this section pretty easily whenever a user struggles with the documentation as-is.

Wherever you get the pitfalls, just be sure to include them.  Believe it or not, this kind of detail can make the difference between adequate and outstanding documentation.  Few things impress users as much as you anticipating their questions and needs. (See <exception />, <returns /> and <remarks />)
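
Rounding out the MovePiece sketch, the exception tag lets you call out a failure mode explicitly (the exact exception type here is an assumption on my part, again purely for illustration):

/// <summary>Moves the piece.</summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
/// <exception cref="System.ArgumentException">
/// Thrown when either coordinate falls outside the bounds of the board.
/// </exception>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)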

Documentation Won't Fix Bad Code

In closing, I would like to offer a thought that returns to the code itself.  Writing good documentation is critically important for anyone whose code will be consumed by others -- especially those selling their code.  But it all goes for naught should you write bad or buggy code, or should your API present a mess to your users.

Thus I encourage you to apply the same scrutiny to the usability of your API that I have just encouraged you to do for your documentation.  Look to ensure that you offer crisp, clear abstractions.  Name code elements appropriately.  Avoid surprises to your users.

Over the last decade or so, organizations like Apple have moved us away from hefty user manuals in favor of "discoverable" interfaces.  Apply the same principle to your code.  I tell you this not to excuse you from documentation, but to help you make your documentation count.  When your clean API serves as part of your documentation, you will write less of it, and what you do write will have higher value to readers.

Learn more about how GhostDoc can help simplify your XML Comments and produce and maintain quality help documentation.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 25 October 2016 10:53:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   