 Monday, 30 January 2017

For years, I can remember fighting the good fight for unit testing.  When I started that fight, I understood a simple premise.  We, as programmers, automate things.  So, why not automate testing?

Of all things, a grad school course in software engineering introduced me to the concept back in 2005.  It hooked me immediately, and I began applying the lessons to my work at the time.  A few years and a new job later, I came to a group that had not yet discovered the wonders of automated testing.  No worries, I figured, I can introduce the concept!

Except, it turns out that people stuck in their ways kind of like those ways.  Imagine my surprise to discover that people turned up their noses at the practice.  Over the course of time, I learned to plead my case, both in technical and business terms.  But it often felt like wading upstream against a fast-moving current.

Years later, I have fought that fight over and over again.  In fact, I've produced training materials, courses, videos, blog posts, and books on the subject.  I've brought people around to see the benefits and then subsequently realize those benefits following adoption.  This has brought me satisfaction.

But I don't do this in a vacuum.  The industry as a whole has followed the same trajectory, using the same logic.  I count myself just another advocate among a chorus of voices.  And so our profession has generally come to accept unit testing as a vital tool.

Widespread Acceptance of Automated Regression Tests

In fact, I might go so far as to call acceptance and adoption quite widespread.  The numbers only increase if you include shops that totally mean to and will definitely get around to it, like, sometime in the next six months or something.  In other words, if you count both shops that have adopted the practice and shops that feel as though they should, acceptance figures certainly span a plurality.

Major enterprises bring me in to help them teach their developers to do it.  Still other companies consult with me, asking questions about it.  Just about everyone wants to understand how to realize the unit testing value proposition of higher quality, more stability, and fewer bugs.

This takes a simple form.  We talk about unit testing and other forms of testing, and sometimes this may blur the lines.  But let's get specific here.  A holistic testing strategy includes tests at a variety of granularities.  These comprise what some call "the test pyramid."  Unit tests address individual components (e.g. classes), while service tests drive at the way the components of your application work together.  GUI tests, the least granular of all, exercise the whole thing.

Taken together, these comprise your regression test suite.  It stands against the category of bugs known as "regressions," or defects where something that used to work stops working.  For a parallel example in the "real world" think of the warning lights on your car's dashboard.  "Low battery" light comes on because the battery, which used to work, has stopped working.
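
To make that concrete, here is a minimal sketch of a unit-level regression test, assuming an xUnit-style framework and a hypothetical Account class.  Once it passes, it stands guard: if a later change breaks withdrawal behavior, the suite says so immediately.

[Fact]
public void Withdraw_Reduces_Balance_By_Amount()
{
    var account = new Account(startingBalance: 100m);

    account.Withdraw(40m);

    // If this ever fails, something that used to work has regressed.
    Assert.Equal(60m, account.Balance);
}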

Benefits of Automated Regression Test Suites

Why do this?  What benefits do automated regression test suites provide?  Well, let's take a look at some.

  • Repeatability and accuracy.  A human running tests over and over again may produce slight variances in the tests.  A machine, not so much.
  • Speed.  As with anything, automation produces a significant speedup over manual execution.
  • Fast feedback.  The automated test suite can tell you much more quickly if you have broken something.
  • Morale.  The fewer times a QA department comes back with "you broke this thing," the fewer opportunities for contentiousness.

I should also mention, as a brief aside, that I don't consider automated test suites to be acceptable substitutes for manual testing.  Rather, I believe the two efforts should work in complementary fashion.  If the automated test suite executes the humdrum tests in the codebase, it frees QA folks up to perform intelligent, exploratory testing.  As Uncle Bob once famously said, "it's wrong to turn humans into machines.  If you can write a script for a test procedure, then you can write a program to execute that procedure."

Automating Code Review

None of this probably comes as much of a shock to you.  If you go out and read tech blogs, you've no doubt encountered the widespread opinion that people should automate regression test suites.  In fact, you probably share that opinion.  So don't you wonder why we don't more frequently apply that logic to other concerns?

Take code review, for instance.  Most organizations do this in entirely manual fashion outside of, perhaps, a so-called "linting" tool.  They mandate automated test coverage and then content themselves with siccing their developers on one another in meetings to gripe over tabs, spaces, and camel casing.

Why not approach code review the same way?  Why not automate the aspects of it that lend themselves to automation, while saving human intervention for more conceptual matters?

Benefits of Automated Code Reviews

In a study by Steve McConnell, referenced in this blog post, "formal code inspections" produced better results for preemptively finding bugs than even automated regression tests.  So it stands to reason that we should invest in code review in the same ways that we invest in regression testing.  And I don't mean simply time spent, but in driving forward with automation and efficiency.

Consider the benefits I listed above for automated tests, and look how they apply to automated code review.

  • Repeatability and accuracy.  Humans will miss instances of substandard code if they feel tired -- machines won't.
  • Speed.  Do you want your code review to take seconds, or hours and days?
  • Fast feedback.  Because of the increased speed of the review, the reviewee gets the results immediately after writing the code, for better learning.
  • Morale.  The exact same reasoning applies here.  Having a machine point out your mistakes can save contentiousness.

I think that we'll see a similar trajectory to automating code review that we did with automating test suites.  And, what's more, I think that automated code review will gain steam a lot more quickly and with less resistance.  After all, automating QA activities blazed a trail.

I believe the biggest barrier to adoption, in this case, is the lack of awareness.  People may not believe automating code review is possible.  But I assure you, you can do it.  So keep an eye out for ways to automate this important practice, and get in ahead of the adoption curve.

Related resources

Tools at your disposal

SubMain offers CodeIt.Right, which integrates into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Monday, 30 January 2017 15:52:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, 23 January 2017

As a teenager, I remember having a passing interest in hacking.  Perhaps this came from watching the movie Sneakers.  Whatever the origin, the fancy passed quickly because I prefer building stuff to breaking other people's stuff.  Therefore, what I know about hacking pretty much stops at understanding terminology and high level concepts.

Consider the term "zero day exploit," for instance.  While I understand what this means, I have never once, in my life, sat on discovery of a software vulnerability for the purpose of using it somehow.  Usually when I discover a bug, I'm trying to deposit a check or something, and I care only about the inconvenience.  But I still understand the term.

"Zero day" refers to the amount of time the software vendor has to prepare for the vulnerability.  You see, the clever hacker gives no warning about the vulnerability before using it.  (This seems like common sense, though perhaps hackers with more derring do like to give them half a day to watch them scramble to release something before the hack takes effect.)  The time between announcement and reality is zero.

Increased Deployment Cadence

Let's co-opt the term "zero day" for a different purpose.  Imagine that we now use it to refer to software deployments.  By "zero day deployment," we thus mean "software deployed without any prior announcement."

But why would anyone do this?  Don't you miss out on some great marketing opportunities?  And, more importantly, can you even release software this quickly?  Understanding comes from realizing that software deployment is undergoing a radical shift.

To understand this, think about software release cadences 20 years ago.  In the 90s, Internet Explorer won the first browser war because it managed to beat Netscape's plodding cadence of three years between releases.  With major software products, release cadences of a year or two dominated the landscape back then.

But that timeline has shrunk steadily.  For a highly visible example, consider Visual Studio.  In 2002, 2005, 2008, Microsoft released versions corresponding to those years.  Then it started to shrink with 2010, 2012, and 2013.  Now, the years no longer mark releases, per se, with Microsoft actually releasing major updates on a quarterly basis.

Zero Day Deployments

As much as going from "every 3 years" to "every 3 months" impresses, websites and SaaS vendors have shrunk it to "every day."  Consider Facebook's deployment cadence.  They roll minor updates every business day and major ones every week.

With this cadence, we truly reach zero day deployment.  You never hear Facebook announcing major upcoming releases.  In fact, you never hear Facebook announcing releases, period.  The first the world sees of a given Facebook release is when the release actually happens.  Truly, this means zero day releases.

Oh, don't get me wrong.  Rumors of upcoming features and capabilities circulate, and Facebook certainly has a robust marketing department.  But Facebook and companies with similar deployment approaches have impressively made deployments a non-event.  And others are looking to follow suit, perhaps yours included.

Conceptual Impediments to Zero Day Deployments

If what I just said made you spit your drink at the screen, I understand.  Perhaps your deployment and release process takes so long that the thought of shrinking it to a day made you laugh.  Or perhaps it terrified you.  Either way, I can understand that it may seem quite a leap.

You may conceive of Facebook and other practitioners so alien to your own situation that you see no path from here to there.  But in reality, they almost certainly do the same things you do as part of your longer process -- just optimized and automated.

Impediments take a variety of forms.  You might have lengthy quality assurance and vetting processes, perhaps ones that require many iterations between the developers and quality assurance.  You might still be packaging software onto DVDs and shipping it to customers.  Perhaps you run all sorts of checks and analytics on it.  But all will fall under the general heading of requiring manual intervention or consuming a lot of time.

To get to zero day deployments, you need to automate and speed up considerably, and this can seem daunting.

What's Common Today

Some good news exists, though.  The same forces that let the Visual Studio team see such radical improvement push on software shops across the board.  We all have access to helpful technologies.

For instance, the overwhelming majority of organizations now have continuous integration via dedicated build machines.  Software developers commit code, and these things scoop it up, compile it, and package it up in a deployable package.  This activity now happens on the order of minutes whereas, in the past, I can remember shops where this was some poor guy's entire job, and he'd spend days on each build.

And, speaking of the CI server, a lot of them run automated test suites as part of what they do.  Most commonly, this means unit tests.  But they might also invoke acceptance tests and even more exotic things like smoke, GUI, and functionality tests.  You can thus accept commits, build the software, run a bunch of tests, and get it ready to deploy.

Of course, you can also automate the actual deployment as well.  It stands to reason that, if your build machine can ball it up into a deliverable, it can deliver that deliverable.  This might be harder with physical media involved, but as more software deliveries happen over networks, more of them get automated.

What We Need Next

With all of that in place, why don't we have more zero day deployments?  What's missing?

Again, discounting the problem of physical media, I'd say quality checks present the biggest issue.  We can compile, run automated tests, and deploy automatically.  But does this guarantee acceptable production behavior?

What about the important element of code reviews?  How do you assure that, even as automated tests pass, the application isn't piling up mountains of technical debt and impeding future deployments?  To get to zero day deployments, we must address these issues.

Don't get me wrong.  Other things matter here as well.  Zero day deployments require robust production checks and sophisticated "oops, that didn't work, rollback!" capabilities.  But I think that nothing will matter more than automated quality checks.

Each time you commit code, you need an intelligent analysis of that code that should fail the build as surely as failing tests if issues crop up.  In a zero day deployment context, you cannot afford best practice violations.  You cannot afford slipping quality, mounting technical debt, and you most certainly cannot afford code rot.  Today's rot in a zero day deployment scenario means tomorrow's inability to deploy that way.

Learn more about how CodeIt.Right can help you automate code reviews, improve your code quality, and reduce technical debt.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Monday, 23 January 2017 08:48:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Thursday, 12 January 2017

A little while back, I started a post series explaining some of the CodeIt.Right rules.  I led into the post with a narrative, which I won't retell.  But I will reiterate the two rules that I follow when it comes to static analysis tooling.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

Because I follow these two rules, I find myself researching every fix suggested to me by my tooling.  And, since I've gone to the trouble of doing so, I'll save you that same trouble by explaining some of those rules.  Specifically, I'll examine three more CodeIt.Right rules today and explain the rationale behind them.

Mark assemblies CLSCompliant

If you develop in .NET, you've no doubt run across this particular warning at some point in your career.  Before we get into the details, let's stop and define the acronyms.  "CLS" stands for "Common Language Specification," so the warning informs you that you need to mark your assemblies "Common Language Specification Compliant" (or non-compliant, if applicable).

Okay, but what does that mean?  Well, you can easily forget that many programming languages target the .NET runtime besides your language of choice.  CLS compliance indicates that any language targeting the runtime can use your assembly.  You can write language-specific code, incompatible with other framework languages.  CLS compliance means you haven't.

Want an example?  Let's say that you write C# code and that you decide to get cute.  You have a class with a "DoStuff" method, and you want to add a slight variation on it.  Because the new method adds improved functionality, you decide to call it "DOSTUFF" in all caps to indicate its awesomeness.  No problem, says the C# compiler.

And yet, if you try to do the same thing in Visual Basic, a case-insensitive language, you will encounter a compiler error.  You have written C# code that VB code cannot use.  Thus you have written non-CLS compliant code.  The CodeIt.Right rule exists to inform you that you have not specified your assembly's compliance or non-compliance.
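
Here is that scenario sketched in code, with a hypothetical Widget class standing in.  The C# compiler happily accepts methods whose names differ only by case, but a case-insensitive consumer has no way to tell them apart.

public class Widget
{
    public void DoStuff() { }

    // Legal C#, but it differs from DoStuff only by casing,
    // so this member is not CLS compliant.
    public void DOSTUFF() { }
}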

To fix, go specify.  Ideally, go into the project's AssemblyInfo.cs file and add the following to call it a day.

using System;

[assembly: CLSCompliant(true)]

But you can also specify non-compliance for the assembly to avoid a warning.  Of course, you can do better by marking the assembly compliant on the whole and then hunting down and flagging non-compliant methods with the attribute.

Specify IFormatProvider

Next up, consider a warning to "specify IFormatProvider."  When you encounter this for the first time, it might leave you scratching your head.  After all, "IFormatProvider" seems a bit... technician-like.  A more newbie-friendly name for this warning might have been, "you have a localization problem."

For example, consider a situation in which some external source supplies a date.  Except, they supply the date as a string, and you have the task of converting it to a proper DateTime so that you can perform operations on it.  No problem, right?

var properDate = DateTime.Parse(inputString);

That should work, provided provincial concerns do not intervene.  For those of you in the US, "03/02/1995" corresponds to March 2nd, 1995.  Of course, should you live in Iraq, that date string would correspond to February 3rd, 1995.  Oops.

Consider a nightmare scenario wherein you write some code with this parsing mechanism.  Based in the US and with most of your customers in the US, this works for years.  Eventually, though, your sales group starts making inroads elsewhere.  Years after the fact, you wind up with a strange bug in code you haven't touched for years.  Yikes.

By specifying a format provider, you can avoid this scenario.
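
Here is what the fix might look like, sketched under the assumption that you know the input uses US-style dates.  You pass an explicit IFormatProvider, so the parse no longer depends on the regional settings of whatever machine happens to run the code.

using System.Globalization;

// With an explicit culture, "03/02/1995" always means March 2nd, 1995.
var properDate = DateTime.Parse(inputString, new CultureInfo("en-US"));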

Nested types should not be visible

Unlike the previous rule, this one's name suffices for description.  If you declare a type within another type (say a class within a class), you should not make the nested type visible outside of the outer type.  So, the following code triggers the warning.

public class Outer
{
    public class Nested
    {

    }
}

To understand the issue here, consider the object oriented principle of encapsulation.  In short, hiding implementation details from outsiders gives you more freedom to vary those details later, at your discretion.  This thinking drives the rote instinct for OOP programmers to declare private fields and expose them via public accessors/mutators/properties.

To some degree, the same reasoning applies here.  If you declare a class or struct inside of another one, then presumably only the containing type needs the nested one.  In that case, why make it public?  On the other hand, if another type does, in fact, need the nested one, why scope it within a parent type and not just the same namespace?

You may have some reason for doing this -- something specific to your code and your implementation.  But understand that this is weird, and will tend to create awkward, hard-to-discover code.  For this reason, your static analysis tool flags your code.
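
If you agree with the warning, the fix runs in one of two directions.  Here is a sketch of both alternatives (pick one, not both).

// Option 1: only Outer needs Nested, so hide it.
public class Outer
{
    private class Nested
    {

    }
}

// Option 2: other types need Nested, so promote it to a peer
// of Outer in the same namespace.
public class Nested
{

}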

Until Next Time

As I said last time, you can extract a ton of value from understanding code analysis rules.  This goes beyond just understanding your tooling and accepted best practice.  Specifically, it gets you in the habit of researching and understanding your code and applications at a deep, philosophical level.

In this post alone, we've discussed language interoperability, geographic maintenance concerns, and object oriented design.  You can, all too easily, dismiss analysis rules as perfectionism.  They aren't; they have very real, very important applications.

Stay tuned for more posts in this series, aimed at helping you understand your tooling.

Learn more how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Thursday, 12 January 2017 10:32:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 03 January 2017

Last month, I wrote a post introducing you to T4 templates.  Near the end, I included a mention of GhostDoc's use of T4 templates in automatically generating code comments.  Today, I'd like to expand on that.

To recap very briefly, recall that GhostDoc allows you to generate things like method header comments.  I recommend that, in most cases, you let it do its thing.  It does a good job.  But sometimes, you might have occasion to want to tweak the result.  And you can do that by making use of T4 templates.

Documenting Chess TDD

To demonstrate, let's revisit my trusty toy code base, Chess TDD.  Because I put this code together for instructional purposes and not to release as a product, it has no method header comments for IntelliSense's benefit.  This makes it the perfect candidate for a demonstration.

If I had released this as a library, I'd have started the documentation with the Board class.  Most of the client interaction would happen via Board, so let's document that.  It offers you a constructor and a bunch of semantics around placing and moving pieces.  Let's document the conceptually simple MovePiece method.

public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);

    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;

    ReconcileEnPassant(origin, destination, pieceToMove);
}

To add documentation to this method, I simply right click it and, from the GhostDoc context menu, select "Document This."  Alternatively, I can use the keyboard shortcut Ctrl-Shift-D.  Either option yields the following result.

/// <summary>
/// Moves the piece.
/// </summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);

    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;

    ReconcileEnPassant(origin, destination, pieceToMove);
}

Let's Make a Tiny Tweak

Alright, much better!  If I scrutinize the comment, I can imagine what an IntelliSense-using client will see.  My parameter naming makes this conceptually simple to understand, so the IntelliSense will tell the user that the first parameter represents the origin square and the second parameter the destination.

But let's say that as I look at this, I find myself wanting to pick at a nit.  I don't care for the summary taking up three lines -- I want to condense it to one.  How might I do that?

Well, let's crack open the T4 template for generating a method header.  Recall that you do this in Visual Studio by selecting Tools->GhostDoc->Options, and picking "Rules" from the options pane.

[Screenshot: the GhostDoc Rules options pane with Method Template highlighted]

If you double click on "Method Template", as highlighted above, you will see an "Edit Rule" Window.  The first few lines of code in that window look like this.

<#@ template language="C#" #>
<#  CodeElement codeElement = Context.CurrentCodeElement; #>
/// <summary>
///<# GenerateSummaryText(); #>
/// </summary>
<#    if(codeElement.HasTypeParameters) 
    {
        for(int i = 0; i < codeElement.TypeParameters.Length; i++) 
        { 
            TypeParameter typeParameter = codeElement.TypeParameters[i]; 
#>

Hmmm.  I cannot count myself an expert in T4 templates, per se, but I think I have an idea.  Let's put that call to GenerateSummaryText() inline between the summary tags.  Like this:

<#@ template language="C#" #>
<#  CodeElement codeElement = Context.CurrentCodeElement; #>
/// <summary><# GenerateSummaryText(); #></summary>

That should do it, right?  Let's regenerate the comment and see what it looks like.  This results in the following.

/// <summary>Moves the piece.
/// </summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);

    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;

    ReconcileEnPassant(origin, destination, pieceToMove);
}

Uh, oh.  It made a difference, but somehow we only got halfway there.  Why might that be?

Diving Deeper

To understand, we need to look at the template in a bit more detail.  The template itself has everything on one line, and yet we see a newline in there somehow.  Could GenerateSummaryText cause this, somehow?  Let's scroll down to look at it.  Since this method has a lot of code, here are the first few lines only.

private void GenerateSummaryText()
{
    if(Context.HasExistingTagText("summary"))
    {
        this.WriteLine(Context.GetExistingTagText("summary"));
    }
    else if(IsAsyncMethod())
    {
        this.WriteLine(Context.ExecMacro("$(MethodName.Words.ExceptLast)") + " as an asynchronous operation.");
    }
    else if(IsMainMethod())
    {
        this.WriteLine("Defines the entry point of the application.");        
    }
}

Aha!  Notice that we're calling WriteLine.  What if we did a find and replace to change all of those to just Write?  Let's try.  (For more serious operations like this, you will want to copy the text out of the rule editor and into your favorite text editor, which offers richer editing capabilities.)

Once you have replaced all instances of WriteLine with Write in the template, here is the new result.

/// <summary>Moves the piece.</summary>
/// <param name="origin">The origin.</param>
/// <param name="destination">The destination.</param>
public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
{
    VerifyCoordinatesOrThrow(origin, destination);

    var pieceToMove = GetPiece(origin);
    AddPiece(pieceToMove, destination);
    RemovePiece(origin);
    pieceToMove.HasMoved = true;

    ReconcileEnPassant(origin, destination, pieceToMove);
}

Success!

Validation

As you play with this, you might have noticed a "Validate" button in the rule editor.  Use this liberally!  This button will trigger a parsing of the template and provide you with feedback as to validity.  The last thing you want to do is work in here for many iterations and wind up with no idea what you broke and when.

When working with these templates, think of this as equivalent to compiling.  You wouldn't want to sit for 20 minutes writing code with no feedback as to whether it builds or not.  So don't do it with these templates.

The Power at Your Disposal

I'll wrap here for this particular lesson, but understand that we have barely scratched the surface of what you can do.  In this post, we just changed a bit of the formatting to suit a whim I had.  But you can really dive into ways of reasoning about and documenting the code if you so choose.

Stay tuned for future posts on more advanced tips and tricks with your comment templates.

Learn more about how GhostDoc can help simplify your XML comments and produce and maintain quality help documentation.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 03 January 2017 10:47:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, 26 December 2016

CodeIt.Right v3.0 is here – the new major version of our automated code review and code quality analysis product. Here are the v3.0 new feature highlights:

  • VS2017 RC integration
  • Official support for VS2015 Update 3 and ASP.NET 5/ASP.NET Core 1.0 solutions
  • Solution filtering by date, source control status and file patterns
  • Summary report view - provides a summary view of the analysis results and metrics, customizable to your needs
  • New Review Code commands – review opened files and review checked out files
  • Improved Profile Editor with advanced rule search and filtering
  • Improved look and feel for Violations Report and Editor violation markers
  • Setting to keep the OnDemand and Instant Review profiles in sync
  • New Jenkins integration plugin
  • Batch correction is now turned off by default
  • Almost every CodeIt.Right action can now be assigned a keyboard shortcut
  • New rules

For the complete and detailed list of the v3.0 changes see What's New in CodeIt.Right v3.0


Solution Filtering

The solution filtering feature allows you to narrow the code review scope using the following options:

  • Analyze files modified Today/This Week/Last 2 Weeks/This Month – so you can set the relative date once and not have to change the date every day
  • Analyze files modified since specific date
  • Analyze files opened in Visual Studio tabs
  • Analyze files checked out from the source control
  • Analyze only specific files – only include the files that match a list of file patterns like *Core*.cs or Modules\*. See this KB post for the file path patterns details and examples.

[Screenshot: the solution filtering options]

New Review Code commands

We have changed the Start Analysis menu to Review Code – it is still the same feature; the new name simply highlights the automated code review nature of the product. We have also added the following Review Code commands:

  • Analyze Open Files menu - analyze only the files opened in Visual Studio tabs
  • Analyze Checked Out Files menu - analyze only the files that are checked out from source control

[Screenshot: the Profile Editor rule filtering]

Improved Profile Editor

The Profile Editor now features

  • Advanced rule filtering by rule id, title, name, severity, scope, target, and programming language
  • Lets you quickly show only active, only inactive, or all rules in the profile
  • Shows totals for the profile rules - total, active, and filtered
  • Improved adding rules with multiple categories


Summary Report

The Summary Report tab provides an overview of the analyzed source code quality. It includes a high-level summary of the current analysis information, filters, violation summary, top N violations, solution info, and metrics. Additionally, it provides a detailed list of violations and excludes.

The report is self-contained – no external dependencies, everything it requires is included within the html file. This makes it very easy to email the report to someone or publish it on the team portal – see example.

[Screenshot: the Summary Report]

The Summary Report is based on an ASP.NET Razor markup within the Summary.cshtml template. This makes it very easy for you to customize it to your needs.

You will find the summary report API documentation in the help file – CodeIt.Right –> Help & Support –> Help –> Summary Report API.

[Screenshot: the Summary.cshtml template source]


How do I try it?

Download the v3.0 at http://submain.com/download/codeit.right/

Feedback is what keeps us going!

Let us know what you think of the new version here - http://submain.com/support/feedback/


Note to the CodeIt.Right v2 users: The v2.x license codes won't work with the v3.0. For users with an active Software Assurance subscription we have sent out the v3.x license codes. If you have not received or have misplaced your new license, you can retrieve it on the My Account page. Users with an expired Software Assurance subscription will need to purchase the new version - currently we are not offering an upgrade path other than the Software Assurance subscription. For information about the upgrade protection see our Software Assurance and Support - Renewal / Reinstatement Terms

posted on Monday, 26 December 2016 09:12:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 29 November 2016

I've heard tell of a social experiment conducted with monkeys.  It may or may not be apocryphal, but it illustrates an interesting point.  So, here goes.

Primates and Conformity

A group of monkeys inhabited a large enclosure, which included a platform in the middle, accessible by a ladder.  For the experiment, their keepers set a banana on the platform, but with a catch.  Anytime a monkey would climb to the platform, the action would trigger a mechanism that sprayed the entire cage with freezing cold water.

The smarter monkeys quickly figured out the correlation and actively sought to prevent their cohorts from triggering the spray.  Anytime a monkey attempted to climb the ladder, they would stop it and beat it up a bit by way of teaching a lesson.  But the experiment wasn't finished.

Once the behavior had been established, they began swapping out monkeys.  When a newcomer arrived on the scene, he would go for the banana, not knowing the social rules of the cage.  The monkeys would quickly teach him, though.  This continued until they had rotated out all original monkeys.  The monkeys in the cage would beat up the newcomers even though they had never experienced the actual negative consequences.

Now before you think to yourself, "stupid monkeys," ask yourself how much better you'd fare.  This video shows that humans have the same instincts as our primate cousins.

Static Analysis and Conformity

You might find yourself wondering why I told you this story.  What does it have to do with software tooling and static analysis?

Well, I find that teams tend to exhibit two common anti-patterns when it comes to static analysis.  Most prominently, they tune out warnings without due diligence.  After that, I most frequently see them blindly implement the suggestions.

I tend to follow two rules when it comes to my interaction with static analysis tooling.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

You syllogism buffs out there have, no doubt, condensed this to a single rule.  Anytime you encounter a suggested fix you don't understand, go learn about it.

Once you understand it, you can implement the fix or ignore the suggestion with eyes wide open.  In software design/architecture, we deal with few clear cut rules and endless trade-offs.  But you can't speak intelligently about the trade-offs without knowing the theory behind them.

Toward that end, I'd like to facilitate that learning for some CodeIt.Right rules today.  Hopefully this helps you leverage your tooling to its full benefit.

Abstract types should not have public constructors

First up, consider the idea of abstract types with public constructors.

public abstract class Shape
{
    protected ConsoleColor _color;

    public Shape(ConsoleColor color)
    {
        _color = color;
    }
}

public class Square : Shape
{
    public int SideLength { get; set; }
    public Square(ConsoleColor color) : base(color) { }

}

CodeIt.Right will ding you for making the Shape constructor public (or internal -- it wants protected).  But why?

Well, you'll quickly discover that CodeIt.Right has good company in the form of the .NET Framework guidelines and FxCop rules.  But that just shifts the discussion without solving the problem.  Why does everyone seem not to like this code?

First, understand that you cannot instantiate Shape, by design.  The "abstract" designation effectively communicates Shape's incompleteness.  It's more of a template than a finished class in that creating a Shape makes no sense without the added specificity of a derived type, like Square.

So classes outside of the inheritance hierarchy can interact with Shape only indirectly, via Square.  They create Squares, and those Squares decide how to go about interacting with Shape.  Don't believe me?  Try getting around this.  Try creating a Shape in code, or try deleting Square's constructor and calling new Square(color).  Neither will compile.

Thus, when you make Shape's constructor public or internal, you invite users of your inheritance hierarchy to do something impossible.  You engage in false advertising and you confuse them.  CodeIt.Right is helping you avoid this mistake.
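
The fix costs you one keyword.  Here is the same Shape, sketched with a protected constructor that only derived types such as Square can call.

public abstract class Shape
{
    protected ConsoleColor _color;

    protected Shape(ConsoleColor color)
    {
        _color = color;
    }
}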

Do not catch generic exception types

Next up, let's consider the wisdom, "do not catch generic exception types."  To see what that looks like, consider the following code.

public bool MergeUsers(int user1Id, int user2Id)
{
    try
    {
        var user1 = _userRepo.Get(user1Id);
        var user2 = _userRepo.Get(user2Id);
        user1.MergeWith(user2);
        _userRepo.Save(user1);
        _userRepo.Delete(user2);
        return true;
    }
    catch(Exception ex)
    {
        _logger.Log($"Exception {ex.Message} occurred.");
        return false;
    }
}

Here we have a method that merges two users together, given their IDs.  It accomplishes this by fetching them from some persistence ignorance scheme, invoking a merge operation, saving the merged one and deleting the vestigial one.  Oh, and it wraps the whole thing in a try block, and then logs and returns false should anything fail.

And, by anything, I mean absolutely anything.  Business rules make merge impossible?  Log and return false.  Server out of memory?  Log it and return false.  Server hit by lightning and user data inaccessible?  Log it and return false.

With this approach, you encounter two categories of problem.  First, you fail to reason about or distinguish among the different things that might go wrong.  And, secondly, you risk overstepping what you're equipped to handle here.  Do you really want to handle fatal system exceptions right smack in the heart of the MergeUsers business logic?

You may encounter circumstances where you want to handle everything, but probably not as frequently as you think.  Instead of defaulting to this catch all, go through the exercise of reasoning about what could go wrong here and what you want to handle.
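
As a sketch of where that exercise might lead, suppose the merge operation signals business-rule failures with an InvalidOperationException (an assumption for illustration).  You handle what you can reason about and let everything else propagate to a handler actually equipped for it.

public bool MergeUsers(int user1Id, int user2Id)
{
    try
    {
        var user1 = _userRepo.Get(user1Id);
        var user2 = _userRepo.Get(user2Id);
        user1.MergeWith(user2);
        _userRepo.Save(user1);
        _userRepo.Delete(user2);
        return true;
    }
    catch(InvalidOperationException ex)
    {
        // An anticipated failure we can meaningfully handle here.
        _logger.Log($"Could not merge users {user1Id} and {user2Id}: {ex.Message}");
        return false;
    }
    // Anything else -- out of memory, inaccessible data -- bubbles up
    // to code equipped to deal with catastrophe.
}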

Avoid language specific type names in parameters

If you see this violation, you probably have code that resembles the following.  (Though, hopefully, you wouldn't write this actual method)

public int Add(int xInt, int yInt)
{
    return xInt + yInt;
}

CodeIt.Right does not like the name "int" in the parameters and this reflects a .NET Framework guideline.

Here, we find something a single-language developer may not stop to consider.  Specifically, not all languages that target the .NET framework use the same type naming conventions.  You say "int" and a VB developer says "Integer."  So if a VB developer invokes your method from a library, she may find this confusing.

That said, I would like to take this one step further and advise that you avoid baking types into your parameter/variable names in general.  Want to know why?  Let's consider a likely outcome of some project manager coming along and saying, "we want to expand the add method to be able to handle really big numbers."  Oh, well, simple enough!

public long Add(long xInt, long yInt)
{
    return xInt + yInt;
}

You just needed to change the datatypes to long, and voilà!  Everything went perfectly until someone asked you at code review why you have a long called "xInt."  Oops.  You totally didn't even think about the variable names.  You'll be more careful next time.  Well, I'd advise avoiding "next time" completely by getting out of this naming habit.  The IDE can tell you the type of a variable -- don't encode it into the name redundantly.
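
For completeness, here is the same method with role-based names.  Change the types all you like; the names stay honest.

public long Add(long x, long y)
{
    return x + y;
}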

Until Next Time

As I said in the introductory part of the post, I believe huge value exists in understanding code analysis rules.  You make better decisions, have better conversations, and get more mileage out of the tooling.  In general, this understanding makes you a better developer.  So I plan to continue with these explanatory posts from time to time.  Stay tuned!

Learn more how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 29 November 2016 09:55:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 22 November 2016

Today, I'd like to tackle a subject that inspires ambivalence in me.  Specifically, I mean the subject of automated text generation (including a common, specific flavor: code generation).

If you haven't encountered this before, consider a common example.  When you file->new->(console) project, Visual Studio generates a Program.cs file.  This file contains standard using directives, a Program class, and a public static void method called "Main."  Conceptually, you just triggered text (and code) generation.

Many schemes exist for doing this.  Really, you just need a templating scheme and some kind of processing engine to make it happen.  Think of ASP.NET MVC, for instance.  You write markup sprinkled with interpreted variables (i.e. Razor), and the framework processes that and spits out pure HTML to return as the response.  PHP and other server side scripting constructs operate this way, and so do code/text generators.

However, I'd like to narrow the focus to a specific case: T4 templates.  You can use this powerful construct to generate all manner of text.  But use discretion, because you can also use this powerful construct to make a huge mess.  I wrote a post about the potential perils some years back, but suffice it to say that you should take care not to automate and speed up copy and paste programming.  Make sure your case for use makes sense.

The Very Basics

With the obligatory disclaimer out of the way, let's get down to brass tacks.  I'll offer a lightning fast getting started primer.

Open some kind of playpen project in Visual Studio, and add a new item.  You can find the item in question under the "General" heading as "Text Template."

[Screenshot: adding a new Text Template item in Visual Studio]

Give it a name.  For instance, I called mine "sample" while writing this post.  Once you do that, you will see it show up in the root directory of your project as Sample.tt.  Here is the text that it contains.

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>

Save this file.  When you do so, Visual Studio will prompt you with a message about potentially harming your computer, so something must be happening behind the scenes, right?  Indeed, something has happened.  You have generated the output of the T4 generation process.  And you can see it by expanding the caret next to your Sample.tt file as shown here.

[Screenshot: Sample.txt nested under Sample.tt in Solution Explorer]

If you open the Sample.txt file, however, you will find it empty.  That's because we haven't done anything interesting yet.  Add a new line with the text "hello world" to the bottom of the Sample.tt file and then save.  (And feel free to get rid of that message about harming your computer by opting out, if you want).  You will now see a new Sample.txt file containing the words "hello world."

Beyond the Trivial

While you might find it satisfying to get going, what we've done so far could be accomplished with file copy.  Let's take advantage of T4 templating in earnest.  First up, observe what happens when you change the output extension.  Make it something like .blah and observe that saving results in Sample.blah.  As you can see, there's more going on than simple text duplication.  But let's do something more interesting.

Update your Sample.tt file to contain the following text and then click save.

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>
<#
for(int i = 0; i < 10; i++)
    WriteLine($"Hello World {i}");
#>

When you open Sample.txt, you will see the following.

Hello World 0
Hello World 1
Hello World 2
Hello World 3
Hello World 4
Hello World 5
Hello World 6
Hello World 7
Hello World 8
Hello World 9

Pretty neat, huh?  You've used the <# #> tokens to surround first-class C# that you can use to generate text.  I imagine you can see the potential here.

Oh, and what happens when you type malformed C#?  Remove the semicolon and see for yourself.  Yes, Visual Studio offers you feedback about bad T4 template files.

Use Cases

I'll stop here with the T4 tutorial.  After all, I aimed only to provide an introduction.  And I think that part of any true introduction involves explaining where and how the subject might prove useful to readers.  So where do people reasonably use these things?

Perhaps the most common usage scenario pertains to ORMs and the so-called impedance mismatch problem.  People create code generation schemes that examine databases and spit out source code that matches with them.  This approach spares the significant performance hit of some kind of runtime scheme for figuring this out, but without forcing tedious typing on dev teams.  Entity Framework makes use of T4 templates.

I have seen other uses as well, however.  Perhaps your organization puts involved XML configuration files into any new projects and you want to generate these without copy and paste.  Or, perhaps you need to replace an expensive reflection/runtime scheme for performance reasons.  Maybe you have a good bit of layering boilerplate and object mapping to do.  Really, the sky is the limit here, but always bear in mind the caveat that I offered at the beginning of this post.  Take care not to let code/text generation be a crutch for cranking out anti-patterns more rapidly.
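
To make the ORM-style use case a bit more concrete, here is a toy sketch of a template that emits one class per table name.  The hardcoded list stands in for whatever schema interrogation a real generator would perform.

<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#
    // Hypothetical: a real generator would read these from a database.
    var tables = new[] { "Customer", "Order", "Invoice" };
    foreach(var table in tables)
    {
#>
public partial class <#= table #>
{
}
<#
    }
#>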

The GhostDoc Use Case

I will close by offering a tie-in with the GhostDoc offering as the final use case.  If you use GhostDoc to generate comments for methods and types in your codebase, you should know that you can customize the default generations using T4 templates.  (As an aside, I consider this a perfect use case for templating -- a software vendor offering a product to developers that assists them with writing code.)

If you open GhostDoc's options pane and navigate to "Rules" you will see the following screen.  Double clicking any of the templates will give you the option to edit them, customizing as you see fit.

[Screenshot: the GhostDoc Rules options pane]

You can thus do simple things, like adding some copyright boilerplate, for instance.  Or you could really dive into the weeds of the commenting engine to customize to your heart's content (be careful here, though).  You can exert a great deal of control.

T4 templates offer you power and can make your life easier when used judiciously.  They're definitely a tool worth having in your tool belt.  And, if you make use of GhostDoc, this is doubly true.

Learn more about how GhostDoc can help simplify your XML comments and produce and maintain quality help documentation.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 22 November 2016 09:23:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, 21 November 2016

Version 5.4 of GhostDoc is a maintenance update for v5.0 users:

  • VS2017 RC integration
  • New menu items - Getting Started Tutorial and Tutorials and Resources
  • (Pro) (Ent) Edit buttons in Options - Solution Ignore List and Options - Spelling Ignore List
  • (Pro) (Ent) Test button in Options - Solution Ignore List
  • (Ent) GhostDoc now shows an error message when the Conceptual Content path is invalid in the solution configuration file
  • Fixed PathTooLongException exception when generating preview/build help file for C++ projects
  • (Ent) Updated ClassLibrary1.zip, moved all conceptual content files inside the project in GhostDoc Enterprise\Samples\Conceptual Content\
  • Improved documenting ReadOnly auto-properties in VB
  • Resolved issue re-documenting a type at the top of source code file in VB
  • Resolved issue with generating preview of the tag for generics in VB

For the complete list of changes, please see What's New in GhostDoc v5

For overview of the v5.0 features, visit Overview of GhostDoc v5.0 Features

Download the new build at http://submain.com/download/ghostdoc/

posted on Monday, 21 November 2016 09:15:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Thursday, 17 November 2016

We have just made available the Release Candidate of CodeIt.Right v3.0. Here are the new feature highlights:

  • VS2017 RC integration
  • Solution filtering by date, source control status and file patterns
  • Summary report view (announced as the Dashboard in the Beta preview) - provides a summary view of the analysis results and metrics, customizable to your needs

These features were announced as part of our recent v3 Beta:

  • Official support for VS2015 Update 2 and ASP.NET 5/ASP.NET Core 1.0 solutions
  • New Review Code commands:
    • only opened files
    • only checked out files
    • only files modified after specific date
  • Improved Profile Editor with advanced rule search and filtering
  • Improved look and feel for Violations Report and Editor violation markers
  • New rules
  • Setting to keep the OnDemand and Instant Review profiles in sync
  • New Jenkins integration plugin
  • Batch correction is now turned off by default
  • Almost every CodeIt.Right action can now be assigned a keyboard shortcut
For the Beta changes and screenshots, please see Overview of CodeIt.Right v3.0 Beta Features

For the complete and detailed list of the v3.0 changes see What's New in CodeIt.Right v3.0

To give the v3.0 Release Candidate a try, download it here - http://submain.com/download/codeit.right/beta/


Solution Filtering

In addition to the solution filtering by modified since specific date, open, and checked out files available in the Beta, we are introducing a few more options:

  • Analyze files modified Today/This Week/Last 2 Weeks/This Month – so you can set the relative date once and not have to change the date every day
  • Analyze only specific files – only include the files that match a list of file patterns like *Core*.cs or Modules\*. See this KB post for the file path patterns details and examples.

[Screenshot: the solution filtering options]

Summary Report

The Summary Report tab provides an overview of the analyzed source code quality. It includes a high-level summary of the current analysis information, filters, violation summary, top N violations, solution info, and metrics. Additionally, it provides a detailed list of violations and excludes.

The report is self-contained – no external dependencies, everything it requires is included within the html file. This makes it very easy to email the report to someone or publish it on the team portal – see example.

[Screenshot: the Summary Report]

The Summary Report is based on an ASP.NET Razor markup within the Summary.cshtml template. This makes it very easy for you to customize it to your needs.

You will find the summary report API documentation in the help file – CodeIt.Right –> Help & Support –> Help –> Summary Report API.

[Screenshot: the Summary.cshtml template source]


Feedback

We would love to hear your feedback on the new features! Please email it to us at support@submain.com or post in the CodeIt.Right Forum.

posted on Thursday, 17 November 2016 08:55:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Wednesday, 16 November 2016

We are looking for your input and we're willing to bribe you for answering one very simple question: What are your biggest code documentation challenges right now?

The survey is super-quick, and we're offering a $20 discount code for your time (good with any new SubMain product license purchase), which you will automatically receive as our thank-you once you complete the survey.

Take the Survey

We'd also appreciate it if you'd help us out by tweeting about this using the link Share on Twitter or otherwise letting folks know we're interested to know their code documentation challenges.

Thanks for your help!

posted on Wednesday, 16 November 2016 09:23:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Saturday, 05 November 2016

During my younger days, I worked for a company that made a habit of strategic acquisition.  They didn't participate in Time Warner style mergers, but periodically they would purchase a smaller competitor or a related product.  And on more than one occasion, I inherited the lead role in assimilating software from one of these organizations.  Lucky me, right?

If I think in terms of how to describe this to someone, a plumbing analogy comes to mind.  Over the years, I have learned enough about plumbing to handle most tasks myself.  And this has exposed me to the irony of discovering a small leak in a fitting plugged by grit or debris.  I find this ironic because two wrongs make a right.  A dirty, leaky fitting reaches sub-optimal equilibrium, and you spring a leak when you clean it.

Legacy codebases have this issue as well.  You inherit some acquired codebase, fix a tiny bug, and suddenly the defect floodgates open.  And then you realize the perilousness of your situation.

While you might not have come by it in the same way that I did, I imagine you can relate.  At some point or another, just about every developer has been thrust into supporting some creaky codebase.  How should you handle this?

Put Your Outrage in Check

First, take some deep breaths.  Seriously, I mean it.  As software developers, we seem to hate code written by others.  In fact, we seem to hate our own code if we wrote it more than a few months ago.  So when you see the legacy codebase for the first time, you will feel a natural bias toward disgust.

But don't indulge it.  Don't sit there cursing the people that wrote the code, and don't take screenshots to send to the Daily WTF.  Not only will it do you no good, but I'd go so far as to say that this is actively counterproductive.  Deciding that the code offers nothing worth salvaging makes you less inclined to try to understand it.

The people that wrote this code dealt with older languages, older tooling, older frameworks, and generally less knowledge than we have today.  And besides, you don't know what constraints they faced.  Perhaps bosses heaped delivery pressure on them like crazy.  Perhaps someone forced them to convert to writing in a new, unfamiliar language.  Whatever the case may be, you simply didn't walk in their shoes.  So take a breath, assume they did their best, and try to understand what you have under the hood.

Get a Visualization of the Architecture

Once you've settled in mentally for this responsibility, seek to understand quickly.  You won't achieve this by cracking open the code and looking through random source files.  But, beyond that, you also won't achieve it by looking at their architecture documents or folder structures.  Reality gets out of sync with intention, and those things start to lie.  You need to see the big picture, but in a way that lines up with reality.

Look for tools that map dependencies and can generate a visual of the codebase.  Plenty of these tools exist for you and can automate visual depictions.  Find one and employ it.  This will tell you whether the architecture resembles the neat diagram given to you or not.  And, more importantly, it will get you to a broad understanding much more quickly.

Characterize

Once you have the picture you need of the codebase and the right frame of mind, you can start doing things to it.  And the first thing you should do is to start writing characterization tests.

If you have not heard of them before, characterization tests have the purpose of, well, characterizing the codebase.  You don't worry about correct or incorrect behaviors.  Instead, you accept at face value what the code does, and document those behaviors with tests.  You do this because you want to get a safety net in place that tells you when your changes affect inputs and outputs.

As this XKCD cartoon ably demonstrates, someone will come to depend on the application's production behavior, however problematic.  So with legacy code, you cannot simply decide to improve a behavior and assume your users will thank you.  You need to exercise caution.

But characterization tests do more than just provide a safety net.  As an exercise, they help you develop a deeper understanding of the codebase.  If the architectural visualization gives you a skeleton understanding, this starts to put meat on the bones.

Isolate Problems

With a reliable safety net in place, you can begin making strategic changes to the production code beyond simple break/fix.  I recommend that you start by finding and isolating problematic chunks of code.  In essence, this means identifying sources of technical debt and looking to improve, gradually.

This can mean pockets of global state or extreme complexity that make for risky change.  But it might also mean dependencies on outdated libraries, frameworks, or APIs.  In order to extricate yourself from such messes, you must start to isolate them from business logic and important plumbing code.  Once you have it isolated, fixes will come more easily.
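
As a sketch of what that isolation might look like in C# (all names hypothetical), you can hide the offending dependency behind an interface you own, so that exactly one class ever touches it:

    // The seam: business logic depends on this, never on the archaic library.
    public interface IReportExporter
    {
        void Export(string reportName, byte[] content);
    }

    // The only class that touches the outdated API; the mess stays quarantined here.
    public class LegacyComReportExporter : IReportExporter
    {
        public void Export(string reportName, byte[] content)
        {
            // ... calls into the old COM component live here, and only here ...
        }
    }

    public class MonthEndReportingService
    {
        private readonly IReportExporter _exporter;

        public MonthEndReportingService(IReportExporter exporter)
        {
            // Swap in a modern exporter later without touching this class.
            _exporter = exporter;
        }
    }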

Evolve Toward Modernity

Once you've isolated problematic areas and archaic dependencies, it certainly seems logical to subsequently eliminate them.  And, I suggest you do just that as a general rule.  Of course, sometimes isolating them gives you enough of a win since it helps you mitigate risk.  But I would consider this the exception and not the rule.  You want to remove problem areas.

I do not say this idly nor do I say it because I have some kind of early adopter drive for the latest and greatest.  Rather, being stuck with old tooling and infrastructure prevents you from taking advantage of modern efficiencies and gains.  When some old library prevents you from upgrading to a more modern language version, you wind up writing more, less efficient code.  Being stuck in the past will cost you money.

The Fate of the Codebase

As you get comfortable and take ownership of the legacy codebase, never stop contemplating its fate.  Clearly, in the beginning, someone decided that the application's value outweighed its liability factor, but that may not always continue to be true.  Keep your finger on the pulse of the codebase, while considering options like migration, retirement, evolution, and major rework.

And, finally, remember that taking over a legacy codebase need not be onerous.  As initially shocked as I found myself with the state of some of those acquisitions, some of them turned into rewarding projects for me.  You can derive a certain satisfaction from taking over a chaotic situation and gradually steering it toward sanity.  So if you find yourself thrown into this situation, smile, roll up your sleeves, own it, and make the best of it.

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Learn more about how CodeIt.Right can identify technical debt, document it, and gradually improve legacy code.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Saturday, 05 November 2016 10:43:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 25 October 2016

    If you spend enough years writing software, sooner or later, your chosen vocation will force you into reverse engineering.  Some weird API method with an inscrutable name will stymie you.  And you'll have to plug in random inputs and examine the outputs to figure out what it does.

    Clearly, this wastes your time.  Even if you enjoy the detective work, you can't argue that an employer or client would view this as efficient.  Library and API code should not require you to launch a mystery investigation to determine what it does.

    Instead, such code should come with appropriate documentation.  This documentation should move your focus from wondering what the code does to contemplating how best to leverage it.  It should make your life easier.

    But what constitutes appropriate documentation?  What particular characteristics does it have?  In this post, I'd like to lay out some elements of helpful code documentation.

    Elements of Style

    Before moving on to what the documentation should contain, I will speak first about its stylistic properties.  After all, poorly written documentation can tank understanding, even if it theoretically contains everything it should.  If you're going to write it, make it good.

    Now don't get me wrong -- I'm not suggesting you should invest enough time to make it a literary masterpiece.  Instead, focus on three primary characteristics of good writing: clarity, correctness, and precision.  You want to make sure that readers understand exactly what you're talking about.  And, obviously, you cannot get anything wrong.

    The importance of this goes beyond just the particular method in question.  It affects your entire credibility with your userbase.  If you confuse them with ambiguity or, worse, get something wrong, they will start to mistrust you.  The documentation becomes useless to them and your reputation suffers.

    Examples

    Once you've gotten your house in order with stylistic concerns in the documentation, you can decide on what to include.  First up, I cannot overstate the importance of including examples.

    Whether you find yourself documenting a class, a method, a web service call, or anything else, provide examples.  Show the users the code in action and let them apply their pattern matching and deduction skills.  In case you hadn't noticed, programmers tend to have these in spades.

    Empathize with the users of your code.  When you find yourself reading manuals and documentation, don't you look for examples?  Don't you prefer to grab them and tweak them to suit your current situation?  So do the readers of your documentation.  Oblige them. (See <example />)
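
    Here's a sketch of the <example> tag at work in C# XML docs (the ConnectionStringParser class is a hypothetical illustration):

        using System.Collections.Generic;
        using System.Linq;

        public static class ConnectionStringParser
        {
            /// <summary>
            /// Parses a connection string into its component key/value parts.
            /// </summary>
            /// <example>
            /// <code>
            /// var parts = ConnectionStringParser.Parse("Server=db01;Database=orders");
            /// Console.WriteLine(parts["Server"]);  // prints "db01"
            /// </code>
            /// </example>
            public static IDictionary<string, string> Parse(string connectionString)
            {
                return connectionString
                    .Split(';')
                    .Select(pair => pair.Split('='))
                    .ToDictionary(kv => kv[0], kv => kv[1]);
            }
        }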

    Conditions

    Next up, I'll talk about the general consideration of "conditions."  By this, I mean three basic types of conditions: preconditions, postconditions, and invariants.

    Let me define these in broad terms so that you understand what I mean.  Respectively, preconditions, postconditions, and invariants are things that must be true before your code executes, things that must be true after it executes, and things that must remain true throughout.

    Documenting this information for your users saves them trial and error misery.  If you leave this out, they may have to discover for themselves that the method won't accept a null parameter or that it never returns a positive number.  Spare them that trial and error experimentation and make this clear.  By telling them explicitly, you help them determine up front whether this code suits their purpose or not. (See <remarks /> and <note />)
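
    A sketch of how that might look in C# XML docs, using a hypothetical banking example:

        public class Account
        {
            public decimal Balance { get; set; }
        }

        public class AccountService
        {
            /// <summary>
            /// Transfers funds between two accounts.
            /// </summary>
            /// <remarks>
            /// Preconditions: <paramref name="amount"/> is positive and no greater than the source balance.
            /// Postcondition: the sum of the two balances is unchanged.
            /// Invariant: neither balance ever goes negative.
            /// </remarks>
            public void Transfer(Account from, Account to, decimal amount)
            {
                from.Balance -= amount;
                to.Balance += amount;
            }
        }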

    Related Elements

    Moving out from core principles a bit, let's talk about some important meta-information.  People don't always peruse your documentation in "lookup" mode, wanting help about a code element whose name they already know.  Instead, sometimes they will 'surf' the documentation, brainstorming the best way to tackle a problem.

    For instance, imagine that you want to design some behavior around a collection type.  Familiar with List, you look that up, but then maybe you poke around to see what inherits from the same base or implements the same interface.  By doing this, you hope to find the perfect collection type to suit your needs.

    Make this sort of thing easy on readers of your documentation by offering a concept of "related" elements.  Listing OOP classes in the same hierarchy represents just one example of what you might do.  You can also list all elements with a similar behavior or a similar name.  You will have to determine for yourself what related elements make sense based on context.  Just make sure to include them, though. (See <seealso /> )
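
    For instance, a hypothetical collection type might point readers to its standard-library cousins:

        using System.Collections.Generic;

        /// <summary>
        /// A list that silently ignores duplicate entries.
        /// </summary>
        /// <seealso cref="List{T}"/>
        /// <seealso cref="HashSet{T}"/>
        public class UniqueList<T>
        {
            private readonly List<T> _items = new List<T>();

            public void Add(T item)
            {
                if (!_items.Contains(item))
                    _items.Add(item);
            }
        }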

    Pitfalls and Gotchas

    Last, I'll mention an oft-overlooked property of documentation.  Most commonly, you might see this when looking at the documentation for some API call.  Often, it takes the form of "exceptions thrown" or "possible error codes."

    But I'd like to generalize further here to "pitfalls and gotchas."  Listing out error codes and exceptions is great because it lets users know what to expect when things go off the rails.  But these aren't the only ways that things can go wrong, nor are they the only things of which users should be aware.

    Take care to list anything out here that might violate the principle of least surprise or that could trip people up.  This might include things like, "common ways users misuse this method" or "if you get output X, check that you set Y correctly."  You can usually populate this section pretty easily whenever a user struggles with the documentation as-is.

    Wherever you get the pitfalls, just be sure to include them.  Believe it or not, this kind of detail can make the difference between adequate and outstanding documentation.  Few things impress users as much as you anticipating their questions and needs. (See <exception />, <returns /> and <remarks />)
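
    A sketch of those tags at work in C# (the SettingsLoader class is hypothetical; the tags are the standard ones):

        using System.IO;

        public class SettingsLoader
        {
            /// <summary>
            /// Reads the settings file at the given path.
            /// </summary>
            /// <returns>The raw settings text; never null.</returns>
            /// <exception cref="FileNotFoundException">
            /// Thrown when no file exists at <paramref name="path"/>.
            /// </exception>
            /// <remarks>
            /// Gotcha: relative paths resolve against the current working directory,
            /// not the application directory -- a common source of confusion.
            /// </remarks>
            public string Load(string path)
            {
                return File.ReadAllText(path);
            }
        }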

    Documentation Won't Fix Bad Code

    In closing, I would like to offer a thought that returns to the code itself.  Writing good documentation is critically important for anyone whose code will be consumed by others -- especially those selling their code.  But it all goes for naught should you write bad or buggy code, or should your API present a mess to your users.

    Thus I encourage you to apply the same scrutiny to the usability of your API that I have just encouraged you to do for your documentation.  Look to ensure that you offer crisp, clear abstractions.  Name code elements appropriately.  Avoid surprises to your users.

    Over the last decade or so, organizations like Apple have moved us away from hefty user manuals in favor of "discoverable" interfaces.  Apply the same principle to your code.  I tell you this not to excuse you from documentation, but to help you make your documentation count.  When your clean API serves as part of your documentation, you will write less of it, and what you do write will have higher value to readers.

    Learn more about how GhostDoc can help simplify your XML comments and produce and maintain quality help documentation.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Tuesday, 25 October 2016 10:53:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 19 October 2016

    The balance among types of feedback drives some weird interpersonal dynamics.  For instance, consider the rather trite (if effective) management technique of the "compliment sandwich."  Managers precede and follow a negative piece of feedback with compliments.  In that fashion, the compliments form the "bun."

    Different people and different groups have their preferences for how to handle this.  While some might bend over backward for diplomacy, others prefer environments where people hurl snipes at one another and simply consider it "passionate debate."  I have no interest in arguing for any particular approach -- only in pointing out the variety.  As it turns out, we humans find this subject thorny.

    To some extent, this complicated situation extends beyond human boundaries and into automated systems.  While we might not take quite the same umbrage as we would with humans, we still get frustrated.  If you doubt this, I challenge you to tell me that you have never yelled at a compiler because you were sure your code had no errors.  I thought so.

    So from this perspective, I can understand the frustration with static analysis feedback.  Often, when you decide to enable a new static analysis engine or linting tool on a codebase, the feedback overwhelms.  28,326 issues in the code can demoralize anyone.  And so the temptation emerges to recoil from this feedback and turn off the tool.

    But should you do this?  I would argue that usually, you should not.  But situations do exist when disabling a static analyzer makes sense.  Today, I'll walk through some examples of times you might suppress such a warning.

    False Positives

    For the first example, I'll present something of a no-brainer.  However, I will also present a caveat to balance things.

    If your static analysis tool presents you with a false positive, then you should suppress that instance of the false positive.  (No sense throwing the baby out with the bathwater and suppressing the entire rule).  Assuming that you have a true false positive, the analysis warning simply constitutes noise and not signal.  Get rid of it.

    That being said, take care with labeling warnings as false positives.  False positive means that the tool has indicated a problem and a potential error and gotten it wrong.  False positive does not mean that you disagree with the warning or don't care.  The tool's wrongness is a good reason to suppress -- you not liking its prognosis falls short of that.
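
    In C#, for example, you can suppress a single confirmed false positive at its use site, with a written justification, rather than disabling the rule wholesale.  A sketch (CA2000 is a real FxCop rule, but substitute whatever your tool actually reported):

        using System.Diagnostics.CodeAnalysis;
        using System.IO;

        public static class LogFiles
        {
            // Suppresses this one occurrence only -- the rule stays on everywhere else --
            // and the justification documents why the tool got it wrong here.
            [SuppressMessage("Microsoft.Reliability", "CA2000:DisposeObjectsBeforeLosingScope",
                Justification = "The StreamReader takes ownership of the stream and disposes it.")]
            public static StreamReader OpenLog(string path)
            {
                var stream = new FileStream(path, FileMode.Open, FileAccess.Read);
                return new StreamReader(stream);
            }
        }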

    Non-Applicable Code

    For the second kind of instance, I'll use the term "non-applicable code."  This describes code for which you have no interest in static analysis warnings.  While this may sound contradictory to the last point, it differs subtly.

    You do not control all code in your codebase, and not all code demands the same level of scrutiny about the same concepts.  For example, do you have code in your codebase driven by a framework?  Many frameworks force some sort of inheritance scheme on you or the implementation of an interface.  If the name of a method on a third party interface violates a naming convention, you need not be dinged by your tool for simply implementing it.

    In general, you'll find warnings that do not universally apply.  Test projects differ from your production code.  GUI projects differ from data access layer ones.  And NuGet packages or generated code remain entirely outside of your control.  Assuming the decision to use these things happened in the past, turning off the analysis warnings makes sense.
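
    The mechanics vary by tool, but in C#, for instance, code generators typically mark their output with the GeneratedCode attribute, and most analyzers can be told to skip anything so marked.  The proxy class below is a hypothetical illustration:

        using System.CodeDom.Compiler;

        // Generators emit this attribute themselves; analyzers that honor it
        // will skip the class entirely.
        [GeneratedCode("MyProxyGenerator", "1.0")]
        public class OrderServiceProxy
        {
            // You didn't write this and won't maintain it by hand, so a
            // naming-convention warning here would be pure noise.
            public string get_order_status(int orderId)
            {
                return "Shipped";
            }
        }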

    Cosmetic Code Counter to Your Team's Standard

    So far, I've talked about the tool making a mistake and the tool getting things right on the wrong code.  This third case presents a thematically similar consideration.  Instead of a mistake or misapplication, though, this involves a misfit.

    Many tools out there offer purely cosmetic concerns.  They'll flag field variables not prepended with underscores or methods with camel casing instead of Pascal casing.  Assuming those jibe with your team's standards, you have no issues.  But if they don't, you have two options: change the tool or change your standard.  Generally speaking, you probably want to err on the side of complying with broad standards.  But if your team is set on its standard, then turn off those warnings or configure the tool.

    When You're Buried in Warnings

    Speaking of warnings, I'll offer another point that relates to them, but with an entirely different theme.  When your team is buried in warnings, you need to take action.

    Before I talk about turning off warnings, however, consider fixing them en masse.  It may seem daunting, but I suspect that you might find yourself surprised at how quickly you can wrangle them down to a manageable number.

    However, if this proves too difficult or time-consuming, consider force ranking the warnings, and (temporarily) turning off all except the top, say, 200.  Make it part of your team's work to eliminate those, and then enable the next 200.  Keep at it until you eliminate the warnings.  And remember, in this case, you're disabling warnings only temporarily.  Don't forget about them.

    When You Have an Intelligent Disagreement

    Last up comes the most perilous reason for turning off static analysis warnings.  This one also happens to occur most frequently, in my experience.  People turn them off because they know better than the static analysis tool.

    Let's stop for a moment and contemplate this.  Teams of workaday developers out there tend to blithely conclude that they know their business.  In fact, they know their business better than people whose job it is to write static analysis tools that generate these warnings.  Really?  Do you like those odds?

    Below the surface, disagreement with the tool often masks resentment at being called "wrong" or "non-compliant."  Turning the warnings off thus becomes a matter of pride or mild laziness.  Don't go this route.

    If you want to ignore warnings because you believe them to be wrong, do research first.  Only allow yourself to turn off warnings when you have a reasoned, intelligent, research-supported argument as to why you should do so.

    When in Doubt, Leave 'em On

    In this post, I have gingerly walked through scenarios in which you may want to turn off static analysis warnings and guidance.  For me, this exercise produces some discomfort because I rarely find this advisable.  My default instinct is thus not to encourage such behavior.

    That said, I cannot deny that you will encounter instances where this makes sense.  But whatever you do, avoid letting this become common or, worse, your default.  If you have the slightest bit of doubt, leave them on.   Put your trust in the vendors of these tools -- they know their business.  And steering you in bad directions is bad for business.

    Learn more about how CodeIt.Right can automate your team standards, make it easy to ignore specific guidance violations, and keep track of them.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Wednesday, 19 October 2016 16:19:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 11 October 2016

    More years ago than I'd care to admit, I took a software engineering course as part of my graduate CS program.  At the time, I worked a full-time job during the day and did remote classes in the evening.  As a result, I disproportionately valued classes with applicability to my job.  And this class offered plenty of that.

    We scratched the surface on such diverse topics as agile methodologies, automated testing, cost of code ownership, and more.  But I found myself perhaps most interested in the dive we did into refactoring.  The idea of reworking the internal structure of code while preserving inputs and outputs is a surprisingly complex one.

    Historical Complexity of Refactoring

    At the risk of dating myself, I took this course in the fall of 2006.  While automated refactorings in your IDE now seem commonplace, back then, they were hard.  In fact, the professor of the course considered them to be sufficiently difficult as to steer a group of mine away from a project implementing some.  In the world of 2006, I suspect he had the right of it.  We steered clear.

    In 2016, implementing automated refactorings still presents a challenge.  But modern tool and IDE vendors can stand on the shoulders of giants, so to speak.  Back then?  Not so much.

    Refactorings present a unique challenge to tool vendors because of the inherent risk.  They can really screw up users' code.  If a mistake happens, best case scenario is that the resultant code fails to compile because then, at least, it fails fast.  Worse still is semantically and syntactically correct code that somehow behaves improperly.  In this situation, a refactoring -- a safe change to code -- becomes a modification to the behavior of production code instead.  Ouch.

    On top of the risk, the implementation of refactoring anywhere beyond the trivial involves heady concepts such as abstract syntax trees.  In other words, it's not for lightweights.  So to recap, refactoring is risky and difficult.  And this is the landscape faced by tool authors.

    I Don't Fix -- I Just Flag

    If you live in the US, you may have seen a commercial that features a funny quip.  If I'm not mistaken, it advertises for some sort of fraud prevention services.  (Pardon any slight inaccuracies, as I recount this as best I can, from memory.)

    In the ad, bank robbers hold a bank hostage in a rather cliché, dramatic scene.  Off to the side, a woman stands near a security guard, asking him why he didn't do anything to stop it.  "I'm not a robbery prevention service -- I'm a robbery monitoring service.  Oh, by the way, there's a robbery." (here is a copy of the commercial)

    It brings a chuckle, but it also brings an underlying point.  In many situations, monitoring alone can prove woefully ineffective, prompting frustration.  As a former manager and current consultant, I generally advise people that they should only point out problems when they have also prepared proposed solutions.  It can mean the difference between complaining and solving.

    So you can imagine and probably share my frustration at tools that just flag problems and leave it to you to investigate further and fix them.  We feel like the woman standing next to the "robbery monitor," wondering how useful the service is to us.

    Levels of Solution

    Going back to the subject of software development, we see this dynamic in a number of places.  The compiler, the IDE, productivity add-ins, static analysis tools, and linting utilities all offer us warnings to heed.

    Often, that's all we get.  The utility says, "hey, something is wrong here, but you're going to have to figure out what."  I tend to think of that as the basic level of service, or level 0, if you will.

    The next level, level 1, involves at least offering some form of next action.  It might be as simple as offering a help file, inline reading, or a link to more information.  Anything above "this is a problem."

    Level 2 ups the ante by offering a recommendation for what to do next.  "You have a dependency cycle.  You should fix this by looking at these three components and removing one mutual dependency."  It goes beyond giving you a next thing to do and gives you the next thing to do.

    Level 3 rounds out the field by actually performing the action for you (following a prompt, of course).  "You've accidentally hidden a method on the parent class.  Click here to rename or click here to make parent virtual."  That's just an example off the top, of course, but it illustrates the interaction paradigm.  "We've noticed a problem, and you can click here to fix it."

    Fixes in Your Tooling

    When evaluating your own tools, look to climb as high up this hierarchy as you can.  Favor tools that identify problems, but offer fixes whenever possible.

    There are a number of such tools out there, including CodeIt.Right.  Using tools like this is a pleasure because it removes the burden of research and implementation from you.  You can still do the research if you want -- but at your leisure, rather than while you're trying to accomplish something else.

    The other, important concern here is that you find trusted tooling to help you with this sort of thing.  After all, you don't want something messing with your source code if it might mess up your source code.  But, assuming you can trust it, this provides an invaluable boost to your effectiveness by automatically resolving your problems and by helping you learn.

    In the year 2016, we have far more tooling available, with a far better track record, than we did in 2006.  Leverage it whenever possible so that you can focus on solving the pressing problems of your day to day work.

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio to provide a flexible and intuitive "We've noticed a problem, and you can click here to fix it" solution.

    Learn more about how CodeIt.Right can automate your team standards and improve code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Tuesday, 11 October 2016 08:41:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Thursday, 06 October 2016

    Before I get down to the brass tacks of how to do some interesting stuff, I'm going to spin a tale of woe.  Well, I might have phrased that a little strongly.  Call it a tale of corporate drudgery.

    In any case, many years ago I worked briefly in a little department, at a little company that seemed to be a corporate drudgery factory.  Oh, the place and people weren't terrible.  But the work consisted of, well, drudgery.  We 'consulted' in the sense that we cranked out software for other companies, for pay.  Our software plumbed the lines of business between client CRMs and ERPs or whatever.  We would write the software, then finish the software, then hand the software over, source code and all.

    Naturally, commenting our code and compliance with the coding standard attained crucial importance.  Why?  Well, no practical reason.  It was just that clients would see this code.  So it needed to look professional.  Or something.  It didn't matter what the comments said.  It didn't matter if the standard made sense.  Compliance earned you a gold star and a move onto the next project.

    As I surveyed the scene surrounding me, I observed a mountain of vacuous comments and dirty, but uniform code.

    My Complex Relationship with Code Comments

    My brief stay with (and departure from) this organization coincided with my growing awareness of the Software Craftsmanship movement.  Even as they copied and pasted their way toward deadlines and wrote comments announcing that while(x < 6) would proceed while x was less than 6, I became interested in the idea of self-documenting code.

    Up to that point, I had diligently commented each method, file, and type I encountered.  In this regard, I looked out for fellow and future programmers.  But after one too many occasions of watching my own comments turn into lies when someone changed the code without changing the comments, I gave up.  I stopped commenting my code, focusing entirely on extractions, refactoring, and making my code as legible as possible.

    I achieved an equilibrium of sorts.  In this fashion, I did less work and stopped seeing my comments become nasty little fibs.  But a single, non-subtle flaw remained in this absolutist approach.  What about documentation of a public (or internal) API?

    Naturally, I tried to apply the craftsmanship-oriented reasoning unilaterally.  Just make the public API so discoverable as to render the issue moot.  But that never totally satisfied me because I still liked my handy help screens and IntelliSense info when consuming others' code.

    And so I came to view XML doc comments on public methods as an exception.  These, after all, did not represent "comments."  They came packaged with your deliverables as your product.  And I remain comfortable with that take today.

    Generating Help More Efficiently

    Now, my nuanced, evolved view doesn't automatically mean I'll resume laboriously hand-typing XML comments.  Early in my career, a sort of sad pride in this "work harder, not smarter" approach characterized my development.  But who has time for that anymore?

    Instead, with a little bit of investment in learning and tooling, you can do some legitimately cool stuff.  Let me take you through a nifty sequence of steps that you may come to love.

    GhostDoc Enterprise

    First up, take a look at the GhostDoc Enterprise offering.  Among other things, this product lets you quickly generate XML comments, customize the default generation template, spell check your code, generate help documentation, and more.  Poking through all that alone will probably take some time out of your day.  You should download and play with the product.

    Once you are done with that, though, consider how you might get more efficient at beefing up your API.  For the rest of this post, I will use as an example my Chess TDD project.  I use this as a toy codebase for all kinds of demos.

    I never commented this codebase, nor did I generate any kind of documentation for it.  Why?  I intended it solely as a teaching tool for test-driven development, and never packaged it for others' consumption.  Let's change that today.

    Adding Comments

    Armed with GhostDoc Enterprise, I will first generate some comments.  The Board class makes a likely candidate since it offers theoretical users the most value.

    First up, I need to add XML doc comments to the file.  I can do this by right clicking in the file, and selecting "Document Type" from the GhostDoc Enterprise context menu.  Here's what the result looks like.

    [Screenshot: XML doc comments generated for the Board class]

    The default template offers a pretty smart guess at intent, based on good variable naming.  For my fellow clean code enthusiasts out there, you can even check how self-documenting your code is by the quality of the comments GhostDoc creates.  But still, you probably want to take a human pass through, checking and tweaking where needed.

    Building Help Documentation

    All right.  With comments in place for the public facing API of my little project, we can move on to the actual documentation.  Again, easy enough.  Select "Tools -> GhostDoc Enterprise -> Build Help Documentation" from the main menu.  You'll see this screen.

    [Screenshot: the Build Help Documentation dialog]

    Notice that you have a great deal of control over the particulars.  Going into detail here is beyond the scope of my post, but you can certainly play around.  I'll take the defaults and build a CHM help file.  Once I click "OK," here's what I see (once I navigate to the Board class).

    [Screenshot: the generated CHM help file, opened to the Board class]

    Pretty slick, huh?  Seriously.  With just a few clicks, you get intelligently commented public methods and a professional-looking help file.  (You can also have this as web-style documentation if you want).  Obviously, I'd want to do some housekeeping here if I were selling this, but it does a pretty good job even with zero intervention from me.

    Do It From the Build

    Only one bit of automation remains at this point.  And that's the generation of this documentation from the build.  Fortunately, GhostDoc Enterprise makes that simple as well.

    Any build system worth its salt will, of course, let you hook command line invocations into your build.  GhostDoc Enterprise offers one up for just this occasion.  You can read a succinct guide on that right here.  With a single command, you can point it at your solution, a help configuration, and a project configuration, and generate the help file.  Putting it where you want is then easy enough.

    Hooking this into an automated build or CI setup really ties everything together, including the theme of this post.  Automating the generation of clean, helpful documentation of your clean code, building it, and packaging it up all without human intervention pretty much represents the pinnacle of delivering a professional product.

    Learn more about how GhostDoc can help simplify your XML comments and produce and maintain quality help documentation.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Thursday, 06 October 2016 06:54:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Thursday, 29 September 2016

    In professional contexts, I think that the word "standard" has two distinct flavors.  So when we talk about a "team standard" or a "coding standard," the waters muddy a bit.  In this post, I'm going to make the case for a team standard.  But before I do, I think it important to discuss these flavors that I mention.  And keep in mind that we're not talking dictionary definition as much as the feelings that the word evokes.

    First, consider standard as "common."  To understand what I mean, let's talk cars.  If you go to buy a car, you can have an automatic transmission or a standard transmission.  Standard represents a weird naming choice for this distinction since (1) automatic transmissions dominate (at least in the US) and (2) "manual" or "stick-shift" offer much better descriptions.  But it's called "standard" because of historical context.  Once upon a time, automatic was a new sort of upgrade, so the existing, default option became boringly known as "standard."

    In contrast, consider standard as "discerning."  Most commonly you hear this in the context of having standards.  If some leering, creepy person suggested you go out on a date to a fast food restaurant, you might rejoin with, "ugh, no, I have standards."

    Now, take these common contexts for the word to the software team room.  When someone proposes coding standards, the two flavors make themselves plain in the team members' reactions.  Some like the idea, and think, "it's important to have standards and take pride in our work."  Others hear, "check your creativity at the gate, because around here we write standard, default code."

    What I Mean by Standard

    Now that I've drawn the appropriate distinction, I feel it appropriate to make my case.  When I talk about the importance of a standard, I speak with the second flavor of the word in mind.  I speak about the team looking at its code with a discerning attitude.  Not just any code can make it in here -- we have standards.

    These can take somewhat fluid forms, and I don't mean to be prescriptive.  The sorts of standards that I like to see apply to design principles as much as possible and to cosmetic concerns only when they have to.

    For example, "all non-GUI code should be test driven" and "methods with more than 20 lines should require a conversation to justify them" represent the sort of standards I like my teams to have.  They say, "we believe in TDD" and "we view long methods as code smells," respectively.  In a way, they represent the coding ethos of the group.

    On the other side of the fence lie prescriptions like, "all class fields shall be prepended with underscores" and "all methods shall be camel case."  I consider such concerns cosmetic, since they are appearance and not design or runtime behavior.  Cosmetic concerns are not important... unless they are.  If the team struggles to read code and becomes confused because of inconsistency, then such concerns become important.  If the occasional quirk presents no serious readability issues, then prescriptive declarations about it stifle more than they help.

    Having standards for your team's work product does not mean mandating total homogeneity.

    Why Have a Standard at All?

    Since I'm alluding to the potentially stifling effects of a team standard, you might reasonably ask why we should have them at all.  I can assert that I'm interested in the team being discerning, but is it really just about defining defaults?  Fair enough.  I'll make my case.

    First, consider something that I've already mentioned: maintenance.  If the team can easily read code, it can more easily maintain that code.  Logically, then, if the team all writes fairly similar code, they will all have an easier time reading, and thus maintaining that code.  A standard serves to nudge teams in this direction.

    Another important benefit of the team standard revolves around the integrity of the work product.  Many teams' standards incorporate methodology for security, error handling, logging, etc.  Thus the established standard arms the team members with ways to ensure that the software behaves properly.

    And finally, well-done standards can help less experienced team members learn their craft.  When such people join the team, they tend to look to established folks for guidance.  Sadly, those people often have the most on their plate and the least time.  The standard can thus serve as teacher by proxy, letting everyone know the team's expectations for good code.

    Forget the Conformity (by Automating)

    So far, all of my rationale follows a fairly happy path.  Adopt a team standard, and reap the rewards: maintainability, better software, learning for newbies.  But equally important is avoiding the dark side of team standards.  Often this dark side takes the form of nitpicking, micromanagement and other petty bits of nastiness.

    Please, please, please remember that a standard should not elevate conformity as a virtue.  It should represent shared values and protection of work product quality.  Therefore, in situations where conformity (uniformity) is justified, you should automate it.  Don't make your collaborative time about telling people where to put spaces and brackets -- program your IDE to do that for you.

    Make Justification Part of the Standard

    Another critical way to remove the authoritarian vibe from the team standard is one that I rarely see.  And that mystifies me a bit because you can do it so easily.  Simply make sure you justify each item contained in the standard.

    "Methods with more than 20 line of code should prompt a conversation," might find a home in your standard.  But why not make it, "methods with more than 20 lines of code should prompt a conversation because studies have demonstrated that defect rate increases more than linearly with lines of code per method?"  Wow, talk about powerful.

    This little addition takes the authoritarian air out of the standard, and it also helps defuse squabbles.  And, best of all, people might just learn something.

    If you start doing this, you might also notice that boilerplate items in a lot of team standards become harder to justify.  "Prepend your class fields with m underscore" becomes "prepend your class fields with m underscore because... wait, why do we do that again?"

    Prune and Always Improve

    When you find yourself trailing off at because, you have a problem.  Something exists in your team standard that you can't justify.  If no one can justify it, then rip it out.  Seriously, get rid of it.  Having items that no one can justify starts to put you in conformity for the sake of conformity territory.  And that's when standard goes from "discerning" to "boring."

    Let this philosophy guide your standard in general.  Revisit it frequently, and audit it for valid justifications.  Sometimes justifications will age out of existence or seem lame in retrospect.  When this happens, do not hesitate to revisit, amend, or cull.  The best team standards are neither boring nor static.  The best team standards reflect the evolving, growing philosophy of the team.

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Learn more about how CodeIt.Right can automate your team standards and improve code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Thursday, 29 September 2016 07:41:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 20 September 2016

    If you write software, the term "feedback loop" might have made its way into your vocabulary.  It charts a slightly indirect route from its conception into the developer lexicon, though, so let's start with the term's origin.  In general systems, a feedback loop is one that uses its output as one of its inputs.

    Kind of vague, huh?  I'll clarify with an example.  I'm actually writing this post from a hotel room, so I can see the air conditioner from my seat.  Charlotte, North Carolina, my temporary home, boasts some pretty steamy weather this time of year, so I'm giving the machine a workout.  Its LED display reads 70 Fahrenheit, and it's cranking to make that happen.

    When the AC unit hits exactly 70 degrees, as measured by its thermostat, it will take a break.  But as soon as the thermostat starts inching toward 71, it will turn itself back on and start working again.  Such is the Sisyphean struggle of climate control.

    Important for us here, though, is the mechanics of this system.  The AC unit alters the temperature in the room (its output).  But it also uses the temperature in the room as input (if < 71, do nothing, else cool the room).  Climate control in buildings operates via feedback loop.
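
    If it helps to see those mechanics as code, here's a toy simulation of the loop (all numbers invented for illustration):

        using System;

        class ThermostatDemo
        {
            static void Main()
            {
                double roomTemperature = 74.0;

                for (int minute = 0; minute < 10; minute++)
                {
                    // Input: the system reads its own prior output (the room temperature).
                    bool compressorOn = roomTemperature >= 71.0;

                    // Output: cooling (or idling) changes the very value read next cycle.
                    roomTemperature += compressorOn ? -1.0 : 0.5;

                    Console.WriteLine($"t+{minute}min: {roomTemperature:F1}F, AC {(compressorOn ? "on" : "off")}");
                }
            }
        }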

    Appropriating the Term for Software Development

    It takes a bit of a cognitive leap to think of your own tradecraft in terms of feedback loops.  Most likely this happens because you become part of the system.  Most people find it harder to reason about things from within.

    In software development, you complete the loop.  You write code, the compiler builds it, the OS runs it, you observe the result, and decide what to do to the code next.  The output of that system becomes the input to drive the next round.

    If you have heard the term before, you've probably also heard the term "tightening the feedback loop."  Whether or not you've heard it, what people mean by this is reducing the cycle time of the aforementioned system.  People throwing that term around look to streamline the write->build->run->write again process.

    A History of Developer Feedback Loops

    At the risk of sounding like a grizzled old codger, let me digress for a moment to talk about feedback loop history.  Long before my time came the punched card era.  Without belaboring the point, I'll say that this feedback loop would astound you, the modern software developer.

    Programmers would sit at key punch "kiosks", used to physically perforate forms (one mistake, and you'd start over).  They would then take these forms and have operators turn them into cards, stacks of which they would hold onto.  Next, they'd wait in line to feed these cards into the machines, which acted as a runtime interpreter.   Often, they would have to wait up to 24 hours to see the output of what they had done.

    Can you imagine?  Write a bit of code, then wait for 24 hours to see if it worked.  With a feedback loop this loose, you can bet that checking and re-checking steps received hyper-optimization.

    When I went to college and started my programming career, these days had long passed.  But that doesn't mean my early days didn't involve a good bit of downtime.  I can recall modifying C files in projects I worked, and then waiting up to an hour for the code to build and run, depending what I had changed.  xkcd immortalized this issue nearly 10 years ago, in one of its most popular comics.

    Today, you don't see this as much, though certainly, you could find some legacy codebases or juggernauts that took a while to build.  Tooling, technique, modern hardware and architectural approaches all combine to minimize this problem via tighter feedback loops.

    The Worst Feedback Loop

    I have a hypothesis.  I believe that a specific amount of time exists for each person that represents the absolute, least-optimal amount of time for work feedback.  For me, it's about 40 seconds.

    If I make some changes to something and see immediate results, then great.  Beyond immediacy, my impatience kicks in.  I stare at the thing, I tap impatiently, I might even hit it a little, knowing no good will come.  But after about 40 seconds, I simply switch my attention elsewhere.

    Now, if I know the wait time will be longer than 40 seconds, I may develop some plan.  I might pipeline my work, or carve out some other tasks with which I can be productive while waiting.  If for instance, I can get feedback on something every 10 minutes, I'll kick it off, do some household chores, periodically checking on it.

    But, at 40 seconds, it resides in some kind of middle limbo, preventing any semblance of productivity.  I kick it off and check twitter.  40 seconds turns into 5 minutes when someone posts a link to some cool astronomy site.  I check back, forget what I did, and then remember.  I try again and wait 40 seconds.  This time, I look at a Buzzfeed article and waste 10 minutes as that turns into 4 Buzzfeed articles.  I then hate myself.

    The Importance of Tightening

    Why do I offer this story about my most sub-optimal feedback period?  To demonstrate the importance of diligence in tightening the loop.  Wasting a few seconds while waiting hinders you.  But waiting enough seconds to distract you with other things slaughters your productivity.

    With software development, you can get into a state of what I've heard described as "flow."  In a state of flow, the feedback loop creates harmony in what you're doing.  You make adjustments, get quick feedback, feel encouraged and productive, which promotes more concentration, more feedback, and more productivity.  You discover a virtuous circle.

    But just the slightest dropoff in the loop pops that bubble.  And, another dropoff from there (e.g. to 40 seconds for me) can render you borderline-useless.  So much of your professional performance rides on keeping the loop tight.

    Tighten Your Loop Further

    Modern tooling offers so many options for you.  Many IDEs will perform speculative compilation or interpretation as you code, making builds much faster.  GUI components can be rendered as you work, allowing you to see changes in real time as you alter the markup.  Unit tests slice your code into discrete, separately evaluated components, and continuous testing tools provide pass/fail feedback as you type.  Static code analysis tools offer you code review as you work, rather than at some code review days later.  I could go on.

    The general idea here is that you should constantly seek ways to tune your day to day work.  Keep your eyes out for tools that speed up your feedback loop.  Read blogs and go to user groups.  Watch your coworkers for tips and tricks.  Claw, scratch, and grapple your way to shaving time off of your feedback loop.

    We've come a long way from punch cards and sword fights while code compiles.  But, in 10 or 30 years, we'll look back in amazement at how archaic our current techniques seem.  Put yourself at the forefront of that curve, and you'll distinguish yourself as a developer.

    Learn more about how CodeIt.Right can tighten the feedback loop and improve your code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Tuesday, 20 September 2016 07:37:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Friday, 16 September 2016

    Version 5.3 of GhostDoc is a maintenance update for v5.0 users:

    • Added full support for string interpolation in C# and VB parsers
    • Added support for "arrow functions" in JavaScript parser
    • Fixed "File is not part of a solution" issue when loading projects
    • (Pro) (Ent) Added IsAbstract property to CurrentCodeElement in the T4 templates
    • Improved exception documentation - now the type name in a nameof() parameter is added as part of the generated documentation template
    • (Ent) Fixed issue when using <section> along with <code> elements in an .aml file

    For the complete list of changes, please see What's New in GhostDoc v5

    For overview of the v5.0 features, visit Overview of GhostDoc v5.0 Features

    Download the new build at http://submain.com/download/ghostdoc/

    posted on Friday, 16 September 2016 08:30:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 14 September 2016

    Think back to college (or high school, if applicable).  Do you remember that kid that would sit near the front of the class and gleefully point out that the professor had accidentally omitted an apostrophe when writing notes on the white board?  Didn't you just love that kid?  Yeah, me neither.

    Fate imbues a small percentage of the population with a neurotic need to correct any perceived mistakes made by anyone.  XKCD immortalized this phenomenon with one of its most famous cartoons, that declared, "someone is wrong on the internet."  For the rest of the population, however, this tendency seems pedantic and, dare I say, unpleasant.  Just let it go, man.  It doesn't matter that much.

    I mention all of this to add context to the remainder of the post.  I work as a consultant and understand the need for diplomacy, tact, and choosing one's battles.  So, I do not propose something like care with spelling lightly.  But I will propose it, nonetheless.

    Now I know what you're thinking.  How can caring about spelling in code be anything but pedantic?  We're not talking about something being put together to impress a wide audience, like a newspaper.  In fact, we're not even talking about prose.  And code contains all sorts of abbreviations and encodings and whatnot.

    Nevertheless, it matters.  When English words occur in your code, spelling them right matters.  I'll use the rest of this post to make my case.

    The IntelliSense Conundrum

    If you use Visual Studio, no doubt you make heavy use of IntelliSense.  To expand, any IDE or text editor with autocomplete functionality qualifies for consideration here.  In either case, your tooling gives you a pretty substantial boost by suggesting methods/variables/classes/etc based on what you have typed.  It's like type-ahead for code.

    Now think of the effect a misspelling can have here, particularly near the beginning of a word.  Imagine implementing a method that would release resources and accidentally typing Colse instead of Close.  Now imagine consuming that method.  If you're used to exploring APIs and available methods with auto-complete, you might type, "Clo", pause, and see no matching methods.  You might then conclude, "hey, no call to Close needed!"

    In all likelihood, such an error would result in a few minutes of head-scratching and then the right call.  But even if that's the worst of it, that's still not great.  And it will happen each and every time someone uses your code.
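
    To picture it (a contrived C# example, of course):

        // Hypothetical resource class with a typo baked into its public API.
        public class ReportStream
        {
            public void Open(string path) { /* ... */ }

            // A consumer who types "Clo" and sees no IntelliSense match may
            // conclude that no cleanup call exists at all.
            public void Colse() { /* releases the file handle, despite the name */ }
        }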

    Other Manual Typing Errors

    The scope of this particular issue goes beyond auto-complete functionality.  Perhaps you lack that functionality in your environment, or perhaps you simply don't use it much.  In that case, you'll be hand typing your code.

    Now, imagine hand typing a call to the close method above.  Do you instinctively type "Colse" or do you instinctively type "Close"?  So what do you think will happen?

    You'll expect the call to be Close and you'll type that.  Then, you'll stare in disbelief for a moment at the compiler message.  You'll probably do a clean and rebuild.  You'll stare again for a while and squint.  Then, finally, you'll smack your forehead, realize the problem, and silently berate the person who misspelled the method name.

    Again, the impact remains the same.  Most likely this creates only friction and annoyance.  Every now and then, it may trigger a thoroughly incorrect use of a library or API.

    Anchoring Effect

    Moving away from the theme of confusion when using a declared member, consider the declaration itself.  During the use of a variable/method/class/etc, you must spell it right before the compiler allows you to proceed (assuming a strongly typed language).  With the original declaration, however, you have the freedom to spell things wrong to your heart's content.  When you do this, the original copy holds the error.

    That first misspelling allows for easy correction.  Same goes when you've used it only a time or two.  But as usage grows and spreads throughout the codebase, the fix becomes more and more of a chore.  Before long (and without easy refactoring tools), the chore becomes more than anyone feels like tackling, and the error calcifies in place.

    Your unaddressed spelling mistake today makes fixes more difficult tomorrow.

    Comprehension Confusion

    Let's switch gears again and consider the case of a maintenance programmer reading for comprehension.  After all, programmers do a whole lot more reading of code than they do modification of it.  So, a casual read is a likely situation.

    Spelling errors cloud comprehension.  A simple transposition of characters or a common error, such as referring to a "dependancy," does not present an insurmountable problem.  But a truly mangled word can leave readers scratching their heads and wondering what the code actually means, almost as if you'd left some kind of brutal Hungarian notation in there.

    Taking the time to get the spelling right ensures that anyone maintaining the code will not have this difficulty.  Code is hard enough to understand, as-is, without adding unforced errors to the mix.

    The Embarrassment Factor

    And, finally, there's the embarrassment factor.  And I don't mean the embarrassment of your coworkers saying, "wow, that guy doesn't know how to spell!"  I'm talking about the embarrassment factor for the team.

    Think of new developers hiring on or transferring into the group.  They're going to take a look at the code and draw conclusions about your team.  Software developers tend to have exacting, detail-oriented minds, and they tend to notice mistakes.  Having a bunch of spelling mistakes in common words makes it appear either that the team doesn't know how to spell or that it has a sloppy approach.  Neither of those is great.

    But also keep in mind that what happens in the code doesn't always stay in the code.  Bits of the code you write might appear on team dashboards, build reports, unit test run outputs, etc.  People from outside of the team may be examining acceptance tests and the like.  And, you may have end-user documentation generated automatically using your code (i.e. if you make developer tools or APIs).  Do you really want the documentation you hand to your customers to contain embarrassing mistakes?

    It's Easy to Get Right

    At this point, I've exhausted my supply of arguments for making the case.  I've laid them out.

    But, by way of closing words, I'd like to comment on what might be the biggest shame of the whole thing.  Purging your code of spelling errors doesn't require you to be an expert speller.  It doesn't require you to copy source code into MS Word or something and run a check.  You have tools at your disposal that will do this for you, right in your IDE.  All you need to do is turn them on.

    I recommend that you do this immediately.  It's easy, unobtrusive, and offers only upside.  And not only will you excise spelling mistakes from your code -- you'll also prevent that annoying kid in the front of the class from bothering you about stuff you don't have time for.

    Learn more about GhostDoc's source code spell checker and eliminate embarrassing typos in your apps and documentation before you ship them.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Wednesday, 14 September 2016 07:06:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 24 August 2016

    In the world of programming, 15 years or so of professional experience makes me a grizzled veteran.  That certainly does not hold for the work force in general, but youth dominates our industry via the absolute explosion of demand for new programmers.  Given the tendency of developers to move around between projects and companies, 15 years have shown me a great deal of variety.

    Perhaps nothing has exemplified this variety more than the code review.  I've participated in code reviews that were grueling, depressing marathons.  On the flip side, I've participated in ones where I learned things that would prove valuable to my career.  And I've seen just about everything in between.

    Our industry has come to accept that peer review works.  In the book Code Complete, author Steve McConnell cites it, in some circumstances, as the single most effective technique for avoiding defects.  And, of course, it helps with knowledge transfer and learning.  But here's the rub -- implemented poorly, it can also do a lot of harm.

    Today, I'd like to make the case for the automated code review.  Let me be clear.  I do not view this as a replacement for any manual code review, but as a supplement and another tool in the tool chest.  But I will say that automated code review carries less risk than its manual counterpart of having negative consequences.

    The Politics

    I mentioned extremely productive code reviews.  For me, this occurred when working on a team with those I considered friends.  I solicited opinions, got earnest feedback, and learned.  It felt like a group of people working to get better, and that seemed to have no downside.

    But I've seen the opposite, too.  I've worked in environments where the air seemed politically charged and competitive.  Code reviews became religious wars, turf battles, and arguments over minutiae.  Morale dipped, and some people went out of their way to find ways not to participate.  Clearly no one would view this as a productive situation.

    With automated code review, no politics exist.  Your review tool is, of course, incapable of playing politics.  It simply carries out its mission on your behalf.  Automating parts of the code review process -- especially something relatively arbitrary such as coding standards compliance -- can give a team many fewer opportunities to posture and bicker.

    Learning May Be Easier

    As an interpersonal activity, code review carries some social risk.  If we make a silly mistake, we worry that our peers will think less of us.  This dynamic is mitigated in environments with a high trust factor, but it exists nonetheless.  In more toxic environments, it dominates.

    Having an automated code review tool creates an opportunity for consequence-free learning.  Just as the tool plays no politics, it offers no judgment.  It just provides feedback, quietly and anonymously.

    Even in teams with a supportive dynamic, shy or nervous folks may prefer this paradigm.  I'd imagine that anyone would, to an extent.  An automated code review tool points out mistakes via a fast feedback loop and offers consequence-free opportunity to correct them and learn.

    Catching Everything

    So far I've discussed ways to cut down on politics and soothe morale, but practical concerns also bear mentioning.  An automated code review tool necessarily lacks the judgment that a human has.  But it has more thoroughness.

    If your team only performs peer review as a check, it will certainly catch mistakes and design problems.  But will it catch all of them?  Or is it possible that you might miss one possible null dereference or an empty catch block?  If you automate the process, then the answer becomes "no, it is not possible."
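
    To make that concrete, consider the kind of defect an analyzer flags every single time but a tired human reviewer can easily skim past.  The snippet below is hypothetical, purely for illustration:

        using System.IO;

        public class GreetingReader
        {
            public string ReadGreeting(string path)
            {
                string text = null;
                try
                {
                    text = File.ReadAllText(path);
                }
                catch (IOException)
                {
                    // Empty catch block: the failure disappears silently.
                }

                // Possible null dereference: text is still null if the read failed.
                return text.Trim();
            }
        }

    A reviewer at the end of a long session might miss both problems; a static analysis tool will not.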

    For the items in a code review that you can automate, you should, for the sake of thoroughness.

    Saving Resources and Effort

    Human code review requires time and resources.  The team must book a room, coordinate schedules, use a projector (presumably), and assemble in the same location.  Of course, allowing for remote, asynchronous code review mitigates this somewhat, but it can't eliminate the salary dollars spent on the activity.  However you slice it, code review represents an investment.

    In this sense, automating parts of the code review process has a straightforward business component.  Whenever possible and economical, save yourself manual labor through automation.

    When there are code quality and practice checks that can be done automatically, do them automatically.  And it might surprise you to learn just how many such things can be automated.

    Improbable as it may seem, I have sat in code reviews where people argued about whether or not a method would exhibit a runtime behavior, given certain inputs.  "Why not write a unit test with those inputs?" I asked.  Nobody benefits from humans reasoning about something the build, the test suite, the compiler, or a static analysis tool could tell them automatically.
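
    To put that in concrete terms, settling such an argument takes minutes.  Here is a minimal sketch using xUnit, with .NET's Math.Round standing in for whatever method the room happens to be arguing about:

        using System;
        using Xunit;

        public class RoundingBehaviorTests
        {
            // Settles a classic review-room debate: by default, .NET's
            // Math.Round uses banker's rounding (midpoints go to even).
            [Fact]
            public void Round_MidpointValue_RoundsToEven()
            {
                Assert.Equal(2.0, Math.Round(2.5));
            }

            [Fact]
            public void Round_MidpointValue_RoundsUpWhenAwayFromZeroRequested()
            {
                Assert.Equal(3.0, Math.Round(2.5, MidpointRounding.AwayFromZero));
            }
        }

    The test answers the question definitively, and then keeps answering it on every subsequent build.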

    Complementary Approach

    As I've mentioned throughout this post, automated code review and manual code review do not directly compete.  Humans solve some problems better than machines, and vice versa.  To achieve the best of all worlds, you need to create a complementary code review approach.

    First, understand what can be automated, or, at least, develop a good working framework for guessing.  Coding standard compliance, for instance, is a no-brainer from an automation perspective.  You do not need to pay humans to figure out whether variable names are properly cased, so let a review tool do it for you.  You can learn more about the possibilities by simply downloading and trying out review and analysis tools.
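
    To see why this is machine work, consider how mechanical a casing check really is.  The toy sketch below is not how any particular review tool works internally; it just shows the flavor of such a rule:

        using System.Text.RegularExpressions;

        public static class NamingRule
        {
            // Toy convention: local variables start lowercase, then
            // contain only letters and digits (camelCase).
            private static readonly Regex CamelCase =
                new Regex("^[a-z][a-zA-Z0-9]*$");

            public static bool IsValidLocalName(string identifier) =>
                CamelCase.IsMatch(identifier);
        }

        // NamingRule.IsValidLocalName("orderCount") -> true
        // NamingRule.IsValidLocalName("OrderCount") -> false

    There is no judgment call here, which is exactly why a human should not spend review time on it.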

    Secondly, socialize the tooling with the team so that they understand the distinction as well.  Encourage them not to waste time making a code review a matter of checking things off of a list.  Instead, manual code review should focus on architectural and practice considerations.  Could this class have fewer responsibilities?  Is the builder pattern a good fit here?  Are we concerned about too many dependencies?

    Finally, I'll offer the advice that you can adjust the balance between manual and automated review based on the team's morale.  Do they suffer from code review fatigue?  Have you noticed them sniping a lot?  If so, perhaps lean more heavily on automated review.  Otherwise, use the automated review tools simply to save time on things that can be automated.

    If you're currently not using any automated analysis tools, I cannot overstate how important it is that you check them out.  Our industry built itself entirely on the premise of automating time-consuming manual activities.  We need to eat our own dog food.

    Related resources

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates into Visual Studio to provide a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Learn more about how CodeIt.Right can help with automated code review and improve your code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Wednesday, 24 August 2016 14:06:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Thursday, 18 August 2016

    Notwithstanding some oddball calculator and hobby PC hacking, my first serious programming experience came in college.  A course called "Intro to C++" got us acquainted with arrays, loops, data structures and the like.  Given its introductory nature, this class did not pose a particularly serious challenge (that would come later).  So, with all of the maturity generally possessed by 18-year-olds, we had a bit of fun.

    I recall contests to see how much application logic we could jam into the loop conditions, and contests to see how much code could be packed onto one line.  These sorts of scavenger hunt activities obviously produced dense, illegible code.  But then, that was kind of the point.

    Beyond these silly hijinks, however, a culture of code illegibility permeated this (and, as I would learn later, other) campuses.  Professors nominally encouraged commenting code.  After all, such comments facilitated partial credit in the event of a half-baked homework submission.  But, even so, the mystique of the ingenious but inscrutable algorithm pervaded the culture, both for students and faculty.  I had occasion to see code written by various professors, and I noticed no comments that I can recall.

    Professionalism via Thoroughness

    When I graduated from college, I carried this culture with me.  But not for long.  I took a job where I spent most of my days working on driver and kernel module programming.  There, I noticed that the grizzled veterans to whom I looked up meticulously documented their code.  Above each function sat a neat, orderly comment containing information about its purpose, parameters, return values, and modification history.

    This, I realized, was how professionals conducted themselves.  I was hooked.  Fresh out of college, and looking to impress the world, I sought to distinguish myself from my undisciplined student ways.  This decision ushered in a period of many years in which I documented my code with near religious fervor.

    My habit included, obviously, the method headers that I emulated.  But on top of that, I added class headers and regularly peppered my code with line comments that offered such wisdom as "increment the loop counter until the end of the array."  (Okay, probably not that bad, but you get the idea).  I also wrote lengthy readme documents for posterity and maintenance programmers alike.  My professionalism knew no bounds.

    Clean Code as Plot Twist

    Eventually, I moved on from that job, but carried my habits with me.  I wrote different code for different purposes in different domains, but stayed consistent in my commenting diligence.  This I wore as a badge of pride.

    While I was growing in my career, I started to draw inspiration from the clean code movement.  I began to write unit tests, I practiced the SOLID principles, I watched Uncle Bob talks, made my methods small, and sought to convince others to do the same.  Through it all, I continued to write comments.

    But then something disconcerting happened.  In the clean code circles I followed and aspired to, I started to see posts like this one.  In it, the author had written extensively about comments as a code smell.

    Comments are a great example of something that seems like a Good Thing, but turn out to cause more harm than good.

    For a while, I dismissed this heresy as an exception to the general right-thinking of the clean code movement.  I ignored it.  But it nagged at me nonetheless, and eventually, I had to confront it.

    When I finally did, I realized that I had continued to double down on a practice simply because I had done it for so long.  In other words, the extensive commenting represented a ritual of diligence rather than something in which I genuinely saw value.

    Down with Comments

    Once the floodgates had opened, I did an about-face.  I completely stopped writing comments of any sort whatsoever, unless it was part of the standard of the group I was working with.

    The clean coder rationale flooded over me and made sense.  Instead of writing inline comments, make the code self-documenting.  Instead of comments in general, write unit and acceptance tests that describe the desired behaviors.  If you need to explain in English what your code does, you have failed to explain with your code.
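
    As a quick illustration of that idea -- hypothetical names, of course -- here is the same logic first narrated by a comment and then expressed by the code itself:

        public class Customer
        {
            public int OrderCount { get; set; }
            public int YearsActive { get; set; }
        }

        public static class Pricing
        {
            // Before: a comment narrates what the code fails to say.
            public static bool CheckDiscount(Customer c)
            {
                // Customer qualifies for the loyalty discount after ten
                // orders and two full years of activity.
                return c.OrderCount > 10 && c.YearsActive >= 2;
            }

            // After: the method name says it, so no explanation is needed.
            public static bool QualifiesForLoyaltyDiscount(Customer c) =>
                c.OrderCount > 10 && c.YearsActive >= 2;
        }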

    Probably most compelling of all, though, was the tendency that I'd noticed for comments to rot.  I cannot begin to estimate how many times I dutifully wrote comments about a method, only to return a year later and see that the method had been changed while the comments had not.  My once-helpful comments now lied to anyone reading them, making me look either negligent or like an idiot.  Comments represented duplication of knowledge, and duplication of knowledge did what it always does: it got out of sync.
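
    Here is the pattern in miniature, again with hypothetical code: the method evolved, the header did not, and now the comment actively misleads:

        using System.Collections.Generic;

        public class UserService
        {
            private readonly IUserRepository _repository;  // hypothetical dependency

            public UserService(IUserRepository repository) => _repository = repository;

            // Written a year ago, when this really did filter out inactive users:
            /// <summary>
            /// Returns the list of active users.
            /// </summary>
            public IList<User> GetUsers(bool includeInactive)
            {
                // The behavior changed; the summary never did, so the
                // comment now lies to every reader.
                return _repository.Find(includeInactive);
            }
        }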

    My commenting days were over.

    Best of All Worlds

    That still holds true to this day.  I do not comment my code in the traditional sense.  Instead, I write copious amounts of unit, integration and acceptance tests to demonstrate intent.  And, where necessary and valuable, I generate documentation.

    Let's not confuse documentation and commenting.  Commenting code targets maintenance programmers and team members as the intended audience.  Documenting, on the other hand, targets external consumers.  For instance, if I maintained a library at a large organization, and other teams used that library, they would be external consumers rather than team members.  In effect, they constitute customers.

    If we think of API consumers as customers, then generating examples and documentation becomes critically important.  In a sense, this activity is the equivalent of designing an intuitive interface for end-users of a GUI application.  They need to understand how to quickly and effectively make the most of what you offer.

    So if you're like me -- if you believe firmly in the tenets of the clean code movement -- understand that comments and documentation are not the same thing.  Also understand that documentation has real, business value and occupies an important role in what we do.  Documentation may take the form of actual help documents, files, or XML-doc style comments that appear in IntelliSense implementations.
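
    For instance, a standard C# XML-doc comment like the one below feeds IntelliSense and generated help files directly.  The member is hypothetical, shown only to illustrate the format:

        /// <summary>
        /// Retrieves the invoice with the specified identifier.
        /// </summary>
        /// <param name="invoiceId">The unique identifier of the invoice.</param>
        /// <returns>The matching invoice, or null when none exists.</returns>
        public Invoice GetInvoice(int invoiceId)
        {
            return _repository.FindById(invoiceId);
        }

    The compiler can emit these comments to an XML file, and documentation tools can then turn that file into help content without anyone maintaining a separate document.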

    To achieve the best of all worlds, avoid duplication.  Make publishing documentation and examples a part of your process and, better yet, automate these activities.  Your code will stay clean and maintainable and your API users will be well-informed and empowered to use your code.

    Learn more about how GhostDoc can help you simplify your XML Comments and produce and maintain quality help documentation.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Thursday, 18 August 2016 07:45:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Monday, 18 July 2016

    Version 5.2 of GhostDoc is a minor feature release for v5.0 users.  It includes:

    • Support for Visual Studio 2015 Update 3
    • Fixes for the latest ASP.NET Core projects
    • GhostDoc now treats the underscore as a delimiter to improve summary generation for underscore-delimited identifiers
    • "Use Modern URLs" Help Configuration option for declarative help documentation file naming - namespace-classname-membername.htm
    • Option to turn on/off Documentation Hints during setup
    • (Pro) (Ent) Comment Preview is now rendered using the FlatGray theme
    • Plenty of improvements and bug fixes

    For the complete list of changes, please see What's New in GhostDoc v5

    For an overview of the v5.0 features, visit Overview of GhostDoc v5.0 Features

    This version is a required update for Visual Studio 2015 Update 3 users.

    Download the new build at http://submain.com/download/ghostdoc/

    posted on Monday, 18 July 2016 18:07:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 04 May 2016

    The Beta for CodeIt.Right v3 has arrived – the new major version of our automated code review and code quality analysis product. Here are the new version highlights:

    • Official support for VS2015 Update 2 and ASP.NET 5/ASP.NET Core 1.0 solutions
    • New Review Code commands:
      • only opened files
      • only checked out files
      • only files modified after a specific date
    • Improved Profile Editor with advanced rule search and filtering
    • Improved look and feel for Violations Report and Editor violation markers
    • New rules
    • Setting to keep the OnDemand and Instant Review profiles in sync
    • New Jenkins integration plugin
    • Batch correction is now turned off by default
    • Almost every CodeIt.Right action can now be assigned a keyboard shortcut
    • Preview of the new Dashboard feature

    For the complete and detailed list of the v3.0 changes, see What's New in CodeIt.Right v3.0

    To give the v3.0 Beta a try, download it here - http://submain.com/download/codeit.right/beta/

    Please Note: while our early adopters report that the v3.0 Beta has been very stable for them, all the usual beta software caveats still apply.

     

    New Review Code commands

    [Screenshot: cir3-baseline-filtering]

    We have renamed the Start Analysis menu to Review Code – it is still the same feature; the new name simply highlights the automated code review nature of the product.  The new commands are:

    • Analyze Open Files command - analyze only the files opened in Visual Studio tabs
    • Analyze Checked Out Files command - analyze only files that are checked out from source control
    • Analyze Modified After command - analyze only files that have been modified after a specific date

    Known Beta issue – pressing Update only updates the code review criteria; you still need to run the Review Code command manually.  In the release version, code review will run as soon as Update is pressed.

     

    [Screenshot: cir3-profile-filter]

    Improved Profile Editor

    The Profile Editor now features:

    • Advanced rule filtering by rule id, title, name, severity, scope, target, and programming language
    • The ability to quickly show only active, only inactive, or all rules in the profile
    • Totals for the profile rules - total, active, and filtered
    • Improved support for adding rules with multiple categories

     

    Dashboard Preview

    While this is not its final form, an early preview of the Dashboard feature ships with the Beta to give you a rough idea of what we are after – a code quality dashboard view that you can customize to your needs.

     

    Feedback

    We would love to hear your feedback on the new features! Please email it to us at support@submain.com or post in the CodeIt.Right v3 Beta Forum.


    posted on Wednesday, 04 May 2016 06:31:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Friday, 05 February 2016

    Version 5.1 of GhostDoc is a maintenance release for v5.0 users; it includes minor enhancements and a number of important bug fixes.  Many of the fixes are relevant to the Visual Studio 2015 environment, so while everyone will benefit from this update, it is highly recommended for Visual Studio 2015 users.

    For the complete list of changes, please see http://support.submain.com/kb/a42/whats-new-in-ghostdoc-v5.aspx

    For an overview of the v5.0 features, visit http://submain.com/blog/ReleasedGhostDocV50.aspx

    posted on Friday, 05 February 2016 19:33:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Monday, 23 November 2015
    Note to GhostDoc Pro v4 users: The v4 licenses won't work with v5.  We have sent out the v5 license codes to users with License Protection and an active Software Assurance subscription.  If you have not received your new license or have misplaced it, you can retrieve it on the My Account page.  See more information at the bottom of this post.

    Both the Pro and Enterprise editions of GhostDoc v5 introduce Documentation Quality hints in the Visual Studio editor, along with Documentation Management assistance: find auto-generated comments and edit or remove the bulk-created docs; identify and fix comments that are missing, out of sync, or that can be copied from a base class; and mark generated XML comments as auto-generated and "to be edited".  The v5 release also includes multiple Help documentation themes and layouts to choose from.

    The free version of GhostDoc has been re-branded as GhostDoc Community Edition; it adds general improvements, limited generation of CHM help documentation, and the means to find auto-generated comments.

    [Screenshot: GD_v5_new_commands]

    The new menu commands

    • Documentation Quality Hints in Visual Studio editor
    • Documentation Maintenance - find auto-generated comments and edit or remove the bulk-created docs
    • Documentation Maintenance - identify and fix comments that are missing, out of sync, or that can be copied from a base class
    • Theme support for generated help documentation and new themes - Flat Gray and Flat Main
    • Official Visual Studio 2015 support
    • Options to add Auto-generated doc and TODO 'Edit' attributes
    • Option to have the default summary text focused and selected when using the Document This command - lets you edit or override the summary quickly
    • Exclude from Documentation action – marks a member with a tag to exclude it from the help documentation
    • Hide/Show Comments feature – an easy way to expand/collapse documentation comments to minimize the XML Comments footprint in the Visual Studio code editor
    • New Summary Override table in Options - configure predefined summaries for specific member or type names instead of auto-generated ones
    • A basic Build Documentation feature is now available in the Community Edition of GhostDoc – while quite limited and watermarked, it allows you to produce simple CHM help documentation for personal use without paying for a commercial version

    For the detailed list of v5.0 changes, see What's New in GhostDoc v5.

    To see new features by product edition see this page - http://submain.com/ghostdoc/editions/


    Documentation Quality Hints

    This new feature provides real-time visual hints in the Visual Studio editor window to highlight members that have documentation issues requiring attention.

    [Screenshot: GD_v5_maint_hints]

    The following documentation hint actions, included with this release, make it easy to maintain documentation quality:

    [Screenshot: GD_v5_maint_hints_list]


    Documentation Maintenance

    This feature will help you identify missing documentation, find auto-generated XML comments, maintain your documentation, and keep it up to date.  Once these are found, GhostDoc provides the tools to edit or remove the bulk-created docs and to add missing or fix dated documentation – one by one or as a batch.  You can fine-tune the search criteria and use your own template library if yours differs from the built-in one.

    • Find auto-generated docs and edit or remove them
    • Find and fix members that are missing documentation
    • Discover members that have parameters, return types, and type parameters out of sync with the existing XML comments and fix the comments
    • Find members that can have XML docs copied from the base class
    • Find documentation that requires editing

    [Screenshot: GD_v5_maint_autogen]

    The Community Edition only allows finding auto-generated documentation and does not support batch actions – only one action at a time.


    Help Documentation Themes

    In v5 we are introducing theme support for the generated help documentation and including two new themes; the old help doc view is preserved as the Classic theme.  You can preview the new themes here - Flat Gray (default) and Flat Main.

    Enterprise Edition users can modify the existing themes or create and deploy their own help documentation themes – now easier than ever!

    The Community Edition theme selection is limited to one – Classic.

    [Screenshot: GD_v5_help_sample]


    Auto-generated doc and TODO 'Edit' attributes

    The option to add a tag to the XML comment provides an explicit flag that the comment has been generated automatically.

    The option to add a TODO comment, "TODO Edit XML Comment Template for {member name}", in turn adds a TODO task to the Visual Studio Task List –> Comments as a reminder that the auto-generated comment requires editing.
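
    As a rough illustration -- a hypothetical member, and the exact generated text may differ slightly -- the flagged comment and its TODO reminder might look like this in the editor:

        /// <summary>
        /// Gets the invoice.
        /// </summary>
        // TODO Edit XML Comment Template for GetInvoice
        public Invoice GetInvoice(int invoiceId)
        {
            return _repository.FindById(invoiceId);
        }

    The TODO line then surfaces in the Visual Studio Task List until someone replaces the boilerplate summary with a real one.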

    [Screenshot: GD_v5_autogen_todo]

    Both flags can be used as additional criteria for the documentation quality hints and documentation management “Find auto-generated Documentation” feature. When generating help documentation these flags are also accounted for – the flagged members can be included, ignored or highlighted in the final docs.


    Summary Override

    The Summary Override table allows you to configure predefined summaries for specific member or type names to be used instead of the auto-generated ones.  We ship some predefined summary overrides, and you are welcome to add your own.  If you find a summary override that the GhostDoc user community can benefit from, please submit it to us for review and possible inclusion.

    [Screenshot: GD_v5_summary_override]

     

    How do I try it?

    Download the v5.0 at http://submain.com/download/ghostdoc/


    Feedback is what keeps us going!

    Let us know what you think of the new version here - http://submain.com/support/feedback/


    Note to the GhostDoc Pro v4 users: The v4.x license codes won't work with the v5.0.  For users with License Protection and an active Software Assurance subscription, we have sent out the v5.x license codes.  If you have not received your new license or have misplaced it, you can retrieve it on the My Account page.  Users without License Protection or with an expired Software Assurance subscription will need to purchase the new version - currently we are not offering an upgrade path other than the Software Assurance subscription.  For information about upgrade protection, see our Software Assurance and Support - Renewal / Reinstatement Terms

    posted on Monday, 23 November 2015 20:02:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     

     
         
     