 Thursday, 27 April 2017

Many of us have a natural tendency to let little things pile up.  This gives rise to the notion of the so-called spring cleaning.  The weather turns warm and going outside becomes reasonable, so we take the opportunity to do some kind of deep cleaning.

Of course, this may not apply to you.  Perhaps you keep your house impeccable at all times, or maybe you simply have a cleaning service.  But I'll bet that, in some part of your life or another, you put little things off until they become bigger things.  Your cruft may not involve dusty shelves and pockets of house clutter, but it probably exists somewhere.

Maybe it exists in your professional life in some capacity.  Perhaps you have a string of half written blog posts, or your inbox has more than a thousand messages.  And, if you examine things honestly, you almost certainly have some item that has been skulking around your to-do list for months.  Somewhere, we all have items that could use some tidying, cognitive or physical.

With that in mind, I'd like to talk about your code review process.  Have you been executing it like clockwork for months or years?  Perhaps it has become too much like clockwork.  Turn a critical eye to it, and you might realize elements of it have become stale or superfluous.  So let's take a look at how you can apply a spring cleaning to your code review process.

Beware The Cargo Cult

During World War II, the Allies set up a temporary air base on an island in the Pacific Ocean.  The people living on the island observed the ground controllers waving at inbound planes to help them land.  Supplies then followed.  Not understanding the purpose of this ritual or the mechanics of airplanes, the locals learned that making these motions brought planes with supplies.  So after the Allies left, they mimicked the behavior, hoping for additional resources.  This execution of ritual without understanding earned the designation "cargo cult."

In the world of software development, cargo cult programming involves adding code without understanding what it does.  You added it once, good things happened, so now you always add it.  You can think of this as a special case of programming by coincidence.  And it's something you should avoid.

But cargo cult mentality can crop up in a code review as well.  Do you find your team calling out 'issues' during the review, but, if pressed, nobody could articulate why those are issues?  If so, you have a cargo cult practice, and you should cull it.

Going Over the Same Stuff Repetitively

Let's say that your team performs code review on a regular basis.  Does this involve an ongoing, constant uplift?  In other words, do you find learning spreads among the team, and you collectively sharpen your game and constantly improve?  Or do you find that the team calls out the same old issues again and again?

If every code review involves noticing a method parameter dereference and saying, "you'll get an exception if someone passes in null," then you have stagnation.  Think of this as a team smell.  Why do people keep making the same mistake over and over again?  Why haven't you somehow operationalized a remedy?  And, couldn't someone have automated this?
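To put a face on that perennial comment, it usually amounts to asking for a guard clause along these lines (ProcessOrder and Order are made-up names, purely for illustration):

public void ProcessOrder(Order order)
{
    // Fail fast with a clear message instead of a NullReferenceException later.
    if (order == null)
    {
        throw new ArgumentNullException(nameof(order));
    }

    // ... proceed, knowing order is not null.
}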

Keep an eye out for this sort of thing.  If you notice it, pause and do some root cause analysis.  Don't just fix the issue itself -- fix it so the issue stops happening.

Inconsistency in Reviews

Another common source of woe arises from inconsistency in the code review process.  Not only does this result in potential issues within the code, but it also threatens to demoralize members of the team.  Imagine attending a review and having someone admonish you to add logging calls to all of your methods.  But then, during the next review, someone gives you a hard time about logging too much.  Enough of that nonsense and team members start updating their resumes rather than their methods.

And inconsistency can mean more than just different review styles from different people (or the same person on different days, varying by mood).  You might find that your team's behavior and suggestions during review have become out of sync with a formal document like the team's coding standard.  Whatever the source, inconsistency creates drag for your team.

Take the opportunity of a metaphorical spring cleaning to address this potential pitfall.  Round up the team members and make sure they all have the same philosophies at code review time.  And then, make sure that unified philosophy lines up with anything documented.

Cut Out the Nitpicking

I've yet to see an organization where interpersonal code review didn't become at least a little political.  That makes sense, of course.  In essence, you're talking about an activity where people get together and offer (hopefully) constructive professional criticism.

Because of the politics, personal code review can degenerate and lead to infighting in numerous ways.  Chief among these, I've found, is excessive nitpicking.  If team members perceive the activity as a never ending string of officious criticism, they start to hate coming to work.

On top of that, people can only internalize so many lessons in a sitting.  After a while, they start to tune out or get tired.  So make the takeaways from the code review count.  Even if they haven't gotten every little thing just so, pick your battles and focus on big things.  And I file this under spring cleaning since it generally requires a concerted mental adjustment and since it will clear some of the cruft out of your review.

Automate, Automate, Automate

I will conclude by offering what I consider the most important item for any code review spring cleaning.  If the other suggestions involved metaphorical shelf dusting and shower scrubbing, think of this one as completely cleaning out an entire room that you had loaded with junk.

So much of the time teams spend in code review seems to trend toward picking at nits.  But even when it involves more substantive considerations, many of these considerations could be automatically detected.  The team wastes precious time peering at the code and playing static analyzer.  Stop this!

Spruce up your review process by automating as much of it as humanly possible.  You should constantly ask yourself if the issue you're discussing could be automatically detected (and fixed).  If you think it could, then do it.  And, as part of your spring cleaning, knock out as many of these as possible.

Save human-centric code review for focus on design considerations, architectural discussions, and big picture issues.  Don't bog yourself down in cruft.  You'll all feel a lot cleaner and happier for it, just as you would after any spring cleaning.

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio for a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Related resources

Learn more about how CodeIt.Right can help you automate code reviews and improve the quality of your code.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Thursday, 27 April 2017 07:10:00 (Pacific Standard Time, UTC-08:00)
 Wednesday, 19 April 2017

Today, I'll do another installment of the CodeIt.Right Rules, Explained series.  This is post number five in the series.  And, as always, I'll start off by citing my two personal rules about static analysis guidance, along with the explanation for them.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

It may seem as though I'm playing rhetorical games here.  After all, I could simply say, "learn the reasoning behind all suggested fixes."  But I want to underscore the decision you face when confronted with static analysis feedback.  In all cases, you must actively choose to ignore the feedback or address it.  And for both options, you need to understand the logic behind the suggestion.

In that spirit, I'm going to offer up explanations for three more CodeIt.Right rules today.

Mark ISerializable Types with "Serializable" Attribute

If you run across this rule, you might do so while writing an exception class.  For example, the following small bit of code in a project of mine triggers it.

public class GithubQueryingException : Exception
{
    public GithubQueryingException(string message, Exception ex) : base(message, ex)
    {
            
    }
}

It seems pretty innocuous, right?  Well, let's take a look at what went wrong.

The rule actually describes its own solution pretty well.  Slap a serializable attribute on this exception class and make the tool happy.  But who cares?  Why does it matter if you don't mark the exception as serializable?

To understand the issue, you need awareness of a concept called "application domains" within the .NET framework.  Going into much detail about this would take us beyond the scope of the post.  But suffice it to say, "application domains provide an isolation boundary for security, reliability, and versioning, and for unloading assemblies."  Think two separate processes running and collaborating.

If some external process will call your code, it won't access and deal with your objects the same way that your own code will.  Instead, it needs to communicate by serializing the object and passing it along as if over some remote service call.  In the case of the exception above, it lacks the attribute marking it explicitly as serializable, in spite of implementing that interface.  So bad things will happen at runtime.  And this warning exists to give you the heads up.

If you'll only ever handle this exception within the same app domain, it won't cause you any heartburn.  But, then again, neither will adding an attribute to your class.
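For completeness, here's roughly what the fix looks like for the class above.  (A fully fleshed-out serializable exception would also typically add the protected serialization constructor, but the attribute alone satisfies the rule.)

[Serializable]
public class GithubQueryingException : Exception
{
    public GithubQueryingException(string message, Exception ex) : base(message, ex)
    {
    }
}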

Do Not Handle Non-CLS-Compliant Exceptions

Have you ever written code that looks something like this?

try
{
    DoSomething();
    return true;
}
catch
{
    return false;
}

In essence, you want to take a stab at doing something and return true if it goes well and false if anything goes wrong.  So you write code that looks something like the above.

If you have, you'll run afoul of the CodeIt.Right rule, "do not handle non-CLS-compliant exceptions."  You might find this confusing at first blush, particularly if you code exclusively in C# or Visual Basic.  This would confuse you because, from those languages, you cannot throw exceptions that fail to comply with the Common Language Specification (CLS).  All exceptions you throw inherit from the Exception class and thus conform.

However, in the case of native code written in, say, C++, you can actually throw non-CLS-compliant exceptions.  And this code will catch them because you've said "catch anything that comes my way."  This earns you a warning.

The CodeIt.Right warning here resembles one telling you not to catch the general exception type.  You want to be intentional about what exceptions you trap, rather than casting an overly wide net.  You can fix this easily enough by specifying the actual exception you anticipate might occur.
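As a quick sketch, the compliant version of the earlier snippet might look like this, with InvalidOperationException standing in for whatever exception you actually anticipate from DoSomething:

try
{
    DoSomething();
    return true;
}
catch (InvalidOperationException)
{
    // Trap only the failure we know how to interpret; let anything else bubble up.
    return false;
}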

Async Methods Should Return Task or Task<T>

As of .NET Framework 4.5, you can use the async keyword to allow invocation of an asynchronous operation.  For example, imagine that you had a desktop GUI app and you wanted to populate a form with data.  But imagine that acquiring said data involved doing an expensive and time consuming call over a network.

With synchronous programming, the call out to the network would block, meaning that everything else would grind to a halt to wait on the network call... including the GUI's responsiveness.  That makes for a terrible user experience.  Of course, we solved this problem long before the existence of the async keyword.  But we used laborious threading solutions to do that, whereas the async keyword makes this more intuitive.

Roughly speaking, designating a method as "async" indicates that you can dispatch it to conduct its business while you move on to do other things.  To accomplish this, the method synchronously returns something called a Task, which acts as a placeholder and a promise of sorts.  The calling method keeps a reference to the Task and can use it to get at the result of the method, once the asynchronous operation completes.

But that only works if you return a Task or Task<T>.  If, instead, you create a void method and label it asynchronous, you have no means to get at it later and no means to explicitly wait on it.  There's a good chance this isn't what you want to do, and CodeIt.Right lets you know that.  In the case of an event handler, you might actually want to do this, but better safe than sorry.  You can fix the violation by returning a non-parameterized Task rather than declaring the method void.
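To illustrate, here's a minimal before-and-after sketch (the method names are invented, and the two versions are alternatives rather than overloads that could coexist):

// Triggers the warning: callers get no Task back, so they cannot await
// the operation or observe its exceptions.
public async void SaveCustomerAsync()
{
    await PersistChangesAsync();
}

// Compliant: returning Task hands callers something to await.
public async Task SaveCustomerAsync()
{
    await PersistChangesAsync();
}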

Until Next Time

This post covered some interesting language and framework features.  We looked at the effect of crossing app domain boundaries and what that does to the objects whose structure you can easily take for granted.  Then we went off the beaten path a little by looking at something unexpected that can happen at the intersection of managed and native code.  And, finally, we delved into asynchronous programming a bit.

As we wander through some of these relatively far-reaching concerns, it's nice to see that CodeIt.Right helps us keep track.  A good analysis tool not only helps you catch mistakes, but it also helps you expand your understanding of the language and framework.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Wednesday, 19 April 2017 12:35:00 (Pacific Standard Time, UTC-08:00)
 Wednesday, 12 April 2017

I like variety.  In pursuit of this preference, I spend some time management consulting with enterprise clients and some time volunteering for "office hours" at a startup incubator.  Generally, this amounts to serving as "rent-a-CTO" for startup founders in half hour blocks.  This provides me with the spice of life, I guess.

As disparate as these advice forums might seem, they often share a common theme.  Both in the impressive enterprise buildings and the startup incubator conference rooms, people ask me about offshoring application development.  To go overseas or not to go overseas?  That, quite frequently, is the question (posed to me).

I find this pretty difficult to answer absent additional information.  In any context, people asking this bake two core assumptions into their question.  What they really want to say would sound more like this.  "Will I suffer for the choice to sacrifice quality to save money?"

They assume first that cheaper offshore work means lower quality.  And then they assume that you can trade quality for cost as if adjusting the volume dial in your car.  If only life worked this simply.

What You Know When You Offshore

Before going further, let's back up a bit.  I want to talk about what you actually know when you make the decision to pay overseas firms a lower rate to build software.  But first, let's dispel these assumptions that nobody can really justify.

Understand something unequivocally.  You cannot simply exchange units of "quality" for currency.  If you ask me to build you a web app, and I tell you that I'll do it for $30,000, you can't simply say, "I'll give you $15,000 to build one that's half as good."  I mean, you could say that.  But you'd be saying something absurd, and you know it.  You can reasonably adjust cost by cutting scope, but not by assuming that "half as good" translates to "half the price."

Also, you need to understand that "cheap overseas labor" doesn't necessarily mean lower quality.  Frequently it does, but not always, and not even reliably enough that you can just bank on it.

So what do you know when you contract with an inexpensive, overseas provider?  Not a lot, actually.  But you do know that your partner will work with you mainly remotely, across a great deal of distance, and with significant communication obstacles.  You will not collaborate as closely with them as you would with an employee or a local vendor.

The (Non) Locality Conundrum

So you have a limited budget, and you go shopping for offshore app dev.  You go in knowing that you may deal with less skilled developers.  But honestly, most people dramatically overestimate the importance of that concern.

What tends to torpedo these projects lies more in the communication gulf and less in the skill.  You give them wireframes and vague instructions, and they come back with what they think you want.  They explain their deliveries with passable English in emails sent at 2:30 AM your time.  This collaboration proves taxing for both parties, so you both avoid it, for the most part.  You thus mutually collude to raise the stakes with each passing week.

Disaster then strikes at the end.  In a big bang, they deliver what they think you want, and it doesn't fit your expectations.  Or it fits your expectations, but you can't build on top of it.  You may later, using some revisionist history, consider this a matter of "software quality" but that misses the point.

Your problem really lies in the non-locality, both geographically and more philosophically.

When Software Projects Work

Software projects work well with a tight feedback loop.  The entire agile movement rests firmly atop this premise.  Stop shipping software once per year, and start shipping it once per week.  See what the customer/stakeholder thinks and course correct before it's too late.  This helps facilitate success far more than the vague notion of quality.

The locality issue detracts from the willingness to collaborate.  It encourages you to work in silos and save feedback for a later date.  It invites disaster.

To avoid this, you need to figure out a way to remove unknowns from the equation.  You need to know what your partner is doing from week to week.  And you need to know the nature of what they're building.  Have they assembled throwaway, prototype code?  Or do you have the foundation of the future?

Getting a Glimpse

At this point, the paths of enterprises and startups diverge.  The enterprise has legions of software developers and can easily afford to fly to Eastern Europe or Southeast Asia or wherever the work gets done.  They want to leverage economies of scale to save money as a matter of policy.

The startup or small business, on the other hand, lacks these resources.  They can't just ask their legion of developers to review the offshore work more frequently.  And they certainly can't book a few business class tickets over there to check it out for themselves.  They need to get more creative.

In fact, some of the startup founders I counsel have a pretty bleak outlook here.  They have no one in their organization in a position to review code at all.  So they rely on an offshore partner for budget reasons, and they rely on that partner as expert adviser and service provider.  They put all of their eggs in that vendor's basket.  And they come to me asking, "have I made a good choice?"

They need a glimpse into what these offshore folks are doing, and one that they can understand.

Leveraging Automated Code Review

While you can't address the nebulous, subjective concept of "quality" wholesale, you can ascertain properties of code.  And you can even do it without a great deal of technical knowledge, yourself.  You could simply take their source code and run an automated code review on it.

You're probably thinking that this seems a bit reductionist.  Make no mistake -- it's quite reductionist.  But it also beats no feedback at all.

You could approach this by running the review on each incremental delivery.  Ask them to explain instances where it runs afoul of the tool.  Then keep doing it to see if they improve.  Or, you could ask them to incorporate the tool into their own process and make delivering issue-free code a part of the contract.  Neither of these things guarantees a successful result.  But at least it offers you something -- anything -- to help you evaluate the work, short of in-depth knowledge and study yourself.

Recall what I said earlier about how enterprises regard quality.  It's not as much about intrinsic properties, nor is it inversely proportional to cost.  Instead, quality shows itself in the presence of a tight feedback loop and the ability to sustain adding more and more capabilities.  With limited time and knowledge, automated code review gives you a way to tighten that feedback loop and align expectations.  It ensures at least some oversight, and it aligns the work they do with what you might expect from firms that know their business.

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio for a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Related resources

Learn more about how CodeIt.Right can help you automate code reviews and ensure the quality of delivered code.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Wednesday, 12 April 2017 12:06:00 (Pacific Standard Time, UTC-08:00)
 Wednesday, 05 April 2017

I can almost sense the indignation from some of you.  You read the title and then began to seethe a little.  Then you clicked the link to see what kind of sophistry awaited you.  "There is no substitute for peer review."

Relax.  I agree with you.  In fact, I think that any robust review process should include a healthy amount of human and automated review.  And, of course, you also need your test pyramid, integration and deployment strategies, and the whole nine yards.  Having a truly mature software shop takes a great deal of work and involves standing on the shoulders of giants.  So, please, give me a little latitude with the premise of the post.

Today I want to talk about how one could replace manual code review with automated code review only, should the need arise.

Why Would The Need for This Arise?

You might struggle to imagine why this would ever prove necessary.  Those of you with many years logged in the enterprise in particular probably find this puzzling.  But you might find manual code inspection axed from your process for any number of reasons other than, "we've decided we don't value the activity."

First and most egregiously, a team's manager might come along with an eye toward cost savings.  "I need you to spend less time reading code and more time writing it!"  In that case, you'll need to move away from the practice, and going toward automation beats abandoning it altogether.  Of course, if that happens, I also recommend dusting off your resume.  In the first place, you have a penny-wise, pound-foolish manager.  And, secondly, management shouldn't micromanage you at this level.  Figuring out how to deliver good software should be your responsibility.

But let's consider less unfortunate situations.  Perhaps you currently work in a team of 2, and number 2 just handed in her two weeks’ notice.  Even if your organization back-fills your erstwhile teammate, you have some time before the newbie can meaningfully review your code.  Or, perhaps you work for a larger team, but everyone gradually becomes so busy and fragmented in responsibility as not to have the time for much manual peer review.

In my travels, this last case actually happens pretty frequently.  And then you have to choose: abandon the practice altogether, or move toward an automated version.  Pretty easy choice, if you ask me.

First, Take Inventory

Assuming no one has yet forced your hand, pause to take inventory.  What currently happens as part of your review process?  What sorts of feedback do you get?

If your reviews happen in some kind of asynchronous format, then great.  This should prove easy enough to capture since you'll need only to go through your emails or issues list or whatever you use.  Do you have in-person reviews, but chronicle the findings?  Just as good for our purposes here.

But if these reviews happen in more ad hoc fashion, then you have some work to do.  Start documenting the feedback and resultant action items.  After all, in order to create a suitable replacement strategy for an activity, you must first thoroughly understand that activity.

Automate the Automate-able

With your list in place, you can now start figuring out how to replace your expiring manual process.  First up, identify the things you can easily automate that come up during reviews.

This will include cosmetic concerns.  Does your code comply with the team standard?  Does it comply with typical styling for your tech stack?  Have you consistently cased and named things?  If that stuff comes up during your reviews, you should probably automate it anyway and not waste time discussing it.  But, going forward, you will need to automate it.

But you should also look for anything that you can leverage automation to catch.  Do you talk about methods getting too long or about not checking parameters for null before dereferencing?  You can also automate things like that.  How about compliance with non-cosmetic best practices?  Automate that as well with an automated code review tool.

And spend some time researching what you can automate.  Even if no analyzer or review tool catches something out of the box, you can often customize them to catch it (or write your own thing, if needed).

Checks and Balances for Conceptual Items

Now, we move onto the more difficult things.  "This method seems pretty unreadable."  "Couldn't you use the builder pattern here?"  I'm talking here about the sorts of things for which manual code review really shines and serves its purpose.  You'll have a harder time replacing this.  But that doesn't mean you can't do something.

First, I recommend that you audit the review history you've been compiling.  See what comes up the most frequently, and make a list of those things.  And group them conceptually.  If you see a lot of "couldn't you use Builder" and "couldn't you use Factory Method," then generalize to "couldn't you use a design pattern?"

Once you have this list, if nothing else, you can use it as a checklist for yourself each time you commit code.  But you might also see whether you can conceive of some sort of automation.  Or maybe you just resolve to revisit the codebase periodically, with a critical eye toward these sorts of questions.

You need to see if you can replace the human insights of a peer.  Admittedly, this presents a serious challenge.  But get creative and see what you can come up with.

Adjust Your Approach

The final plank I'll mention involves changing the way you approach development and review in general.  For whatever reason, human review of your work has become a scarce resource.  You need to adjust accordingly.

Picking up a good bit of automated review makes up part of this adjustment, as does creating a checklist to apply to yourself.  But you need to go further as well.  Take an approach wherein you look to become more self-sufficient for the littler things and save your scarce access to human reviewers for the truly weighty architectural decisions.  When these come up, enlist the help of someone else in your organization or even the internet.

On top of that, look opportunistically for ways to catch your own mistakes and improve.  Everyone has to learn from their mistakes, but with less margin for error, you need to learn from them and automate their prevention going forward.  Again, automated review helps here, but you'll need to get creative.

Having peer review yanked out from under you undeniably presents a challenge.  Luckily, however, you have more tools than ever at your disposal to pick up the slack.  Make use of them.  When you find yourself in a situation with the peer review safety net restored, you'll be an even better programmer for it.

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio for a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Related resources

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Wednesday, 05 April 2017 07:17:00 (Pacific Standard Time, UTC-08:00)
 Tuesday, 28 March 2017

"You never concatenate strings.  Instead, always use a StringBuilder."

I feel pretty confident that any C# developer that has ever worked in a group has heard this admonition at least once.  This represents one of those bits of developer wisdom that the world expects you to just memorize.  Over the course of your career, these add up.  And once they do, grizzled veterans engage in a sort of comparative jousting for rank.  The internet encourages them and eggs them on.

"How can you call yourself a senior C# developer and not know how to serialize objects to XML?!"

With two evenly matched veterans swinging language swords at one another, this volley may continue for a while.  Eventually, though, one falters and pecking order is established.

Static Analyzers to the Rescue

I must confess.  I tend to do horribly at this sort of thing.  Despite having relatively good memory retention ability in theory, I have a critical Achilles Heel in this regard.  Specifically, I can only retain information that interests me.  And building up a massive arsenal of programming language "how-could-yous" for dueling purposes just doesn't interest me.  It doesn't solve any problem that I have.

And, really, why should it?  Early in my career, I figured out the joy of static analyzers in pretty short order.  Just as the ubiquity of search engines means I don't need to memorize algorithms, the presence of static analyzers saves me from cognitively carrying around giant checklists of programming sins to avoid.  I rejoiced in this discovery.  Suddenly, I could solve interesting problems and trust the equivalent of programmer spell check to take care of the boring stuff.

Oh, don't get me wrong.  After the analyzers slapped me, I internalized the lessons.  But I never bothered to go out of my way to do so.  I learned only in response to an actual, immediate problem.  "I don't like seeing warnings, so let me figure out the issue and subsequently avoid it."

My Coding Provincialism

This general modus operandi caused me to respond predictably when I first encountered the idea of globalization in language.  "Wait, so this helps when?  If someone theoretically deploys code to some other country?  And, then, they might see dates printed in a way that seems strange to them?  Huh."

For many years, this solved no actual problem that I had.  Early in my career, I wrote software that people deployed in the US.  Much of it had no connectivity functionality.  Heck, a lot of it didn't even have a user interface.  Worst case, I might later have to realize that some log file's time stamps happened in Mountain Time or something.

Globalization solved no problem that I had.  So when I heard rumblings about the "best practice," I generally paid no heed.  And, truth be told, nobody suffered.  With the software I wrote for many years, this would have constituted a premature optimization.

But it nevertheless instilled in me a provincialism regarding code.

A Dose of Reality

I've spent my career as a polyglot.  And so at one point, I switched jobs, and it took me from writing Java-based web apps to a desktop app using C# and WPF.  This WPF app happened to have worldwide distribution.  And, when I say worldwide, I mean just about every country in the world.

Suddenly, globalization went from "premature optimization" to "development table stakes."  And the learning curve became steep.  We didn't just need to account for the fact that people might want to see dates where the day, rather than the month, came first.  The GUI needed translation into dozens of languages, selectable as a menu setting.  This included languages with text read from right to left.

How did I deal with this?  At the time, I don't recall having the benefit of a static analyzer that helped in this regard.  FxCop may have provided some relief, but I don't recall one way or the other.  Instead, I found myself needing to study and laboriously create mental checklists.  This "best practice" knowledge hoarding suddenly solved an immediate problem.  So, I did it.

CodeIt.Right's Globalization Features

Years have passed since then.  I've had several jobs since then, and, as a solo consultant, I've had dozens of clients and gigs.  I've lost my once encyclopedic knowledge of globalization concerns.  That happened because -- you guessed it -- it no longer solves an immediate problem that I have.

Oh, I'd probably do better with it now than I did in the past.  But I'd still have to re-familiarize myself with the particulars and study up once again in order to get it right, should the need arise.  Except, these days, I could enlist some help.  CodeIt.Right, installed on my machine, will give me the heads up I didn't have those years ago.  It has a number of globalization rules built right in.  Specifically, it will remind you about the following concerns.  I'll just list them here, saving detailed explanations for a future "CodeIt.Right Rules, Explained" post.

  • Specify culture info
  • Specify string comparison (for culture)
  • Do not pass literals as localized parameters
  • Normalize strings to uppercase
  • Do not hard code locale specific strings
  • Use ordinal string comparison
  • Specify marshaling for PInvoke string arguments
  • Set locale for data types
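To give a flavor of the first couple of items, here's a minimal sketch of culture-sensitive versus culture-invariant formatting, using types from System.Globalization (the values are invented):

decimal price = 1234.56m;

// Culture-sensitive: honors the current user's locale for separators and currency symbols.
string forDisplay = price.ToString("C", CultureInfo.CurrentCulture);

// Culture-invariant: stable output for logs, files, and wire formats.
string forLogging = price.ToString(CultureInfo.InvariantCulture);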

That provides an excellent head start on getting savvy with globalization.

The Takeaway

Throughout the post, I've talked about my tendency not to bother with things that don't solve immediate problems for me.  I realize philosophical differences in approach exist, but I stand by this practice to this day.  And I don't say this only because of time savings and avoiding premature optimization.  Storing up an arsenal of specific "best practices" in your head threatens to entrench you in your ways and to establish an approach of "that's just how you do it."

And yet, not doing this can lead to making rookie mistakes and later repeating them.  But, for me, that's where automated tooling enters the picture.  I understand the globalization problem in theory.  That I have not forgotten.  And I can use a tool like CodeIt.Right to bridge the gap between theory and specifics in short order, creating just-in-time solutions to problems that I have.

So to conclude the post, I would offer the following in takeaway.  Stop memorizing all of the little things you need to check for at the method level in coding. Let tooling do that for you, so that you can keep big picture ideas in your head.  I'd say, "don't lose sight of the forest for the trees," but with tooling, you can see the forest and the trees.

Learn more about how CodeIt.Right can help you automate code reviews, improve your code quality, and ensure your code is globalization ready.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 28 March 2017 07:10:00 (Pacific Standard Time, UTC-08:00)
 Tuesday, 21 March 2017

Today, I'd like to offer a somewhat lighthearted treatment to a serious topic.  I generally find that this tends to offer catharsis to the frustrated.  And the topic of code review tends to lead to lots of frustration.

When talking about code review, I always make sure to offer a specific distinction.  We can divide code reviews into two mutually exclusive buckets: automated and manual.  At first, this distinction might sound strange.  Most readers probably think of code reviews as activities with exclusively human actors.  But I tend to disagree.  Any static analyzer (including the compiler) offers feedback.  And some tools, like CodeIt.Right, specifically regard their suggestions and automated fixes as an automation of the code review process.

I would argue that automated code review should definitely factor into your code review strategy.  It takes the simple things out of the equation and lets the humans involved focus on more complex, nuanced topics.  That said, I want to ignore the idea of automated review for the rest of the post.  Instead, I'll talk exclusively about manual code reviews and, more specifically, where they tend to get ugly.

You should absolutely do manual code reviews.  Full stop.  But you should also know that they can easily go wrong and devolve into useless or even toxic activities.  To make them effective, you need to exercise vigilance with them.  And, toward that end, I'll talk about some manual code review anti-patterns.

The Gauntlet

First up, let's talk about a style of review that probably inspires the most disgust among former participants.  Here, I'm talking about what I call "the gauntlet."

In this style of code review, the person submitting for review comes to a room with a number of self-important, hyper-critical peers.  Of course, they might not view themselves as peers.  Instead, they probably imagine themselves as a panel of judges for some reality show.

From this 'lofty' perch, they attack the reviewee's code with a malevolent glee.  They adopt a derisive tone and administer the third degree.  And, frankly, they crush the spirit of anyone subject to this process, leaving low morale and resentment in their wake.

The Marathon

Next, consider a less awful, but not effective style of code review.  This one I call "the marathon."  I bet you can predict what I mean by this.

In the marathon code review, the participants sit in some conference room for hours.  It starts out as an enthusiastic enough affair, but as time passes, people's energy wanes.  Nevertheless, it goes on because of an edict that all code needs review and because everyone waited until the 11th hour.  And predictably, things get more careless as time goes on and people space out.

Marathon code reviews eventually reach a point where you might as well not bother.

The Scattershot Review

Scattershot reviews tend to occur in organizations without much rigor around the code review process.  Perhaps their process does not formally include code review.  Or, maybe, it offers no more specifics than "do it."

With a scattershot review process, the reviewer demonstrates no consistency or predictability in the evaluation.  One day he might suggest eliminating global variables, and on another day, he might advocate for them.  Or, perhaps the variance occurs depending on the reviewer.  Whatever the specifics, you can rest assured you'll never receive the same feedback twice.

This approach to code review can cause some annoyance and resentment.  But morale issues typically take a backseat to simple ineffectiveness and churn in approach.

The Exam

Some of these can certainly coincide.  In fact, some of them will likely coincide.  So it goes with "the exam" and "the gauntlet."  But while the gauntlet focuses mostly on the process of the review, the exam focuses on the outcome.

Exam code reviews occur when the parlance around what happens at the end involves "pass or fail."  If you hear people talking about "failing" a code review, you have an exam on your hands.

Code review should involve a second set of eyes on something to improve it.  For instance, imagine that you wrote a presentation or a whitepaper.  You might ask someone to look it over and proofread it to help you improve it.  If they found a typo, they wouldn't proclaim that you had "failed."  They'd just offer the feedback.

Treating code reviews as exams generally hurts morale and causes the team to lose out on the collaborative dynamic.

The Soliloquy

The review style I call "the soliloquy" involves paying lip service to the entire process.  In literature, characters offer soliloquies when they speak their thoughts aloud regardless of anyone hearing them.  So it goes with code review styles as well.

To understand what I mean, think of times in the past where you've emailed someone and asked them to look at a commit.  Five minutes later, they send back a quick, "looks good."  Did they really review it?  Really?  You have a soliloquy when you find yourself coding into the vacuum like this.

The downside here should be obvious.  If people spare time for only a cursory glance, you aren't really conducting code reviews.

The Alpha Dog

Again, you might find an "alpha dog" in some of these other sorts of reviews.  I'm looking at you, gauntlet and exam.  With an alpha dog code review, you have a situation where a particularly dominant senior developer rules the roost with the team.  In that sense, the title refers both to the person and to the style of review.

In a team with a clear alpha dog, that person rules the codebase with an iron fist.  Thus the code review becomes an exercise in appeasing the alpha dog.  If he is present, this just results in him administering a gauntlet.  But, even absent, the review goes according to what he may or may not like.

This tends to lead team members to a condition known as "learned helplessness," wherein they cease bothering to make decisions without the alpha dog.  Obviously, this stunts their career development, but it also has a pragmatic toll for the team in the short term.  This scales terribly.

The Weeds

Last up, I'll offer a review issue that I'll call "the weeds."  This can happen in the most well meaning of situations, particularly with folks that love their craft.  Simply put, they get "into the weeds."

What I mean with this colloquialism is that they get bogged down in details at the expense of the bigger picture.  Obviously, an exacting alpha dog can drag things into the weeds, but so can anyone.  They might wind up in a lengthy digression about some arcane language point, of interest to all parties, but not critical to shipping software.  And typically, this happens with things that you ought to make matters of procedure, or even to address with your automated code reviews.

The biggest issue with a "weeds" code review arises from the poor use of time.  It causes things to get skipped, or else it turns reviews into marathons.

Getting it Right

How to get code reviews right could easily occupy multiple posts.  But I'll close by giving a very broad philosophical outlook on how to approach it.

First of all, make sure that you get clarity up front around code review goals, criteria, and conduct.  This helps to stop anti-patterns wherein the review gets off track or bogged down.  It also prevents soliloquies and somewhat mutes bad behavior.  But, beyond that, look at code reviews as collaborative, voluntary sessions where peers try to improve the general codebase.  Some of those peers may have more or less experience, but everyone's opinion matters, and it's just that -- an opinion for the author to take under advisement.

While you might cringe at the notion that someone less experienced might leave something bad in the codebase, trust me.  The damage you do by allowing these anti-patterns to continue in the name of "getting it right" will be far worse.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 21 March 2017 06:11:00 (Pacific Standard Time, UTC-08:00)
 Tuesday, 14 March 2017

Today, I'll do another installment of the CodeIt.Right Rules, Explained series.  I have now made four such posts in this series.  And, as always, I'll start off by citing my two personal rules about static analysis guidance.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

It may seem as though I'm playing rhetorical games here.  After all, I could simply say, "learn the reasoning behind all suggested fixes."  But I want to underscore the decision you face when confronted with static analysis feedback.  In all cases, you must actively choose to ignore the feedback or address it.  And for both options, you need to understand the logic behind the suggestion.

In that spirit, I'm going to offer up explanations for three more CodeIt.Right rules today.

Type that contains only static members should be sealed

Let's start here with a quick example.

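The class in question looks something like this (the specific helper method is just a representative placeholder):

public class LinqUtils
{
    public static bool IsEmpty<T>(IEnumerable<T> sequence)
    {
        return !sequence.Any();
    }
}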

Here, I've laid a tiny seed for a Swiss Army Knife, "utils" class.  Presumably, I will continue to dump any method I think might help me with Linq into this class.  But for now, it contains only a single method to make things easy to understand.  (As an aside, I discourage "utils" classes as a practice.  I'm using this example because everyone reading has most assuredly seen one of these things at some point.)

When you run CodeIt.Right analysis on this code, you will find yourself confronted with a design issue.  Specifically, "types that contain only static members should be sealed."

You probably won't have a hard time discerning how to remedy the situation.  Adding the "sealed" modifier to the class will do the trick.  But why does CodeIt.Right object?

The Microsoft guidelines contain a bit more information.  They briefly explain that static analyzers make an inference about your design intent, and that you can better communicate that intent by using the "sealed" keyword.  But let's unpack that a bit.

When you write a class that has nothing but static members, such as a static utils class, you create something with no instantiation logic and no state.  In other words, you could instantiate "a LinqUtils," but you couldn't do anything with it.  Presumably, you do not intend that people use the class in that way.

But what about other ways of interacting with the class, such as via inheritance?  Again, you could create a LinqUtilsChild that inherited from LinqUtils, but to what end?  Polymorphism requires instance members, and none exist here.  The inheriting class would inherit absolutely nothing from its parent, making the inheritance awkward at best.

Thus the intent of the rule.  You can think of it as telling you the following.  "You're obviously not planning to let people use inheritance with this class, so don't even leave that door open for them to possibly make a mistake."

So when you find yourself confronted with this warning, you have a simple bit of consideration.  Do you intend to have instance behavior?  If so, add that behavior and the warning goes away.  If not, simply mark the class sealed.

Async methods should have async suffix

Next up, let's consider a rule in the naming category.  Specifically, when you name an async method without suffixing "Async" on its name, you see the warning.  Microsoft declares this succinctly in their guidelines.

By convention, you append "Async" to the names of methods that have an async modifier.

So, CodeIt.Right simply tells us that we've run afoul of this convention.  But, again, let's dive into the reasoning behind this rule.

When Microsoft introduced this programming paradigm, they did so in a non-breaking release.  This caused something of a conundrum for them because of a perfectly understandable language rule stating that method overloads cannot vary only by a return type.  To take advantage of the new language feature, users would need to offer the new, async methods, and also backward compatibility with existing method calls.  This put them in the position of needing to give the new, async methods different names.  And so Microsoft offered guidance on a convention for doing so.

I'd like to make a call-out here with regard to my two rules at the top of each post.  This convention came about because of expediency and now sticks around for convention's sake.  But it may bother you that you're asked to bake a keyword into the name of a method.  This might trouble you in the same way that a method called "GetCustomerNumberString()" might bother you.  In other words, while I don't advise you go against convention, I will say that not all warnings are created equally.
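For illustration, the convention typically plays out like this, with the existing synchronous method keeping its name and the new task-based counterpart taking the suffix (the method and repository here are hypothetical):

public Customer GetCustomer(int id)
{
    // Existing synchronous version stays put for current callers.
    return _repository.Find(id);
}

public async Task<Customer> GetCustomerAsync(int id)
{
    // New awaitable counterpart gets the "Async" suffix by convention.
    return await _repository.FindAsync(id);
}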

Always define a global error handler

With this particular advice, we dive into warnings specific to ASP.NET.  When you see this warning, it concerns the Global.asax file.  To understand a bit more about that, you can read this Stack Overflow question.  In short, Global.asax allows you to define responses to "system level" events in a single place.

CodeIt.Right is telling you to define just such an event handler -- specifically one in response to the "Application_Error" event.  This event occurs whenever an exception bubbles all the way up without being trapped anywhere by your code.  And, that's a perfectly reasonable state of affairs -- your code won't trap every possible exception.

CodeIt.Right wants you to define a default behavior on application errors.  This could mean something as simple as redirecting to a page that says, "oops, sorry about that."  Or, it could entail all sorts of robust, diagnostic information.  The important thing is that you define it and that it be consistent.  You certainly don't want to learn from your users what your own application does in response to an error.

So spend a bit of time defining your global error handling behavior.  By all means, trap and handle exceptions as close to the source as you can.  But always make sure to have a backup plan.
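A minimal sketch of what that might look like in Global.asax.cs follows; the logging call and the error page are placeholders for whatever your application actually uses.

protected void Application_Error(object sender, EventArgs e)
{
    // Grab the unhandled exception that bubbled all the way up.
    Exception ex = Server.GetLastError();

    // Record it with your diagnostics of choice (placeholder call).
    LogError(ex);

    // Clear the error and send the user somewhere graceful.
    Server.ClearError();
    Response.Redirect("~/Error");
}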

Until Next Time

In this post, I ran the gamut across concerns.  I touched on an object-oriented design concern.  Then, I went into a naming consideration involving async, and, finally, I talked specifically about ASP.NET programming considerations.

I don't have a particular algorithm for the order in which I cover these subjects.  But, I like the way this shook out.  It goes to show you that CodeIt.Right covers a lot of ground, across a lot of different landscapes of the .NET world.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 14 March 2017 07:09:00 (Pacific Standard Time, UTC-08:00)
 Tuesday, 07 March 2017

I have long since cast my lot with the software industry.  But, if I were going to make a commercial to convince others to follow suit, I can imagine what it would look like.  I'd probably feature cool-looking, clear whiteboards, engaged people, and frenetic design of the future.  And a robot or two.  Come help us build the technology of tomorrow.

Of course, you might later accuse me of bait and switch.  You entered a bootcamp, ready to build the technology of tomorrow.  Three years later, you found yourself on safari in a legacy code jungle, trying to wrangle some SharePoint plugin.  Erik, you lied to me.

So, let me inoculate myself against that particular accusation.  With a career in software, you will certainly get to work on some cool things.  But you will also find yourself doing the decidedly less glamorous task of software maintenance.  You may as well prepare yourself for that now.

The Conceptual Difference: Build vs Maintain

From the software developer's perspective, this distinction might evoke various contrasts.  Fun versus boring.  Satisfying versus annoying.  New problem versus solved problem.  My stuff versus that of some guy named Steve that apparently worked here 8 years ago.  You get the idea.

But let's zoom out a bit.  For a broader perspective, consider the difference as it pertains to a business.

Build mode (green field) means a push toward new capability.  Usually, the business will regard construction of this capability as a project with a calculated return on investment (ROI).  To put it more plainly, "we're going to spend $500,000 building this thing that we expect to make/save us $1.5 million by next year."

Maintenance mode, on the other hand, presents the business with a cost center.  They've now made their investment and (at least partially) realized return on it.  The maintenance team just hangs around to prevent backslides.  For instance, should maintenance problems crop up, you may lose customers or efficiency.

Plan of Attack: Build vs Maintain

Because the business regards these activities differently, it will attack them differently.  And, while I can't speak to every conceivable situation, my consulting work has shown me wide variety.  So I can speak to general trends.

In green field mode, the business tends to regard the work as an investment.  So, while management might dislike overruns and unexpected costs, they will tend to tolerate them more.  Commonly, you see a "this will pay off later" mentality.

On the maintenance side of things, you tend to see far less forgiveness.  Certainly, all parties forgive unexpected problems a lot less easily.  They view all of it as a burden.

This difference in attitude translates to the planning as well.  Green field projects justifiably command full time people for the duration of the project.  Maintenance mode tends to get you familiar with the curious term "half of a person."  By this, I mean you hear things like "we're done with the Sigma project, but someone needs to keep the lights on.  That'll be half of Alice."  The business grudgingly allocates part time duty to maintenance tasks.

Why?  Well, maintenance tends to arise out of reactive scenarios.

Reactive Mode and the Value of Automation

Maintenance mode in software will have some planned activities, particularly if it needs scheduled maintenance.  But most maintenance programmers find themselves in a reactive, "wait and see" kind of situation.  They have little to do on the project in question until an outage happens, someone discovers a bug, or a customer requests a new feature.  Then, they spring into action.

Business folks tend to hate this sort of situation.  After all, you need to plan for this stuff, but you might have someone sitting around doing nothing.  It is from this fundamental conundrum that "half people" and "quarter people" arise.  Maintenance programmers usually have other stuff to juggle along with maintaining "Sigma."

Because of this double duty, the business doubles down on pressure to minimize maintenance.  After all, not only does it create cost, but it takes the people away from other, profit-driven things that they could otherwise do.

So how do we, as programmers, and we, as software shops, best deal with this?  We make maintenance as turnkey as possible by automating as much as possible.  Oh, and you should automate this stuff during green field time, when management is willing to invest.  If you tell them it means less maintenance cost, they'll probably bite.

Automate the Test Suite

First up for automation candidates, think of the codebase's test suite.  Hopefully, you've followed my advice and built this during green field mode.  But, if not, it's never too late to start.

Think of how time consuming a job QA has.  If manually running the software and conducting experiments constitutes the entirety of your test strategy, you'll find yourself hosed at maintenance time.  With "half a person" allocated, no one has time for that.  Without an automated suite, then, testing falls by the wayside, making your changes to a production system even more risky.

You need to automate a robust test suite that lets you know if you have broken anything.  This becomes even more critical when you consider that most maintenance programmers haven't touched the code they modify in a long time, if ever.
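As a tiny illustration of what "automated" means here, a single xUnit-style test might look like the following (the calculator class and the numbers are invented for the example):

public class InvoiceCalculatorTests
{
    [Fact]
    public void Total_includes_ten_percent_tax()
    {
        var calculator = new InvoiceCalculator(taxRate: 0.10m);

        decimal total = calculator.Total(subtotal: 100m);

        Assert.Equal(110m, total);
    }
}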

Automate Code Reviews

If I were to pick a one-two punch for code quality, it would involve unit tests and code review.  Therefore, just as you should automate your test suite, you should automate your code review as well.

If you think testing goes by the wayside in an under-staffed, cost-center model, you can forget about peer review altogether.  During the course of my travels, I've rarely seen code review continue into maintenance mode, except in regulated industries.

Automated code review tools exist, and they don't require even "half a person."  An automated code review tool serves its role without consuming bandwidth.  And, it provides maintenance programmers operating in high risk scenarios with a modicum of comfort and safety net.

Automate Production Monitoring

For my last maintenance mode automation tip of the post, I'll suggest that you automate production monitoring capabilities.  This covers a fair bit of ground, so I'll generalize by saying these include anything that keeps your finger on the pulse of your system's production behavior.

You have logging, no doubt, but do you monitor the logs?  Do you keep track of system outages and system load?  If you roll software to production, do you have a system of checks in place to know if something smells fishy?

You want to make the answer to these questions "yes."  And you want to make the answer "yes" without needing to go in and check manually.  Automate various means of monitoring your production software and providing yourself with alerts.  This will reduce maintenance costs across the board.

Automate Anything You Can

I've listed some automation examples that come to mind as the most critical, based on my experience.  But, really, you should automate anything around the maintenance process that you can.

Now, you might think to yourself, "we're programmers, we should automate everything."  Well, that subject could make for a whole post in and of itself, but I'll speak briefly to the distinction.  Build mode usually involves creating something from nothing on a large scale.  While you can automate the scaffolding around this activity, you'll struggle to automate a significant amount of the process.

But that ratio gets much better during maintenance time.  So the cost center nature of maintenance, combined with the higher possible automation percentage, makes it a rich target.  Indeed, I would argue that strategic automation defines the art of maintenance.

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio for a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Related resources

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 07 March 2017 09:04:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 21 February 2017

In what has become a series of posts, I have been explaining some CodeIt.Right rules in depth.  As with the last post in the series, I'll start off by citing two rules that I, personally, follow when it comes to static code analysis.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

It may seem as though I'm playing rhetorical games here.  After all, I could simply say, "learn the reasoning behind all suggested fixes."  But I want to underscore the decision you face when confronted with static analysis feedback.  In all cases, you must actively choose to ignore the feedback or address it.  And for both options, you need to understand the logic behind the suggestion.

In that spirit, I'm going to offer up explanations for three more CodeIt.Right rules today.

Use Constants Where Appropriate

First up, let's consider the admonition to "use constants where appropriate."  Consider this code that I lifted from a Github project I worked on once.

blog-codeitright-rules-part3-1

I received this warning on the first two lines of code for this class.  Specifically, CodeIt.Right objects to my usage of static readonly string. If I let CodeIt.Right fix the issue for me, I wind up with the following code.

blog-codeitright-rules-part3-2

Now, CodeIt.Right seems happy.  So, what gives?  Why does this matter?

I'll offer you the release notes of the version where CodeIt.Right introduced this rule.  If you look at the parenthetical next to the rule, you will see "performance."  This preference has something to do with code performance.  So, let's get specific.

When you declare a variable using const or static readonly, think in terms of magic values and their elimination.  For instance, imagine my UserAgentKey value.  Why do you think I declare that the way I did?  I did it to name that string, rather than using it inline as a "magic" string. 

As a maintenance programmer, how frustrating do you find stumbling across lines of code like, "if(x == 299)"?  "What is 299, and why do we care?!"

So you introduce a variable (or, preferably, a constant) to document your intent.  In the made-up hypothetical, you might then have "if(x == MaximumCountBeforeRetry)".  Now you can easily understand what the value means.

Either way of declaring this (constant or static readonly field) serves the replacement purpose.  In both cases, I replace a magic value with a more readable, named one.  But in the case of static readonly, I replace it with a variable, and in the case of const, I replace it with, well, a const.

From a performance perspective, this matters.  You can think of a declaration of const as simply hard-coding a value, but without the magic.  So, when I switch to const in my declaration, the compiler replaces every use of UserAgentKey with the string literal "user-agent".  After compilation, you can't tell whether I used a const or just hard-coded it everywhere.

But with a static readonly declaration, it remains a variable, even when you use it like a constant.  It thus incurs the relative overhead penalty of performing a variable lookup at runtime.  For this reason, CodeIt.Right steers you toward considering making this a constant.
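Since the original screenshots don't reproduce here, the following sketch illustrates the before and after.  Only the UserAgentKey name and its "user-agent" value come from the article; the surrounding class is a hypothetical stand-in.

public class GithubClient
{
    // Before: a named value, but still a field the runtime looks up.
    // private static readonly string UserAgentKey = "user-agent";

    // After: the compiler bakes "user-agent" into every use site at compile
    // time, exactly as if it had been hard-coded -- minus the magic string.
    private const string UserAgentKey = "user-agent";

    public string DescribeAgent() => $"Key: {UserAgentKey}";
}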

Parameter Names Should Match Base Declaration

For the next rule, let's return to the Github scraper project from the last example.  I'll show you two snippets of code.  The first comes from an interface definition and the second from a class implementing that interface.  Pay specific attention to the method, GetRepoSearchResults.

blog-codeitright-rules-part3-3

blog-codeitright-rules-part3-4

If you take a look at the parameter names, it probably won't surprise you to see that they do not match.  Therein lies the problem that CodeIt.Right has with my code.  It wants the implementing class to match the interface definition (i.e. the "base").  But why?

In this case, we have a fairly simple answer.  Having different names for what is conceptually the same method creates confusion.

Specifically, maintainers will struggle to understand whether you meant to override or overload the method.  In our minds' eyes, identical method signatures signal polymorphism, while the same name with different parameters signals an overload.  In a sense, changing the name of a parameter fakes maintenance programmers out.
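Since the screenshots don't carry over here, a stripped-down sketch of the same kind of mismatch follows.  Only the GetRepoSearchResults method name comes from the project; the interface, return type, and parameter names are hypothetical stand-ins.

public interface IRepoSearcher
{
    SearchResults GetRepoSearchResults(string searchTerm, int pageNumber);
}

public class RepoSearcher : IRepoSearcher
{
    // Same operation, but the parameters are now "query" and "page".  A reader
    // comparing this to the interface has to stop and wonder whether this really
    // implements the same method.  Matching the base declaration removes that doubt.
    public SearchResults GetRepoSearchResults(string query, int page)
    {
        return new SearchResults();
    }
}

public class SearchResults { }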

Do Not Declare Externally Visible Instance Fields

I don't believe we need a screenshot for this one.  Consider the following trivial code snippet.

public class SomeClass
{
    public string _someVariable;
}

This warning says, "don't do that."  More specifically, don't declare an instance field with external (to the type) visibility.  The question is, "why not?"

If you check out the Microsoft guidance on the subject, they explain that the "use of a field should be as an implementation detail."  In other words, they contend that you violate encapsulation by exposing fields.  Instead, they say, you should expose this via a property (which simply offers syntactic sugar over a method).

Instead of continuing with abstract concepts, I'll offer a concrete example.  Imagine that you want to model a family and you declare an integer field called _numberOfChildren. That works fine initially, but eventually you encounter the conceptually weird edge case where someone tries to define a family with -1 children.  With an integer field, you can technically do this, but you want to prevent that from happening.

With clients of your class directly accessing and setting this field, you wind up having to go install this guard logic literally everywhere your clients interact with the field.  But had you hidden the field behind a property, you could simply add logic to the property setter wherein you throw an exception on an attempt to set a negative value.
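A minimal sketch of that guard, building on the family example above (the class and property names are illustrative):

using System;

public class Family
{
    private int _numberOfChildren;

    public int NumberOfChildren
    {
        get { return _numberOfChildren; }
        set
        {
            // The guard lives in exactly one place, no matter how many callers set the value.
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value), "A family cannot have a negative number of children.");
            _numberOfChildren = value;
        }
    }
}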

This rule attempts to help you future-proof your code and follow good OO practice.

Until Next Time

Somewhat by coincidence, this post focused heavily on the C# flavor of object-oriented programming.  We looked at constants versus field access, but then focused on polymorphism and encapsulation.

I mention this because I find it interesting to see where static analyzers take you.  Follow along for the rest of the series and, hopefully, you'll learn various useful nuggets about the language you use.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 21 February 2017 09:55:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, 30 January 2017

For years, I can remember fighting the good fight for unit testing.  When I started that fight, I understood a simple premise.  We, as programmers, automate things.  So, why not automate testing?

Of all things, a grad school course in software engineering introduced me to the concept back in 2005.  It hooked me immediately, and I began applying the lessons to my work at the time.  A few years and a new job later, I came to a group that had not yet discovered the wonders of automated testing.  No worries, I figured, I can introduce the concept!

Except, it turns out that people stuck in their ways kind of like those ways.  Imagine my surprise to discover that people turned up their nose at the practice.  Over the course of time, I learned to plead my case, both in technical and business terms.  But it often felt like wading upstream against a fast moving current.

Years later, I have fought that fight over and over again.  In fact, I've produced training materials, courses, videos, blog posts, and books on the subject.  I've brought people around to see the benefits and then subsequently realize those benefits following adoption.  This has brought me satisfaction.

But I don't do this in a vacuum.  The industry as a whole has followed the same trajectory, using the same logic.  I count myself just another advocate among a euphony of voices.  And so our profession has generally come to accept unit testing as a vital tool.

Widespread Acceptance of Automated Regression Tests

In fact, I might go so far as to call acceptance and adoption quite widespread.  This figure only increases if you include shops that totally mean to and will definitely get around to it like sometime in the next six months or something.  In other words, if you count both shops that have adopted the practice and shops that feel as though they should, acceptance certainly constitutes at least a plurality.

Major enterprises bring me in to help them teach their developers to do it.  Still other companies consult with me and ask questions about it.  Just about everyone wants to understand how to realize the unit testing value proposition of higher quality, more stability, and fewer bugs.

This takes a simple form.  We talk about unit testing and other forms of testing, and sometimes this may blur the lines.  But let's get specific here.  A holistic testing strategy includes tests at a variety of granularities.  These comprise what some call "the test pyramid."  Unit tests address individual components (e.g. classes), while service tests drive at the way the components of your application work together.  GUI tests, the least granular of all, exercise the whole thing.

Taken together, these comprise your regression test suite.  It stands guard against the category of bugs known as "regressions": defects where something that used to work stops working.  For a parallel example in the "real world," think of the warning lights on your car's dashboard.  The "low battery" light comes on because the battery, which used to work, has stopped working.
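To make the unit test rung of that pyramid concrete, here is a minimal sketch of a test that guards against a regression in a single class.  The Order class and the NUnit-style attributes are illustrative, not taken from any particular project.

[Test]
public void Applying_a_ten_percent_discount_reduces_the_total()
{
    var order = new Order(total: 100m);

    order.ApplyDiscount(0.10m);

    // If a future change breaks the discount math, this fails immediately,
    // long before a customer or a QA analyst notices.
    Assert.AreEqual(90m, order.Total);
}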

Benefits of Automated Regression Test Suites

Why do this?  What benefits do automated regression test suites provide?  Well, let's take a look at some.

  • Repeatability and accuracy.  A human running tests over and over again may produce slight variances in the tests.  A machine, not so much.
  • Speed.  As with anything, automation produces a significant speedup over manual execution.
  • Fast feedback.  The automated test suite can tell you much more quickly if you have broken something.
  • Morale.  The fewer times a QA department comes back with "you broke this thing," the fewer opportunities for contentiousness.

I should also mention, as a brief aside, that I don't consider automated test suites to be acceptable substitutes for manual testing.  Rather, I believe the two efforts should work in complementary fashion.  If the automated test suite executes the humdrum tests in the codebase, it frees QA folks up to perform intelligent, exploratory testing.  As Uncle Bob once famously said, "it's wrong to turn humans into machines.  If you can write a script for a test procedure, then you can write a program to execute that procedure."

Automating Code Review

None of this probably comes as much of a shock to you.  If you go out and read tech blogs, you've no doubt encountered the widespread opinion that people should automate regression test suites.  In fact, you probably share that opinion.  So don't you wonder why we don't more frequently apply that logic to other concerns?

Take code review, for instance.  Most organizations do this in an entirely manual fashion outside of, perhaps, a so-called "linting" tool.  They mandate automated test coverage and then content themselves with siccing their developers on one another in meetings to gripe over tabs, spaces, and camel casing.

Why not approach code review the same way?  Why not automate the aspects of it that lend themselves to automation, while saving human intervention for more conceptual matters?

Benefits of Automated Code Reviews

In a study by Steve McConnell and referenced in this blog post, "formal code inspections" produced better results for preemptively finding bugs than even automated regression tests.  So it stands to reason that we should invest in code review in the same ways that we invest in regression testing.  And I don't mean simply time spent, but in driving forward with automation and efficiency.

Consider the benefits I listed above for automated tests, and look how they apply to automated code review.

  • Repeatability and accuracy.  Humans will miss instances of substandard code if they feel tired -- machines won't.
  • Speed.  Do you want your code review to take seconds, or hours and days?
  • Fast feedback.  Because of the increased speed of the review, the reviewee gets the results immediately after writing the code, for better learning.
  • Morale.  The exact same reasoning applies here.  Having a machine point out your mistakes can save contentiousness.

I think that we'll see a similar trajectory to automating code review that we did with automating test suites.  And, what's more, I think that automated code review will gain steam a lot more quickly and with less resistance.  After all, automating QA activities blazed a trail.

I believe the biggest barrier to adoption, in this case, is the lack of awareness.  People may not believe automating code review is possible.  But I assure you, you can do it.  So keep an eye out for ways to automate this important practice, and get in ahead of the adoption curve.

Related resources

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio for a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Monday, 30 January 2017 15:52:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, 23 January 2017

As a teenager, I remember having a passing interest in hacking.  Perhaps this came from watching the movie Sneakers.  Whatever the origin, the fancy passed quickly because I prefer building stuff to breaking other people's stuff.  Therefore, what I know about hacking pretty much stops at understanding terminology and high level concepts.

Consider the term "zero day exploit," for instance.  While I understand what this means, I have never once, in my life, sat on discovery of a software vulnerability for the purpose of using it somehow.  Usually when I discover a bug, I'm trying to deposit a check or something, and I care only about the inconvenience.  But I still understand the term.

"Zero day" refers to the amount of time the software vendor has to prepare for the vulnerability.  You see, the clever hacker gives no warning about the vulnerability before using it.  (This seems like common sense, though perhaps hackers with more derring do like to give them half a day to watch them scramble to release something before the hack takes effect.)  The time between announcement and reality is zero.

Increased Deployment Cadence

Let's co-opt the term "zero day" for a different purpose.  Imagine that we now use it to refer to software deployments.  By "zero day deployment," we thus mean "software deployed without any prior announcement."

blog-are-you-ready-for-zero-day-software-deployment

But why would anyone do this?  Don't you miss out on some great marketing opportunities?  And, more importantly, can you even release software this quickly?  Understanding comes from realizing that software deployment is undergoing a radical shift.

To understand this, think about software release cadences 20 years ago.  In the 90s, Internet Explorer won the first browser war because it managed to beat Netscape's plodding cadence of three years between releases.  With major software products, release cadences of a year or two dominated the landscape back then.

But that timeline has shrunk steadily.  For a highly visible example, consider Visual Studio.  In 2002, 2005, 2008, Microsoft released versions corresponding to those years.  Then it started to shrink with 2010, 2012, and 2013.  Now, the years no longer mark releases, per se, with Microsoft actually releasing major updates on a quarterly basis.

Zero Day Deployments

As much as going from "every 3 years" to "every 3 months" impresses, websites and SaaS vendors have shrunk it to "every day."  Consider Facebook's deployment cadence.  They roll minor updates every business day and major ones every week.

With this cadence, we truly reach zero day deployment.  You never hear Facebook announcing major upcoming releases.  In fact, you never hear Facebook announcing releases, period.  The first the world sees of a given Facebook release is when the release actually happens.  Truly, this means zero day releases.

Oh, don't get me wrong.  Rumors of upcoming features and capabilities circulate, and Facebook certainly has a robust marketing department.  But Facebook and companies with similar deployment approaches have impressively made deployments a non-event.  And others are looking to follow suit, perhaps yours included.

Conceptual Impediments to Zero Day Deployments

If what I just said made you spit your drink at the screen, I understand.  Perhaps your deployment and release process takes so long that the thought of shrinking it to a day made you laugh.  Or perhaps it terrified.  Either way, I can understand that it may seem quite a leap.

You may conceive of Facebook and other such practitioners as so alien to your own situation that you see no path from here to there.  But in reality, they almost certainly do the same things you do as part of your longer process -- just optimized and automated.

Impediments take a variety of forms.  You might have lengthy quality assurance and vetting processes, perhaps ones that require many iterations between the developers and quality assurance.  You might still be packaging software onto DVDs and shipping it to customers.  Perhaps you run all sorts of checks and analytics on it.  But all will fall under the general heading of requiring manual intervention or consuming a lot of time.

To get to zero day deployments, you need to automate and speed up considerably, and this can seem daunting.

What's Common Today

Some good news exists, though.  The same forces that let the Visual Studio team see such radical improvement push on software shops across the board.  We all have access to helpful technologies.

For instance, the overwhelming majority of organizations now have continuous integration via dedicated build machines.  Software developers commit code, and these things scoop it up, compile it, and package it up in a deployable package.  This activity now happens on the order of minutes whereas, in the past, I can remember shops where this was some poor guy's entire job, and he'd spend days on each build.

And, speaking of the CI server, a lot of them run automated test suites as part of what they do.  Most commonly, this means unit tests.  But they might also invoke acceptance tests and even more exotic things like smoke, GUI, and functionality tests.  You can thus accept commits, build the software, run a bunch of tests, and get it ready to deploy.

Of course, you can also automate the actual deployment as well.  It stands to reason that, if your build machine can ball it up into a deliverable, it can deliver that deliverable.  This might be harder with physical media involved, but as more software deliveries happen over networks, more of them get automated.

What We Need Next

With all of that in place, why don't we have more zero day deployments?  What's missing?

Again, discounting the problem of physical media, I'd say quality checks present the biggest issue.  We can compile, run automated tests, and deploy automatically.  But does this guarantee acceptable production behavior?

What about the important element of code reviews?  How do you assure that, even as automated tests pass, the application isn't piling up mountains of technical debt and impeding future deployments?  To get to zero day deployments, we must address these issues.

Don't get me wrong.  Other things matter here as well.  Zero day deployments require robust production checks and sophisticated "oops, that didn't work, rollback!" capabilities.  But I think that nothing will matter more than automated quality checks.

Each time you commit code, you need an intelligent analysis of that code that should fail the build as surely as failing tests if issues crop up.  In a zero day deployment context, you cannot afford best practice violations.  You cannot afford slipping quality, mounting technical debt, and you most certainly cannot afford code rot.  Today's rot in a zero day deployment scenario means tomorrow's inability to deploy that way.

Learn more about how CodeIt.Right can help you automate code reviews, improve your code quality, and reduce technical debt.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Monday, 23 January 2017 08:48:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Thursday, 12 January 2017

A little while back, I started a post series explaining some of the CodeIt.Right rules.  I led into the post with a narrative, which I won't retell.  But I will reiterate the two rules that I follow when it comes to static analysis tooling.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

Because I follow these two rules, I find myself researching every fix suggested to me by my tooling.  And, since I've gone to the trouble of doing so, I'll save you that same trouble by explaining some of those rules today.  Specifically, I'll examine 3 more CodeIt.Right rules today and explain the rationale behind them.

Mark assemblies CLSCompliant

If you develop in .NET, you've no doubt run across this particular warning at some point in your career.  Before we get into the details, let's stop and define the acronyms.  "CLS" stands for "Common Language Specification," so the warning informs you that you need to mark your assemblies "Common Language Specification Compliant" (or non-compliant, if applicable).

Okay, but what does that mean?  Well, you can easily forget that many programming languages target the .NET runtime besides your language of choice.  CLS compliance indicates that any language targeting the runtime can use your assembly.  You can write language specific code, incompatible with other framework languages.  CLS compliance means you haven't.

Want an example?  Let's say that you write C# code and that you decide to get cute.  You have a class with a "DoStuff" method, and you want to add a slight variation on it.  Because the new method adds improved functionality, you decide to call it "DOSTUFF" in all caps to indicate its awesomeness.  No problem, says the C# compiler.

And yet, if you try to do the same thing in Visual Basic, a case insensitive language, you will encounter a compiler error.  You have written C# code that VB code cannot use.  Thus you have written non-CLS compliant code.  The CodeIt.Right rule exists to inform you that you have not specified your assembly's compliance or non-compliance.

To fix, go specify.  Ideally, go into the project's AssemblyInfo.cs file and add the following to call it a day.

using System;

[assembly: CLSCompliant(true)]

But you can also specify non-compliance for the assembly to avoid a warning.  Of course, you can do better by marking the assembly compliant on the whole and then hunting down and flagging non-compliant methods with the attribute.
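To make the compliance checks tangible, here is a small sketch, with hypothetical class and method names, of what the compiler flags once the assembly declares itself compliant:

using System;

[assembly: CLSCompliant(true)]

public class Greeter
{
    public void DoStuff() { }

    // Differs from DoStuff only by case, so the compiler warns that the
    // identifier is not CLS-compliant -- a case-insensitive language like VB
    // could not tell the two apart.
    public void DOSTUFF() { }

    // Unsigned types (other than byte) are not CLS-compliant either.  Flagging
    // the member as non-compliant acknowledges that and silences the warning.
    [CLSCompliant(false)]
    public void CountThings(uint count) { }
}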

Specify IFormatProvider

Next up, consider a warning to "specify IFormatProvider."  When you encounter this for the first time, it might leave you scratching your head.  After all, "IFormatProvider" seems a bit... technician-like.  A more newbie-friendly name for this warning might have been, "you have a localization problem."

For example, consider a situation in which some external system supplies a date.  Except, it supplies the date as a string and you have the task of converting it to a proper DateTime so that you can perform operations on it.  No problem, right?

var properDate = DateTime.Parse(inputString);

That should work, provided provincial concerns do not intervene.  For those of you in the US, "03/02/1995" corresponds to March 2nd, 1995.  Of course, should you live in Iraq, that date string would correspond to February 3rd, 1995.  Oops.

Consider a nightmare scenario wherein you write some code with this parsing mechanism.  Based in the US and with most of your customers in the US, this works for years.  Eventually, though, your sales group starts making inroads elsewhere.  Years after the fact, you wind up with a strange bug in code you haven't touched for years.  Yikes.

By specifying a format provider, you can avoid this scenario.
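As a sketch of the remedy, you can state the expected culture (or the exact format) instead of relying on whatever regional settings the server happens to have.  The overloads below are standard framework methods; the "en-US" culture and the format string are just examples.

using System;
using System.Globalization;

var inputString = "03/02/1995";

// Parse with an explicit culture rather than the machine's current one.
var properDate = DateTime.Parse(inputString, CultureInfo.GetCultureInfo("en-US"));

// Or, when you know the exact format the external system sends:
var exactDate = DateTime.ParseExact(inputString, "MM/dd/yyyy", CultureInfo.InvariantCulture);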

Nested types should not be visible

Unlike the previous rule, this one's name suffices for description.  If you declare a type within another type (say a class within a class), you should not make the nested type visible outside of the outer type.  So, the following code triggers the warning.

public class Outer
{
    public class Nested
    {

    }
}

To understand the issue here, consider the object oriented principle of encapsulation.  In short, hiding implementation details from outsiders gives you more freedom to vary those details later, at your discretion.  This thinking drives the rote instinct for OOP programmers to declare private fields and expose them via public accessors/mutators/properties.

To some degree, the same reasoning applies here.  If you declare a class or struct inside of another one, then presumably only the containing type needs the nested one.  In that case, why make it public?  On the other hand, if another type does, in fact, need the nested one, why scope it within a parent type and not just the same namespace?

You may have some reason for doing this -- something specific to your code and your implementation.  But understand that this is weird, and will tend to create awkward, hard-to-discover code.  For this reason, your static analysis tool flags your code.
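As a sketch, the two usual resolutions look like this, shown in separate namespaces so both compile side by side:

namespace OptionOne
{
    // Only Outer needs Nested, so hide it as an implementation detail.
    public class Outer
    {
        private class Nested
        {
        }
    }
}

namespace OptionTwo
{
    // Other types legitimately need Nested, so promote it to the namespace.
    public class Outer
    {
    }

    public class Nested
    {
    }
}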

Until Next Time

As I said last time, you can extract a ton of value from understanding code analysis rules.  This goes beyond just understanding your tooling and accepted best practice.  Specifically, it gets you in the habit of researching and understanding your code and applications at a deep, philosophical level.

In this post alone, we've discussed language interoperability, geographic maintenance concerns, and object oriented design.  You can, all too easily, dismiss analysis rules as perfectionism.  They aren't; they have very real, very important applications.

Stay tuned for more posts in this series, aimed at helping you understand your tooling.

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Thursday, 12 January 2017 10:32:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, 26 December 2016

CodeIt.Right v3.0 is here – the new major version of our automated code review and code quality analysis product. Here are the v3.0 new feature highlights:

  • VS2017 RC integration
  • Official support for VS2015 Update 3 and ASP.NET 5/ASP.NET Core 1.0 solutions
  • Solution filtering by date, source control status and file patterns
  • Summary report view - provides a summary view of the analysis results and metrics, customize to your needs
  • New Review Code commands – review opened files and review checked out files
  • Improved Profile Editor with advanced rule search and filtering
  • Improved look and feel for Violations Report and Editor violation markers
  • Setting to keep the OnDemand and Instant Review profiles in sync
  • New Jenkins integration plugin
  • Batch correction is now turned off by default
  • Almost every CodeIt.Right action can now be assigned a keyboard shortcut
  • New rules

For the complete and detailed list of the v3.0 changes see What's New in CodeIt.Right v3.0


Solution Filtering

The solution filtering feature allows you to narrow the code review scope using the following options:

  • Analyze files modified Today/This Week/Last 2 Weeks/This Month – so you can set the relative date once and not have to change the date every day
  • Analyze files modified since specific date
  • Analyze files opened in Visual Studio tabs
  • Analyze files checked out from the source control
  • Analyze only specific files – only include the files that match a list of file patterns like *Core*.cs or Modules\*. See this KB post for the file path pattern details and examples.

cir-v3-solution-filtering

New Review Code commands

We have changed the Start Analysis menu to Review Code – still the same feature; the new name just highlights the automated code review nature of the product. We have also added the following Review Code commands:

  • Analyze Open Files menu - analyze only the files opened in Visual Studio tabs
  • Analyze Checked Out Files menu - analyze only the files that are checked out from source control

cir-v3-profile-filter

Improved Profile Editor

The Profile Editor now features

  • Advanced rule filtering by rule id, title, name, severity, scope, target, and programming language
  • Allows you to quickly show only active, only inactive, or all rules in the profile
  • Shows totals for the profile rules - total, active, and filtered
  • Improved adding rules with multiple categories

 

Summary Report

The Summary Report tab provides an overview of the analyzed source code quality. It includes a high-level summary of the current analysis information, filters, violation summary, top N violations, solution info, and metrics. Additionally, it provides a detailed list of violations and excludes.

The report is self-contained – no external dependencies, everything it requires is included within the html file. This makes it very easy to email the report to someone or publish it on the team portal – see example.

cir-v3-summary-report-part

The Summary Report is based on an ASP.NET Razor markup within the Summary.cshtml template. This makes it very easy for you to customize it to your needs.

You will find the summary report API documentation in the help file – CodeIt.Right –> Help & Support –> Help –> Summary Report API.

cir-v3-summary-source

 

How do I try it?

Download v3.0 at http://submain.com/download/codeit.right/

Feedback is what keeps us going!

Let us know what you think of the new version here - http://submain.com/support/feedback/


Note to the CodeIt.Right v2 users: The v2.x license codes won't work with v3.0. For users with an active Software Assurance subscription we have sent out the v3.x license codes. If you have not received your new license, or have misplaced it, you can retrieve it on the My Account page. Users with an expired Software Assurance subscription will need to purchase the new version - currently we are not offering an upgrade path other than the Software Assurance subscription. For information about the upgrade protection see our Software Assurance and Support - Renewal / Reinstatement Terms

posted on Monday, 26 December 2016 09:12:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, 29 November 2016

I've heard tell of a social experiment conducted with monkeys.  It may or may not be apocryphal, but it illustrates an interesting point.  So, here goes.

Primates and Conformity

A group of monkeys inhabited a large enclosure, which included a platform in the middle, accessible by a ladder.  For the experiment, their keepers set a banana on the platform, but with a catch.  Anytime a monkey would climb to the platform, the action would trigger a mechanism that sprayed the entire cage with freezing cold water.

The smarter monkeys quickly figured out the correlation and actively sought to prevent their cohorts from triggering the spray.  Anytime a monkey attempted to climb the ladder, they would stop it and beat it up a bit by way of teaching a lesson.  But the experiment wasn't finished.

Once the behavior had been established, they began swapping out monkeys.  When a newcomer arrived on the scene, he would go for the banana, not knowing the social rules of the cage.  The monkeys would quickly teach him, though.  This continued until they had rotated out all original monkeys.  The monkeys in the cage would beat up the newcomers even though they had never experienced the actual negative consequences.

Now before you think to yourself, "stupid monkeys," ask yourself how much better you'd fare.  This video shows that humans have the same instincts as our primate cousins.

Static Analysis and Conformity

You might find yourself wondering why I told you this story.  What does it have to do with software tooling and static analysis?

Well, I find that teams tend to exhibit two common anti-patterns when it comes to static analysis.  Most prominently, they tune out warnings without due diligence.  After that, I most frequently see them blindly implement the suggestions.

I tend to follow two rules when it comes to my interaction with static analysis tooling.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

You syllogism buffs out there have, no doubt, condensed this to a single rule.  Anytime you encounter a suggested fix you don't understand, go learn about it.

Once you understand it, you can implement the fix or ignore the suggestion with eyes wide open.  In software design/architecture, we deal with few clear cut rules and endless trade-offs.  But you can't speak intelligently about the trade-offs without knowing the theory behind them.

Toward that end, I'd like to facilitate that warning for some CodeIt.Right rules today.  Hopefully this helps you leverage your tooling to its full benefit.

Abstract types should not have public constructors

First up, consider the idea of abstract types with public constructors.

public abstract class Shape
{
    protected ConsoleColor _color;

    public Shape(ConsoleColor color)
    {
        _color = color;
    }
}

public class Square : Shape
{
    public int SideLength { get; set; }
    public Square(ConsoleColor color) : base(color) { }

}

CodeIt.Right will ding you for making the Shape constructor public (or internal -- it wants protected).  But why?

Well, you'll quickly discover that CodeIt.Right has good company in the form of the .NET Framework guidelines and FxCop rules.  But that just shifts the discussion without solving the problem.  Why does everyone seem not to like this code?

First, understand that you cannot instantiate Shape, by design.  The "abstract" designation effectively communicates Shape's incompleteness.  It's more of a template than a finished class in that creating a Shape makes no sense without the added specificity of a derived type, like Square.

So classes outside of the inheritance hierarchy can interact with Shape only indirectly, via Square.  They create Squares, and those Squares decide how to go about interacting with Shape.  Don't believe me?  Try getting around this.  Try creating a Shape in code or try deleting Square's constructor and calling new Square(color).  Neither will compile.

Thus, when you make Shape's constructor public or internal, you invite users of your inheritance hierarchy to do something impossible.  You engage in false advertising and you confuse them.  CodeIt.Right is helping you avoid this mistake.
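The fix is the one the rule hints at: make the constructor protected, so only deriving types like Square can call it.  A sketch of just the changed base class:

public abstract class Shape
{
    protected ConsoleColor _color;

    // Protected: only derived classes can invoke this, which matches the fact
    // that nobody outside the hierarchy can construct a Shape anyway.
    protected Shape(ConsoleColor color)
    {
        _color = color;
    }
}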

Do not catch generic exception types

Next up, let's consider the wisdom, "do not catch generic exception types."  To see what that looks like, consider the following code.

public bool MergeUsers(int user1Id, int user2Id)
{
    try
    {
        var user1 = _userRepo.Get(user1Id);
        var user2 = _userRepo.Get(user2Id);
        user1.MergeWith(user2);
        _userRepo.Save(user1);
        _userRepo.Delete(user2);
        return true;
    }
    catch(Exception ex)
    {
        _logger.Log($"Exception {ex.Message} occurred.");
        return false;
    }
}

Here we have a method that merges two users together, given their IDs.  It accomplishes this by fetching them from some persistence ignorance scheme, invoking a merge operation, saving the merged one and deleting the vestigial one.  Oh, and it wraps the whole thing in a try block, and then logs and returns false should anything fail.

And, by anything, I mean absolutely anything.  Business rules make merge impossible?  Log and return false.  Server out of memory?  Log it and return false.  Server hit by lightning and user data inaccessible?  Log it and return false.

With this approach, you encounter two categories of problem.  First, you fail to reason about or distinguish among the different things that might go wrong.  And, secondly, you risk overstepping what you're equipped to handle here.  Do you really want to handle fatal system exceptions right smack in the heart of the MergeUsers business logic?

You may encounter circumstances where you want to handle everything, but probably not as frequently as you think.  Instead of defaulting to this catch all, go through the exercise of reasoning about what could go wrong here and what you want to handle.
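As a sketch of where that reasoning might land, suppose the merge operation signals business rule violations with an InvalidOperationException (a hypothetical choice; substitute whatever your repository and domain code actually throw):

public bool MergeUsers(int user1Id, int user2Id)
{
    try
    {
        var user1 = _userRepo.Get(user1Id);
        var user2 = _userRepo.Get(user2Id);
        user1.MergeWith(user2);
        _userRepo.Save(user1);
        _userRepo.Delete(user2);
        return true;
    }
    catch (InvalidOperationException ex)
    {
        // The one failure this method actually knows how to handle.
        _logger.Log($"Could not merge users {user1Id} and {user2Id}: {ex.Message}");
        return false;
    }
    // Anything else -- out of memory, corrupted state, lightning strikes --
    // propagates to a layer better equipped to decide what "handle it" means.
}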

Avoid language specific type names in parameters

If you see this violation, you probably have code that resembles the following.  (Though, hopefully, you wouldn't write this actual method)

public int Add(int xInt, int yInt)
{
    return xInt + yInt;
}

CodeIt.Right does not like the name "int" in the parameters and this reflects a .NET Framework guideline.

Here, we find something a single language developer may not stop to consider.  Specifically, not all languages that target the .NET framework use the same type name conventions.  You say "int" and a VB developer says "Integer."  So if a VB developer invokes your method from a library, she may find this confusing.

That said, I would like to take this one step further and advise that you avoid baking types into your parameter/variable names in general.  Want to know why?  Let's consider a likely outcome of some project manager coming along and saying, "we want to expand the add method to be able to handle really big numbers."  Oh, well, simple enough!

public long Add(long xInt, long yInt)
{
    return xInt + yInt;
}

You just needed to change the datatypes to long, and voilà!  Everything went perfectly until someone asked you at code review why you have a long called "xInt."  Oops.  You totally didn't even think about the variable names.  You'll be more careful next time.  Well, I'd advise avoiding "next time" completely by getting out of this naming habit.  The IDE can tell you the type of a variable -- don't encode it into the name redundantly.

Until Next Time

As I said in the introductory part of the post, I believe huge value exists in understanding code analysis rules.  You make better decisions, have better conversations, and get more mileage out of the tooling.  In general, this understanding makes you a better developer.  So I plan to continue with these explanatory posts from time to time.  Stay tuned!

Learn more about how CodeIt.Right can help you automate code reviews and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, 29 November 2016 09:55:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Thursday, 17 November 2016

We have just made available the Release Candidate of CodeIt.Right v3.0. Here are the new feature highlights:

  • VS2017 RC integration
  • Solution filtering by date, source control status and file patterns
  • Summary report view (announced as the Dashboard in the Beta preview) - provides a summary view of the analysis results and metrics, customize to your needs

These features were announced as part of our recent v3 Beta:

  • Official support for VS2015 Update 2 and ASP.NET 5/ASP.NET Core 1.0 solutions
  • New Review Code commands:
    • only opened files
    • only checked out files
    • only files modified after specific date
  • Improved Profile Editor with advanced rule search and filtering
  • Improved look and feel for Violations Report and Editor violation markers
  • New rules
  • Setting to keep the OnDemand and Instant Review profiles in sync
  • New Jenkins integration plugin
  • Batch correction is now turned off by default
  • Almost every CodeIt.Right action can now be assigned a keyboard shortcut
  • For the Beta changes and screenshots, please see Overview of CodeIt.Right v3.0 Beta Features

For the complete and detailed list of the v3.0 changes see What's New in CodeIt.Right v3.0

To give the v3.0 Release Candidate a try, download it here - http://submain.com/download/codeit.right/beta/


Solution Filtering

In addition to the solution filtering by files modified since a specific date, open files, and checked-out files available in the Beta, we are introducing a few more options:

  • Analyze files modified Today/This Week/Last 2 Weeks/This Month – so you can set the relative date once and not have to change the date every day
  • Analyze only specific files – only include the files that match a list of file patterns like *Core*.cs or Modules\*. See this KB post for the file path pattern details and examples.

cir-v3-solution-filtering

Summary Report

The Summary Report tab provides an overview of the analyzed source code quality. It includes a high-level summary of the current analysis information, filters, violation summary, top N violations, solution info, and metrics. Additionally, it provides a detailed list of violations and excludes.

The report is self-contained – no external dependencies, everything it requires is included within the html file. This makes it very easy to email the report to someone or publish it on the team portal – see example.

cir-v3-summary-report-part

The Summary Report is based on an ASP.NET Razor markup within the Summary.cshtml template. This makes it very easy for you to customize it to your needs.

You will find the summary report API documentation in the help file – CodeIt.Right –> Help & Support –> Help –> Summary Report API.

cir-v3-summary-source

 

Feedback

We would love to hear your feedback on the new features! Please email it to us at support@submain.com or post in the CodeIt.Right Forum.

posted on Thursday, 17 November 2016 08:55:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Saturday, 05 November 2016
blog-so-you’ve-inherited-a-legacy-codebase

During my younger days, I worked for a company that made a habit of a strategic acquisition.  They didn't participate in Time Warner style mergers, but periodically they would purchase a smaller competitor or a related product.  And on more than one occasion, I inherited the lead role for the assimilating software from one of these organizations.  Lucky me, right?

If I think in terms of how to describe this to someone, a plumbing analogy comes to mind.  Over the years, I have learned enough about plumbing to handle most tasks myself.  And this has exposed me to the irony of discovering a small leak in a fitting plugged by grit or debris.  I find this ironic because two wrongs make a right.  A dirty, leaky fitting reaches sub-optimal equilibrium, and you spring a leak when you clean it.

Legacy codebases have this issue as well.  You inherit some acquired codebase, fix a tiny bug, and suddenly the defect floodgates open.  And then you realize the perilousness of your situation.

While you might not have come by it in the same way that I did, I imagine you can relate.  At some point or another, just about every developer has been thrust into supporting some creaky codebase.  How should you handle this?

Put Your Outrage in Check

First, take some deep breaths.  Seriously, I mean it.  As software developers, we seem to hate code written by others.  In fact, we seem to hate our own code if we wrote it more than a few months ago.  So when you see the legacy codebase for the first time, you will feel a natural bias toward disgust.

But don't indulge it.  Don't sit there cursing the people that wrote the code, and don't take screenshots to send to the Daily WTF.  Not only will it do you no good, but I'd go so far as to say that this is actively counterproductive.  Deciding that the code offers nothing worth salvaging makes you less inclined to try to understand it.

The people that wrote this code dealt with older languages, older tooling, older frameworks, and generally less knowledge than we have today.  And besides, you don't know what constraints they faced.  Perhaps bosses heaped delivery pressure on them like crazy.  Perhaps someone forced them to convert to writing in a new, unfamiliar language.  Whatever the case may be, you simply didn't walk in their shoes.  So take a breath, assume they did their best, and try to understand what you have under the hood.

Get a Visualization of the Architecture

Once you've settled in mentally for this responsibility, seek to understand quickly.  You won't achieve this by cracking open the code and looking through random source files.  But, beyond that, you also won't achieve it by looking at their architecture documents or folder structures.  Reality gets out of sync with intention, and those things start to lie.  You need to see the big picture, but in a way that lines up with reality.

Look for tools that map dependencies and can generate a visual of the codebase.  Plenty of these tools exist for you and can automate visual depictions.  Find one and employ it.  This will tell you whether the architecture resembles the neat diagram given to you or not.  And, more importantly, it will get you to a broad understanding much more quickly.

Characterize

Once you have the picture you need of the codebase and the right frame of mind, you can start doing things to it.  And the first thing you should do is to start writing characterization tests.

If you have not heard of them before, characterization tests have the purpose of, well, characterizing the codebase.  You don't worry about correct or incorrect behaviors.  Instead, you accept at face value what the code does, and document those behaviors with tests.  You do this because you want to get a safety net in place that tells you when your changes affect inputs and outputs.

As this XKCD cartoon ably demonstrates, someone will come to depend on the application's production behavior, however problematic.  So with legacy code, you cannot simply decide to improve a behavior and assume your users will thank you.  You need to exercise caution.

But characterization tests do more than just provide a safety net.  As an exercise, they help you develop a deeper understanding of the codebase.  If the architectural visualization gives you a skeleton understanding, this starts to put meat on the bones.
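A characterization test can look almost insultingly simple.  Here is a minimal, hypothetical sketch in NUnit-style syntax; the class, method, and the 42.50 value are stand-ins, and the expected value comes from running the code, not from any spec:

[Test]
public void CalculateFee_for_a_standard_account_returns_what_it_returns_today()
{
    var calculator = new LegacyFeeCalculator();

    var fee = calculator.CalculateFee(accountType: "standard", balance: 1500m);

    // Not "correct" by any requirement -- simply what the system did when we
    // ran it.  If this assertion ever fails, observable behavior has changed.
    Assert.AreEqual(42.50m, fee);
}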

Isolate Problems

With a reliable safety net in place, you can begin making strategic changes to the production code beyond simple break/fix.  I recommend that you start by finding and isolating problematic chunks of code.  In essence, this means identifying sources of technical debt and looking to improve, gradually.

This can mean pockets of global state or extreme complexity that make for risky change.  But it might also mean dependencies on outdated libraries, frameworks, or APIs.  In order to extricate yourself from such messes, you must start to isolate them from business logic and important plumbing code.  Once you have it isolated, fixes will come more easily.

Evolve Toward Modernity

Once you've isolated problematic areas and archaic dependencies, it certainly seems logical to subsequently eliminate them.  And, I suggest you do just that as a general rule.  Of course, sometimes isolating them gives you enough of a win since it helps you mitigate risk.  But I would consider this the exception and not the rule.  You want to remove problem areas.

I do not say this idly nor do I say it because I have some kind of early adopter drive for the latest and greatest.  Rather, being stuck with old tooling and infrastructure prevents you from taking advantage of modern efficiencies and gains.  When some old library prevents you from upgrading to a more modern language version, you wind up writing more, less efficient code.  Being stuck in the past will cost you money.

The Fate of the Codebase

As you get comfortable and take ownership of the legacy codebase, never stop contemplating its fate.  Clearly, in the beginning, someone decided that the application's value outweighed its liability factor, but that may not always continue to be true.  Keep your finger on the pulse of the codebase, while considering options like migration, retirement, evolution, and major rework.

And, finally, remember that taking over a legacy codebase need not be onerous.  As initially shocked as I found myself with the state of some of those acquisitions, some of them turned into rewarding projects for me.  You can derive a certain satisfaction from taking over a chaotic situation and gradually steering it toward sanity.  So if you find yourself thrown into this situation, smile, roll up your sleeves, own it, and make the best of it.

Related resources

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio for a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Learn more about how CodeIt.Right can identify technical debt, document it, and gradually improve the legacy code.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Saturday, 05 November 2016 10:43:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 19 October 2016

    The balance among types of feedback drives some weird interpersonal dynamics and balances.  For instance, consider the rather trite (if effective) management technique of the "compliment sandwich."  Managers with a negative piece of feedback precede and follow that feedback with compliments.  In that fashion, the compliments form the "bun."

    Different people and different groups have their preferences for how to handle this.  While some might bend over backward for diplomacy, others prefer environments where people hurl snipes at one another and simply consider it "passionate debate."  I have no interest in arguing for any particular approach -- only in pointing out the variety.  As it turns out, we humans find this subject thorny.

    To some extent, this complicated situation extends beyond human boundaries and into automated systems.  While we might not take quite the same umbrage as we would with humans, we still get frustrated.  If you doubt this, I challenge you to tell me that you have never yelled at a compiler because you were sure your code had no errors.  I thought so.

    So from this perspective, I can understand the frustration with static analysis feedback.  Often, when you decide to enable a new static analysis engine or linting tool on a codebase, the feedback overwhelms.  28,326 issues in the code can demoralize anyone.  And so the temptation emerges to recoil from this feedback and turn off the tool.

    But should you do this?  I would argue that usually, you should not.  But situations do exist when disabling a static analyzer makes sense.  Today, I'll walk through some examples of times you might suppress such a warning.

    False Positives

    For the first example, I'll present something of a no-brainer.  However, I will also present a caveat to balance things.

    If your static analysis tool presents you with a false positive, then you should suppress that instance of the false positive.  (No sense throwing the baby out with the bathwater and suppressing the entire rule).  Assuming that you have a true false positive, the analysis warning simply constitutes noise and not signal.  Get rid of it.

    That being said, take care with labeling warnings as false positives.  False positive means that the tool has indicated a problem and a potential error and gotten it wrong.  False positive does not mean that you disagree with the warning or don't care.  The tool's wrongness is a good reason to suppress -- you not liking its prognosis falls short of that.
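    As a sketch of per-instance suppression, many .NET analyzers honor the framework's SuppressMessage attribute.  The category and rule identifier below are placeholders, and the class is hypothetical, so check your own tool's documentation for the exclusion mechanism and identifiers it actually expects:

    using System.Diagnostics.CodeAnalysis;

    public class ReportGenerator
    {
        // Suppress this one occurrence only, and record why, so the next reader
        // knows the warning was evaluated rather than ignored.
        [SuppressMessage("Naming", "XX0000:ExampleRule", Justification = "Name mandated by the reporting framework we implement.")]
        public void RENDER_REPORT()
        {
        }
    }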

    Non-Applicable Code

    For the second kind of instance, I'll use the term "non-applicable code."  This describes code for which you have no interest in static analysis warnings.  While this may sound contradictory to the last point, it differs subtly.

    You do not control all code in your codebase, and not all code demands the same level of scrutiny about the same concepts.  For example, do you have code in your codebase driven by a framework?  Many frameworks force some sort of inheritance scheme on you or the implementation of an interface.  If the name of a method on a third party interface violates a naming convention, you need not be dinged by your tool for simply implementing it.

    In general, you'll find warnings that do not universally apply.  Test projects differ from your production code.  GUI projects differ from data access layer ones.  And NuGet packages or generated code remain entirely outside of your control.  Assuming the decision to use these things happened in the past, turning off the analysis warnings makes sense.

    Cosmetic Code Counter to Your Team's Standard

    So far, I've talked about the tool making a mistake and the tool getting things right on the wrong code.  This third case presents a thematically similar consideration.  Instead of a mistake or misapplication, though, this involves a misfit.

    Many tools out there offer purely cosmetic concerns.  They'll flag field variables not prepended with underscores or methods with camel casing instead of Pascal casing.  Assuming those jibe with your team's standards, you have no issues.  But if they don't, you have two options: change the tool or change your standard.  Generally speaking, you probably want to err on the side of complying with broad standards.  But if your team is settled on its standard, then turn off those warnings or configure the tool.

    When You're Buried in Warnings

    Speaking of warnings, I'll offer another point that relates to them, but with an entirely different theme.  When your team is buried in warnings, you need to take action.

    Before I talk about turning off warnings, however, consider fixing them en masse.  It may seem daunting, but I suspect you might find yourself surprised at how quickly you can wrangle them down to a manageable number.

    However, if this proves too difficult or time-consuming, consider force ranking the warnings, and (temporarily) turning off all except the top, say, 200.  Make it part of your team's work to eliminate those, and then enable the next 200.  Keep at it until you eliminate the warnings.  And remember, in this case, you're disabling warnings only temporarily.  Don't forget about them.

    When You Have an Intelligent Disagreement

    Last up comes the most perilous reason for turning off static analysis warnings.  This one also happens to occur most frequently, in my experience.  People turn them off because they know better than the static analysis tool.

    Let's stop for a moment and contemplate this.  Teams of workaday developers out there tend to blithely conclude that they know their business.  In fact, they know their business better than people whose job it is to write static analysis tools that generate these warnings.  Really?  Do you like those odds?

    Below the surface, disagreement with the tool often masks resentment at being called "wrong" or "non-compliant."  Turning the warnings off thus becomes a matter of pride or mild laziness.  Don't go this route.

    If you want to ignore warnings because you believe them to be wrong, do research first.  Only allow yourself to turn off warnings when you have a reasoned, intelligent, research-supported argument as to why you should do so.

    When in Doubt, Leave 'em On

    In this post, I have gingerly walked through scenarios in which you may want to turn off static analysis warnings and guidance.  For me, this exercise produces some discomfort because I rarely find this advisable.  My default instinct is thus not to encourage such behavior.

    That said, I cannot deny that you will encounter instances where this makes sense.  But whatever you do, avoid letting this become common or, worse, your default.  If you have the slightest bit of doubt, leave them on.   Put your trust in the vendors of these tools -- they know their business.  And steering you in bad directions is bad for business.

    Learn more about how CodeIt.Right can automate your team standards, make it easy to ignore specific guidance violations, and keep track of them.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Wednesday, 19 October 2016 16:19:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 11 October 2016

    More years ago than I'd care to admit, I took a software engineering course as part of my graduate CS program.  At the time, I worked a full-time job during the day and did remote classes in the evening.  As a result, I disproportionately valued classes with applicability to my job.  And this class offered plenty of that.

    We scratched the surface on such diverse topics as agile methodologies, automated testing, cost of code ownership, and more.  But I found myself perhaps most interested by the dive we did into refactoring.  The idea of reworking the internal structure of code while preserving inputs and outputs is a surprisingly complex one.

    Historical Complexity of Refactoring

    At the risk of dating myself, I took this course in the fall of 2006.  While automated refactorings in your IDE now seem commonplace, back then, they were hard.  In fact, the professor of the course considered them to be sufficiently difficult as to steer a group of mine away from a project implementing some.  In the world of 2006, I suspect he had the right of it.  We steered clear.

    In 2016, implementing automated refactorings still presents a challenge.  But modern tool and IDE vendors can stand on the shoulders of giants, so to speak.  Back then?  Not so much.

    Refactorings present a unique challenge to tool vendors because of the inherent risk.  They can really screw up users' code.  If a mistake happens, best case scenario is that the resultant code fails to compile because then, at least, it fails fast.  Worse still is semantically and syntactically correct code that somehow behaves improperly.  In this situation, a refactoring -- a safe change to code -- becomes a modification to the behavior of production code instead.  Ouch.

    On top of the risk, the implementation of refactoring anywhere beyond the trivial involves heady concepts such as abstract syntax trees.  In other words, it's not for lightweights.  So to recap, refactoring is risky and difficult.  And this is the landscape faced by tool authors.

    I Don't Fix -- I Just Flag

    If you live in the US, you may have seen a commercial that features a funny quip.  If I'm not mistaken, it advertises for some sort of fraud prevention services.  (Pardon any slight inaccuracies, as I recount this as best I can, from memory.)

    In the ad, bank robbers hold a bank hostage in a rather cliché, dramatic scene.  Off to the side, a woman stands near a security guard, asking him why he didn't do anything to stop it.  "I'm not a robbery prevention service -- I'm a robbery monitoring service.  Oh, by the way, there's a robbery." (here is a copy of the commercial)

    It brings a chuckle, but it also brings an underlying point.  In many situations, monitoring alone can prove woefully ineffective, prompting frustration.  As a former manager and current consultant, I generally advise people that they should only point out problems when they have also prepared proposed solutions.  It can mean the difference between complaining and solving.

    So you can imagine and probably share my frustration at tools that just flag problems and leave it to you to investigate further and fix them.  We feel like the woman standing next to the "robbery monitor," wondering how useful the service is to us.

    Levels of Solution

    Going back to the subject of software development, we see this dynamic in a number of places.  The compiler, the IDE, productivity add-ins, static analysis tools, and linting utilities all offer us warnings to heed.

    Often, that's all we get.  The utility says, "hey, something is wrong here, but you're going to have to figure out what."  I tend to think of that as the basic level of service, or level 0, if you will.

    The next level, level 1, involves at least offering some form of next action.  It might be as simple as offering a help file, inline reading, or a link to more information.  Anything above "this is a problem."

    Level 2 ups the ante by offering a recommendation for what to do next.  "You have a dependency cycle.  You should fix this by looking at these three components and removing one mutual dependency."  It goes beyond giving you a next thing to do and gives you the next thing to do.

    Level 3 rounds out the field by actually performing the action for you (following a prompt, of course).  "You've accidentally hidden a method on the parent class.  Click here to rename or click here to make parent virtual."  That's just an example off the top, of course, but it illustrates the interaction paradigm.  "We've noticed a problem, and you can click here to fix it."
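
    For a picture of what that parent-class example might look like in code, here is a hypothetical C# sketch -- the Shape and Circle types are invented purely for illustration.

        public class Shape
        {
            // Before the fix, this method was non-virtual, so Circle.Draw() hid it
            // rather than overriding it (compiler warning CS0108).  "Make the parent
            // virtual" is exactly the kind of one-click fix a level 3 tool can offer.
            public virtual void Draw() { /* base rendering */ }
        }

        public class Circle : Shape
        {
            public override void Draw() { /* circle-specific rendering */ }
        }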

    Fixes in Your Tooling

    When evaluating your own tools, look to climb as high up this hierarchy as you can.  Favor tools that identify problems, but offer fixes whenever possible.

    There are a number of such tools out there, including CodeIt.Right.  Using tools like this is a pleasure because it removes the burden of research and implementation from you.  You can still do the research if you want -- but at your leisure, rather than while you're trying to accomplish something else.

    The other important concern here is that you find trusted tooling to help you with this sort of thing.  After all, you don't want something messing with your source code if it might mess up your source code.  But, assuming you can trust it, this provides an invaluable boost to your effectiveness by automatically resolving your problems and by helping you learn.

    In the year 2016, we have far more tooling available, with a far better track record, than we did in 2006.  Leverage it whenever possible so that you can focus on solving the pressing problems of your day to day work.

    Tools at your disposal

    SubMain offers CodeIt.Right that easily integrates into Visual Studio for flexible and intuitive "We've noticed a problem, and you can click here to fix it." solution.

    Learn more about how CodeIt.Right can automate your team standards and improve code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Tuesday, 11 October 2016 08:41:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Thursday, 29 September 2016

    In professional contexts, I think that the word "standard" has two distinct flavors.  So when we talk about a "team standard" or a "coding standard," the waters muddy a bit.  In this post, I'm going to make the case for a team standard.  But before I do, I think it important to discuss these flavors that I mention.  And keep in mind that we're not talking dictionary definition as much as the feelings that the word evokes.

    First, consider standard as "common."  To understand what I mean, let's talk cars.  If you go to buy a car, you can have an automatic transmission or a standard transmission.  Standard represents a weird naming choice for this distinction since (1) automatic transmissions dominate (at least in the US) and (2) "manual" or "stick-shift" offer much better descriptions.  But it's called "standard" because of historical context.  Once upon a time, automatic was a new sort of upgrade, so the existing, default option became boringly known as "standard."

    In contrast, consider standard as "discerning."  Most commonly you hear this in the context of having standards.  If some leering, creepy person suggested you go out on a date to a fast food restaurant, you might rejoin with, "ugh, no, I have standards."

    Now, take these common contexts for the word to the software team room.  When someone proposes coding standards, the two flavors make themselves plain in the team members' reactions.  Some like the idea, and think, "it's important to have standards and take pride in our work."  Others hear, "check your creativity at the gate, because around here we write standard, default code."

    What I Mean by Standard

    Now that I've drawn the appropriate distinction, I feel it appropriate to make my case.  When I talk about the importance of a standard, I speak with the second flavor of the word in mind.  I speak about the team looking at its code with a discerning attitude.  Not just any code can make it in here -- we have standards.

    These can take somewhat fluid forms, and I don't mean to be prescriptive.  The sorts of standards that I like to see apply to design principles as much as possible and to cosmetic concerns only when they have to.

    For example, "all non-GUI code should be test driven" and "methods with more than 20 lines should require a conversation to justify them" represent the sort of standards I like my teams to have.  They say, "we believe in TDD" and "we view long methods as code smells," respectively.  In a way, they represent the coding ethos of the group.

    On the other side of the fence lie prescriptions like, "all class fields shall be prepended with underscores" and "all methods shall be camel case."  I consider such concerns cosmetic, since they involve appearance rather than design or runtime behavior.  Cosmetic concerns are not important... unless they are.  If the team struggles to read code and becomes confused because of inconsistency, then such concerns become important.  If the occasional quirk presents no serious readability issues, then prescriptive declarations about it stifle more than they help.

    Having standards for your team's work product does not mean mandating total homogeneity.

    Why Have a Standard at All?

    Since I'm alluding to the potentially stifling effects of a team standard, you might reasonably ask why we should have them at all.  I can assert that I'm interested in the team being discerning, but is it really just about defining defaults?  Fair enough.  I'll make my case.

    First, consider something that I've already mentioned: maintenance.  If the team can easily read code, it can more easily maintain that code.  Logically, then, if the team all writes fairly similar code, they will all have an easier time reading, and thus maintaining that code.  A standard serves to nudge teams in this direction.

    Another important benefit of the team standard revolves around the integrity of the work product.  Many teams' standards incorporate methodology for security, error handling, logging, etc.  Thus the established standard arms the team members with ways to ensure that the software behaves properly.

    And finally, well-done standards can help less experienced team members learn their craft.  When such people join the team, they tend to look to established folks for guidance.  Sadly, those people often have the most on their plate and the least time.  The standard can thus serve as teacher by proxy, letting everyone know the team's expectations for good code.

    Forget the Conformity (by Automating)

    So far, all of my rationale follows a fairly happy path.  Adopt a team standard, and reap the rewards: maintainability, better software, learning for newbies.  But equally important is avoiding the dark side of team standards.  Often this dark side takes the form of nitpicking, micromanagement and other petty bits of nastiness.

    Please, please, please remember that a standard should not elevate conformity as a virtue.  It should represent shared values and protection of work product quality.  Therefore, in situations where conformity (uniformity) is justified, you should automate it.  Don't make your collaborative time about telling people where to put spaces and brackets -- program your IDE to do that for you.

    Make Justification Part of the Standard

    Another critical way to remove the authoritarian vibe from the team standard is one that I rarely see.  And that mystifies me a bit because you can do it so easily.  Simply make sure you justify each item contained in the standard.

    "Methods with more than 20 line of code should prompt a conversation," might find a home in your standard.  But why not make it, "methods with more than 20 lines of code should prompt a conversation because studies have demonstrated that defect rate increases more than linearly with lines of code per method?"  Wow, talk about powerful.

    This little addition takes the authoritarian air out of the standard, and it also helps defuse squabbles.  And, best of all, people might just learn something.

    If you start doing this, you might also notice that boilerplate items in a lot of team standards become harder to justify.  "Prepend your class fields with m underscore" becomes "prepend your class fields with m underscore because... wait, why do we do that again?"

    Prune and Always Improve

    When you find yourself trailing off at "because," you have a problem.  Something exists in your team standard that you can't justify.  If no one can justify it, then rip it out.  Seriously, get rid of it.  Having items that no one can justify starts to put you in conformity-for-the-sake-of-conformity territory.  And that's when the standard goes from "discerning" to "boring."

    Let this philosophy guide your standard in general.  Revisit it frequently, and audit it for valid justifications.  Sometimes justifications will age out of existence or seem lame in retrospect.  When this happens, do not hesitate to revisit, amend, or cull.  The best team standards are neither boring nor static.  The best team standards reflect the evolving, growing philosophy of the team.

    Related resources

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Learn more about how CodeIt.Right can automate your team standards and improve code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Thursday, 29 September 2016 07:41:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 20 September 2016

    If you write software, the term "feedback loop" might have made its way into your vocabulary.  It charts a slightly indirect route from its conception into the developer lexicon, though, so let's start with the term's origin.  A feedback loop, in general systems, uses its output as one of its inputs.

    Kind of vague, huh?  I'll clarify with an example.  I'm actually writing this post from a hotel room, so I can see the air conditioner from my seat.  Charlotte, North Carolina, my temporary home, boasts some pretty steamy weather this time of year, so I'm giving the machine a workout.  Its LED display reads 70 Fahrenheit, and it's cranking to make that happen.

    When the AC unit hits exactly 70 degrees, as measured by its thermostat, it will take a break.  But as soon as the thermostat starts inching toward 71, it will turn itself back on and start working again.  Such is the Sisyphean struggle of climate control.

    Important for us here, though, is the mechanics of this system.  The AC unit alters the temperature in the room (its output).  But it also uses the temperature in the room as input (if < 71, do nothing, else cool the room).  Climate control in buildings operates via feedback loop.

    Appropriating the Term for Software Development

    It takes a bit of a cognitive leap to think of your own tradecraft in terms of feedback loops.  Most likely this happens because you become part of the system.  Most people find it harder to reason about things from within.

    In software development, you complete the loop.  You write code, the compiler builds it, the OS runs it, you observe the result, and decide what to do to the code next.  The output of that system becomes the input to drive the next round.

    If you have heard the term before, you've probably also heard the term "tightening the feedback loop."  Whether or not you've heard it, what people mean by this is reducing the cycle time of the aforementioned system.  People throwing that term around look to streamline the write->build->run->write again process.

    A History of Developer Feedback Loops

    At the risk of sounding like a grizzled old codger, let me digress for a moment to talk about feedback loop history.  Long before my time came the punched card era.  Without belaboring the point, I'll say that this feedback loop would astound you, the modern software developer.

    Programmers would sit at key punch "kiosks", used to physically perforate forms (one mistake, and you'd start over).  They would then take these forms and have operators turn them into cards, stacks of which they would hold onto.  Next, they'd wait in line to feed these cards into the machines, which acted as a runtime interpreter.   Often, they would have to wait up to 24 hours to see the output of what they had done.

    Can you imagine?  Write a bit of code, then wait for 24 hours to see if it worked.  With a feedback loop this loose, you can bet that checking and re-checking steps received hyper-optimization.


    When I went to college and started my programming career, these days had long passed.  But that doesn't mean my early days didn't involve a good bit of downtime.  I can recall modifying C files in projects I worked on, and then waiting up to an hour for the code to build and run, depending on what I had changed.  xkcd immortalized this issue nearly 10 years ago, in one of its most popular comics.

    Today, you don't see this as much, though certainly, you could find some legacy codebases or juggernauts that took a while to build.  Tooling, technique, modern hardware and architectural approaches all combine to minimize this problem via tighter feedback loops.

    The Worst Feedback Loop

    I have a hypothesis.  I believe that a specific amount of time exists for each person that represents the absolute, least-optimal amount of time for work feedback.  For me, it's about 40 seconds.

    If I make some changes to something and see immediate results, then great.  Beyond immediacy, my impatience kicks in.  I stare at the thing, I tap impatiently, I might even hit it a little, knowing no good will come.  But after about 40 seconds, I simply switch my attention elsewhere.

    Now, if I know the wait time will be longer than 40 seconds, I may develop some plan.  I might pipeline my work, or carve out some other tasks with which I can be productive while waiting.  If, for instance, I can get feedback on something every 10 minutes, I'll kick it off, do some household chores, and periodically check on it.

    But, at 40 seconds, it resides in some kind of middle limbo, preventing any semblance of productivity.  I kick it off and check twitter.  40 seconds turns into 5 minutes when someone posts a link to some cool astronomy site.  I check back, forget what I did, and then remember.  I try again and wait 40 seconds.  This time, I look at a Buzzfeed article and waste 10 minutes as that turns into 4 Buzzfeed articles.  I then hate myself.

    The Importance of Tightening

    Why do I offer this story about my most sub-optimal feedback period?  To demonstrate the importance of diligence in tightening the loop.  Wasting a few seconds while waiting hinders you.  But waiting enough seconds to distract you with other things slaughters your productivity.

    With software development, you can get into a state of what I've heard described as "flow."  In a state of flow, the feedback loop creates harmony in what you're doing.  You make adjustments, get quick feedback, feel encouraged and productive, which promotes more concentration, more feedback, and more productivity.  You discover a virtuous circle.

    But just the slightest dropoff in the loop pops that bubble.  And, another dropoff from there (e.g. to 40 seconds for me) can render you borderline-useless.  So much of your professional performance rides on keeping the loop tight.

    Tighten Your Loop Further

    Modern tooling offers so many options for you.  Many IDEs will perform speculative compilation or interpretation as you code, making builds much faster.  GUI components can be rendered as you work, allowing you to see changes in real time as you alter the markup.  Unit tests slice your code into discrete, separately evaluated components, and continuous testing tools provide pass/fail feedback as you type.  Static code analysis tools offer you code review as you work, rather than in a code review session days later.  I could go on.

    The general idea here is that you should constantly seek ways to tune your day to day work.  Keep your eyes out for tools that speed up your feedback loop.  Read blogs and go to user groups.  Watch your coworkers for tips and tricks.  Claw, scratch, and grapple your way to shaving time off of your feedback loop.

    We've come a long way from punch cards and sword fights while code compiles.  But, in 10 or 30 years, we'll look back in amazement at how archaic our current techniques seem.  Put yourself at the forefront of that curve, and you'll distinguish yourself as a developer.

    Learn more about how CodeIt.Right can tighten the feedback loop and improve your code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Tuesday, 20 September 2016 07:37:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 24 August 2016

    In the world of programming, 15 years or so of professional experience makes me a grizzled veteran.  That certainly does not hold for the work force in general, but youth dominates our industry via the absolute explosion of demand for new programmers.  Given the tendency of developers to move around between projects and companies, 15 years have shown me a great deal of variety.

    Perhaps nothing has exemplified this variety more than the code review.  I've participated in code reviews that were grueling, depressing marathons.  On the flip side, I've participated in ones where I learned things that would prove valuable to my career.  And I've seen just about everything in between.

    Our industry has come to accept that peer review works.  In the book Code Complete, author Steve McConnell cites it, in some circumstances, as the single most effective technique for avoiding defects.  And, of course, it helps with knowledge transfer and learning.  But here's the rub -- implemented poorly, it can also do a lot of harm.

    Today, I'd like to make the case for the automated code review.  Let me be clear.  I do not view this as a replacement for any manual code review, but as a supplement and another tool in the tool chest.  But I will say that automated code review carries less risk than its manual counterpart of having negative consequences.

    The Politics

    I mentioned extremely productive code reviews.  For me, this occurred when working on a team with those I considered friends.  I solicited opinions, got earnest feedback, and learned.  It felt like a group of people working to get better, and that seemed to have no downside.

    But I've seen the opposite, too.  I've worked in environments where the air seemed politically charged and competitive.  Code reviews became religious wars, turf battles, and arguments over minutiae.  Morale dipped, and some people went out of their way to find ways not to participate.  Clearly no one would view this as a productive situation.

    With automated code review, no politics exist.  Your review tool is, of course, incapable of playing politics.  It simply carries out its mission on your behalf.  Automating parts of the code review process -- especially something relatively arbitrary such as coding standards compliance -- can give a team many fewer opportunities to posture and bicker.

    Learning May Be Easier

    As an interpersonal activity, code review carries some social risk.  If we make a silly mistake, we worry that our peers will think less of us.  This dynamic is mitigated in environments with a high trust factor, but it exists nonetheless.  In more toxic environments, it dominates.

    Having an automated code review tool creates an opportunity for consequence-free learning.  Just as the tool plays no politics, it offers no judgment.  It just provides feedback, quietly and anonymously.

    Even in teams with a supportive dynamic, shy or nervous folks may prefer this paradigm.  I'd imagine that anyone would, to an extent.  An automated code review tool points out mistakes via a fast feedback loop and offers consequence-free opportunity to correct them and learn.

    Catching Everything

    So far I've discussed ways to cut down on politics and soothe morale, but practical concerns also bear mentioning.  An automated code review tool necessarily lacks the judgment that a human has.  But it has more thoroughness.

    If your team only performs peer review as a check, it will certainly catch mistakes and design problems.  But will it catch all of them?  Or is it possible that you might miss one possible null dereference or an empty catch block?  If you automate the process, then the answer becomes "no, it is not possible."

    For the items in a code review that you can automate, you should, for the sake of thoroughness.

    Saving Resources and Effort

    Human code review requires time and resources.  The team must book a room, coordinate schedules, use a projector (presumably), and assemble in the same location.  Of course, allowing for remote, asynchronous code review mitigates this somewhat, but it can't eliminate the salary dollars spent on the activity.  However you slice it, code review represents an investment.

    In this sense, automating parts of the code review process has a straightforward business component.  Whenever possible and economical, save yourself manual labor through automation.

    When there are code quality and practice checks that can be done automatically, do them automatically.  And it might surprise you to learn just how many such things can be automated.

    Improbable as it may seem, I have sat in code reviews where people argued about whether or not a method would exhibit a runtime behavior, given certain inputs.  "Why not write a unit test with those inputs," I asked.  Nobody benefits from humans reasoning about something the build, the test suite, the compiler, or a static analysis tool could tell them automatically.
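
    As a hypothetical illustration (using xUnit here, though any test framework works the same way), the disputed input simply becomes a test case:

        using Xunit;

        public class PriceCalculator
        {
            public decimal Total(decimal unitPrice, int quantity) => unitPrice * quantity;
        }

        public class PriceCalculatorTests
        {
            // Rather than debating in review what happens for a zero quantity,
            // encode the question as a test and let the suite answer it.
            [Fact]
            public void Total_is_zero_when_quantity_is_zero()
            {
                var calculator = new PriceCalculator();

                Assert.Equal(0m, calculator.Total(unitPrice: 9.99m, quantity: 0));
            }
        }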

    Complementary Approach

    As I've mentioned throughout this post, automated code review and manual code review do not directly compete.  Humans solve some problems better than machines, and vice-versa.  To achieve the best of all worlds, you need to create a complementary code review approach.

    First, understand what can be automated, or, at least, develop a good working framework for guessing.  Coding standard compliance, for instance, is a no-brainer from an automation perspective.  You do not need to pay humans to figure out whether variable names are properly cased, so let a review tool do it for you.  You can learn more about the possibilities by simply downloading and trying out review and analysis tools.

    Secondly, socialize the tooling with the team so that they understand the distinction as well.  Encourage them not to waste time making a code review a matter of checking things off of a list.  Instead, manual code review should focus on architectural and practice considerations.  Could this class have fewer responsibilities?  Is the builder pattern a good fit here?  Are we concerned about too many dependencies?

    Finally, I'll offer the advice that you can adjust the balance between manual and automated review based on the team's morale.  Do they suffer from code review fatigue?  Have you noticed them sniping a lot?  If so, perhaps lean more heavily on automated review.  Otherwise, use the automated review tools simply to save time on things that can be automated.

    If you're currently not using any automated analysis tools, I cannot overstate how important it is that you check them out.  Our industry built itself entirely on the premise of automating time-consuming manual activities.  We need to eat our own dog food.

    Related resources

    Tools at your disposal

    SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

    Learn more about how CodeIt.Right can help with automated code review and improve your code quality.

    About the Author

    Erik Dietrich

    I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

    posted on Wednesday, 24 August 2016 14:06:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 04 May 2016

    The Beta for CodeIt.Right v3 has arrived – the new major version of our automated code review and code quality analysis product. Here are the new version highlights:

    • Official support for VS2015 Update 2 and ASP.NET 5/ASP.NET Core 1.0 solutions
    • New Review Code commands:
      • only opened files
      • only checked out files
      • only files modified after specific date
    • Improved Profile Editor with advanced rule search and filtering
    • Improved look and feel for Violations Report and Editor violation markers
    • New rules
    • Setting to keep the OnDemand and Instant Review profiles in sync
    • New Jenkins integration plugin
    • Batch correction is now turned off by default
    • Almost every CodeIt.Right action can now be assigned a keyboard shortcut
    • Preview of the new Dashboard feature

    For the complete and detailed list of the v3.0 changes see What's New in CodeIt.Right v3.0

    To give the v3.0 Beta a try, download it here - http://submain.com/download/codeit.right/beta/

    Please Note: while our early adopters indicate that the v3.0 Beta has been very stable for them, all the usual Beta software advisory provisions still apply.

     

    New Review Code commands


    We have renamed the Start Analysis menu to Review Code – it is still the same feature; the new name simply highlights the automated code review nature of the product.  The new commands are:

    • Analyze Open Files command - analyze only the files opened in Visual Studio tabs
    • Analyze Checked Out Files command - analyze only files that are checked out from source control
    • Analyze Modified After – analyze only files that have been modified after specific date

    Known Beta issue – pressing Update only updates the code review criteria; you still need to run the Review Code command manually. In the release version, pressing Update will also run the code review.

     


    Improved Profile Editor

    The Profile Editor now features

    • Advanced rule filtering by rule id, title, name, severity, scope, target, and programming language
    • Lets you quickly show only active, only inactive, or all rules in the profile
    • Shows totals for the profile rules - total, active, and filtered
    • Improved adding rules with multiple categories

     

    Dashboard Preview

    While this is not what we envision it finally looking like, an early preview of the Dashboard feature has shipped with the Beta to give you a rough idea of what we are after – a code quality dashboard view that you can customize to your needs.

     

    Feedback

    We would love to hear your feedback on the new features! Please email it to us at support@submain.com or post in the CodeIt.Right v3 Beta Forum.


    posted on Wednesday, 04 May 2016 06:31:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 24 February 2015
    If you didn't make it to the webinar, we recommend you watch the webinar recording first - the questions and answers below will make much more sense then.

    At last month's webinar, "Asynchronous Programming Demystified," Stephen Cleary, Microsoft MVP and author of "Concurrency in C# Cookbook," introduced the async and await keywords and described how they work.

    During the webinar, viewers asked a number of great questions that Stephen didn't have sufficient time to answer -- 88 questions in total. Fortunately, Stephen was kind enough to provide his answers below:

    Q: You showed us how to correctly use and call async methods. But how do I create an async API out of nothing?

    A: The low-level type for this is TaskCompletionSource, which allows you to complete a task manually. There are some higher-level wrappers as well, e.g., Task.Factory.FromAsync will take the old Begin/End style asynchronous methods and wrap them into a task.
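
    As a rough sketch of that manual completion (the LegacyDownloader type and its event are invented purely for illustration), wrapping an event-based API with TaskCompletionSource might look like this:

        using System;
        using System.Threading.Tasks;

        // Hypothetical event-based API that we want to expose as an awaitable Task.
        public class LegacyDownloader
        {
            public event Action<string, Exception> DownloadCompleted;

            public void StartDownload(Uri uri)
            {
                // A real implementation would do the work and raise the event when done.
                var handler = DownloadCompleted;
                if (handler != null) handler("fake content", null);
            }
        }

        public static class LegacyDownloaderExtensions
        {
            public static Task<string> DownloadTaskAsync(this LegacyDownloader downloader, Uri uri)
            {
                var tcs = new TaskCompletionSource<string>();

                // Complete the task manually when the legacy callback fires.
                downloader.DownloadCompleted += (content, error) =>
                {
                    if (error != null) tcs.SetException(error);
                    else tcs.SetResult(content);
                };

                downloader.StartDownload(uri);
                return tcs.Task;
            }
        }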

    Q: Can we use Async inside LINQ methods (with lambda expressions)?

    A: LINQ is inherently synchronous, so there isn't much you can do asynchronously. E.g., you can use Select with an asynchronous delegate, but that gives you a sequence of tasks, and there isn't much you can do with them other than using something like Task.WhenAll. If you want an asynchronous sequence or stream abstraction, a better fit would be Reactive Extensions.
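
    For instance, a sketch of using Select with an asynchronous delegate and then awaiting the whole batch (the class and method names here are illustrative):

        using System;
        using System.Linq;
        using System.Net.Http;
        using System.Threading.Tasks;

        public static class PageFetcher
        {
            public static async Task<string[]> FetchAllAsync(Uri[] uris)
            {
                using (var client = new HttpClient())
                {
                    // Select gives a sequence of Task<string>; WhenAll awaits them concurrently.
                    var downloadTasks = uris.Select(uri => client.GetStringAsync(uri));
                    return await Task.WhenAll(downloadTasks);
                }
            }
        }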

    Need Async Guidance?
    CodeIt.Right includes extensive Async Best Practices rule set that will guide you through the intricacies of Async. Start a no-cost 14-day trial of CodeIt.Right, SubMain's code quality analysis, automated code review and refactoring for Visual Studio.

    Q: What would be the best approach to integrating a 3rd party synchronous library/API into, let's say, our existing asynchronous API? Since we do want to maintain asynchrony, should we wrap it in Task.Run or something else?

    A: Answered in webinar

    Q: Does async await help with AJAX calls?

    A: Async can exist independently on the server and the client. You can use async on the client to help you call AJAX endpoints (i.e., call several of them concurrently). You can also use async on the server to help you implement AJAX endpoints.

    Q: Will try-catch around await keyword really catch all exceptions that can be raised within the called async method?

    A: Yes; an async method will always place its exceptions on the task it returns, and when you await that task, it will re-raise those exceptions, which can be caught by a regular try/catch.
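
    A small sketch of that behavior (the method names are illustrative):

        using System;
        using System.Threading.Tasks;

        public static class ExceptionDemo
        {
            private static async Task FailAsync()
            {
                await Task.Delay(10);
                // The exception is captured and placed on the returned task.
                throw new InvalidOperationException("boom");
            }

            public static async Task RunAsync()
            {
                try
                {
                    await FailAsync();   // awaiting the task re-raises the exception here
                }
                catch (InvalidOperationException ex)
                {
                    Console.WriteLine("Caught: " + ex.Message);
                }
            }
        }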

    Q: Is it true that async method is not in fact started until either await, Wait or .Result is called for it?

    A: No. An async method starts when it is called. The await/Wait/Result will just wait for the method to complete.

    Q: We use MSMQ for a lot of our asynchronous WCF processing. It's heavy and expensive. Can async/await replace some if not all of the MSMQ processing?

    A: Async/await is not a direct replacement for any kind of queuing. You can use async to interact with the queue, though. The MessageQueue class unfortunately does not follow a standard asynchronous pattern, but you can use TaskCompletionSource to create await-compatible wrapper methods. The MSDN docs "Interop with Other Asynchronous Patterns and Types" under "Task-based Asynchronous Pattern" should get you started.

    Q: IAsyncResult fits very nicely with Windows low level and IOPorts. Does async/await have the same high performance?

    A: Answered in webinar

    Q: Can you explain when it is appropriate to use ConfigureAwait(false)?

    A: Anytime that the async method does not need its context, it should use ConfigureAwait(false). This is true for most library code.
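
    For example, a sketch of library code that has no need for the calling context after its await (the class name here is hypothetical):

        using System.IO;
        using System.Threading.Tasks;

        public static class FileLibrary
        {
            public static async Task<string> ReadAllTextAsync(string path)
            {
                using (var reader = new StreamReader(path))
                {
                    // Nothing after this await needs the UI or request context,
                    // so avoid capturing it.
                    return await reader.ReadToEndAsync().ConfigureAwait(false);
                }
            }
        }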

    Q: Re. Task.Run() blocking a background thread... even using await will block a thread at some point surely?

    A: No, await does not block a thread. I have more details in my blog post "There Is No Thread".

    Q: Do you need to tweak machine/web config to get greater throughput for asynchrony?

    A: Answered in webinar

    Q: What about WhenAll?

    A: WhenAll can be used to concurrently execute multiple asynchronous operations.

    Q: What are the main problems using ContinueWith? There are a lot of companies that have this type of implementation because of legacy code.

    A: ContinueWith is problematic for several reasons. For one, a single logical method must be broken up into several delegates, so the code is much more difficult to follow than a regular await. Another problem is that the defaults are not ideal; in particular, the default task scheduler is not TaskScheduler.Default as most developers assume - it is in fact TaskScheduler.Current. This unexpected task scheduler can cause issues like the one I describe in my blog post "StartNew Is Dangerous".

    Q: Why is button1_Click using the async keyword, when it is calling the async method?

    A: Any method that uses the await keyword must be marked async. Normally, I would make the method an "async Task" method, but since this is an event handler, it cannot return a task, so I must make it an "async void" method instead.
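
    A sketch of that pattern -- an "async void" event handler awaiting an "async Task" method (the form and download method below are stand-ins for the webinar demo, not its actual code):

        using System;
        using System.Threading.Tasks;
        using System.Windows.Forms;

        public class MainForm : Form
        {
            // Event handlers are the one place "async void" is expected; everywhere
            // else, prefer "async Task" so callers can await and observe exceptions.
            private async void button1_Click(object sender, EventArgs e)
            {
                string result = await DownloadAsync();
                Text = result;   // back on the UI thread after the await
            }

            // Stand-in for the webinar's download method.
            private async Task<string> DownloadAsync()
            {
                await Task.Delay(1000);
                return "done";
            }
        }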

    Q: Are there any means to debug async code easily?

    A: VS2013 has pretty good support for debugging asynchronous code, and the tooling continues to improve in this area. The one drawback to async debugging is that the call stack is not as useful. This is not a problem of async; we developers have gotten used to the idea that the call stack is a trace of how the program got to where it is - but that mental model is incorrect; the call stack actually tells the program where to go next. I have an AsyncDiagnostics library that preserves "how the program got to where it is", which is sometimes helpful when trying to track down an issue.

    Q: In ASP.NET there are many queues. What will happen when the system is overloaded and we fill the async I/O ports? Will it throw an exception, or will it act as it would without async?

    A: When the queues fill up, it will act the same. Async provides better scalability, but not infinite scalability. So you can still have requests timing out in the queues or being rejected if the queues fill up. Note that when the async request starts, it is removed from the queue, so async relieves pressure on the queues.

    Q: Let's say I have a WinForms app with a method that renders some image and takes 60 seconds, for example. When the user presses the Begin button, I want the render to occur and later say "Finished" when done, without blocking in the meantime. Can you suggest a strategy?

    A: Answered in webinar

    Q: Is it acceptable to create asynchronous versions of synchronous methods by just calling the synchronous methods with Task.Run

    A: Answered in webinar

    Q: Is it really bad to wrap async code in sync code? I thought that was a very bad practice, but I have seen OAuth packages wrapping async code in sync methods with some kind of TaskHelper, e.g., GetUser internally using GetUserAsync.

    A: The problem with library code is that sometimes you do want both asynchronous and synchronous APIs. But you don't want to duplicate your code base. It is possible to do sync-over-async in some scenarios, but it's dangerous. You have to be sure that your own code is always using ConfigureAwait(false), and you also have to be sure that any code your code calls also uses ConfigureAwait(false). (E.g., as of this writing, HttpClient does on most platforms but not all). If anyone ever forgets a single ConfigureAwait(false), then the sync-over-async code can cause a deadlock.
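
    A minimal sketch of the deadlock scenario, assuming a context-capturing environment such as WinForms, WPF, or classic ASP.NET (the URL and class name are illustrative):

        using System.Net.Http;
        using System.Threading.Tasks;

        public static class DeadlockExample
        {
            private static async Task<string> GetSiteAsync()
            {
                using (var client = new HttpClient())
                {
                    // Without ConfigureAwait(false), the continuation tries to resume
                    // on the captured (UI or request) context.
                    return await client.GetStringAsync("http://example.com");
                }
            }

            // If called from the UI thread, .Result blocks that thread while the
            // continuation above waits for the same thread -- a classic deadlock.
            public static string GetSiteBlocking()
            {
                return GetSiteAsync().Result;
            }
        }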

    Q: If you have a large application with lots of different async operations, how do you handle the correct "flow" so the user will not use the application in the wrong way? Are there best practices for this?

    A: The approach I usually use is to just disable/enable buttons as I want them to be used. There is a more advanced system for UI management called Reactive UI (RxUI), but it has a higher learning curve.

    Async Guidance at your fingertips!
    CodeIt.Right includes extensive Async Best Practices rule set that will guide you through the intricacies of Async. Start a no-cost 14-day trial of CodeIt.Right, SubMain's code quality analysis, automated code review and refactoring for Visual Studio.

    Q: Does await produce managed code in .NET? Can we write unmanaged code within async/await blocks?

    A: Await does produce managed (and safe) code. I believe unsafe code can be within an async method (though I've never tried it), but await cannot be used within an unsafe code block.

    Q: Any advice on using a DAL (sync with MSSQL) with async calls? Use Task.Run or rewrite?

    A: I'd recommend using the asynchronous support in EF6 to rewrite the DAL as purely asynchronous. But if you are in a situation where you need UI responsiveness and don't want to take the time to make it asynchronous, you can use Task.Run as a temporary workaround.

    Q: But you do want it for CPU bound code on client UIs (WPF, WinForms, Phone, etc.)

    A: Answered in webinar

    Q: When I am awaiting on several tasks, is it better to use WaitAll or WhenAll?

    A: WaitAll can cause deadlock issues if the tasks are asynchronous, just like Result and Wait do. So, I would recommend "await Task.WhenAll(...)" for asynchronous code.

    Q: You say await Task.Run(() => Method()) is OK to do... I'm assuming it's not best practice, or just not the way Stephen does it? I guess, is it a common or personal practice?

    A: Answered in webinar

    Q: Can you explain the Server Side Scalability benefit a little more?

    A: Answered in webinar

    Q: If there is a use case where I have to call an async method from synchronous code, what is the best way to do that?

    A: "There is no good way to do sync-over-async that works in every scenario. There are only hacks, and there are some scenarios where no hack will work. So, for sure, the first and best approach is to make the calling code async; I have a blog post series on "async OOP" that covers ways to make it async even if it doesn't seem possible at first.

    If you absolutely must do sync-over-async, there are a few hacks available. You can block on the async code (e.g., Result); you can execute the async code on a thread pool thread and block on that (e.g., Task.Run(() => ...).Result); or you can do a nested message loop. These approaches are all described in Stephen Toub's blog post "Should I Expose Synchronous Wrappers for My Asynchronous Methods?"

    Q: Would "unit testing" be part of "Async Best Practices"? As in, would you be giving tips on best way to unit test in that future proposed webinar?

    A: Answered in webinar

    Q: What is the appropriate way to unit test an async method?

    A: Answered in webinar

    Q: The benefit : "Responsiveness on the client side" sounds like a background process. I thought async wasn't a background thing...

    A: Answered in webinar

    Q: I've read and heard often that another thread is not created. I'm struggling to understand how I/O is occurring without a thread managing it while the main thread is released. I comprehend how it gets back, i.e. an event of sorts picking up on the stack where it left off.

    A: I have a blog post "There Is No Thread" that explains this in detail.

    Q: When implementing IUserStore for Identity, some members require you to implement a Task-returning async method, but I don't see any need to call anything async inside: Task IUserStoreMethod() { /* no async stuff, but it requires a Task, and it can't be changed because it is from the interface */ }. How should I write the body? Is Task.Run() inside the method body an exception here?

    A: Normally, I/O is asynchronous. So "saving" a user is an inherently I/O-bound operation, and should be asynchronous if possible. If you truly have a synchronous implementation (e.g., saving the user in memory as part of a unit test), then you can implement the asynchronous method by using Task.FromResult.
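
    As a simplified sketch (this IUserStore is a stand-in for illustration, not the real ASP.NET Identity interface):

        using System.Collections.Generic;
        using System.Threading.Tasks;

        public interface IUserStore
        {
            Task SaveAsync(string userName);
        }

        // An in-memory implementation with no real asynchronous work.
        public class InMemoryUserStore : IUserStore
        {
            private readonly List<string> _users = new List<string>();

            public Task SaveAsync(string userName)
            {
                _users.Add(userName);
                // Nothing asynchronous happened, so return an already-completed task.
                return Task.FromResult(0);
            }
        }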

    Q: Does Await spin a new thread under the hoods?

    A: Answered in webinar

    Q: What is the best way to call Async Methods from class constructors?

    A: Answered in webinar

    Q: Shouldn't the Click event handler be also renamed to ClickAsync?

    A: Answered in webinar

    Q: Is it possible to communicate progress from the async task?

    A: Yes. An asynchronous method can report progress by taking an IProgress parameter and calling its Report method. UI applications commonly use Progress as their implementation of IProgress. There's more information on MSDN under the "Task-based Asynchronous Pattern" topic.
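
    A small sketch of the pattern (the method and control names are illustrative):

        using System;
        using System.Threading.Tasks;

        public static class Downloader
        {
            // The asynchronous method reports progress through IProgress<T>.
            public static async Task DownloadAsync(IProgress<int> progress)
            {
                for (int percent = 0; percent <= 100; percent += 10)
                {
                    await Task.Delay(100);   // stand-in for real work
                    if (progress != null) progress.Report(percent);
                }
            }
        }

        // Calling code (typically UI) supplies a Progress<T>, whose callback runs
        // on the context it was created on -- e.g.:
        //   var progress = new Progress<int>(p => progressBar1.Value = p);
        //   await Downloader.DownloadAsync(progress);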

    Q: How would unit/integration test code coverage influence designs and usage of async/await?

    A: Answered in webinar

    Q: So if my UI uses await/async to call a WebAPI method, the method itself has to be async or else it will be blocking, correct?

    A: Answered in webinar

    Q: I have a project that interacts with SharePoint 2010 object model, so bound to .NET 3.5. Any caveats when using TPL for 3.5?

    A: .NET 3.5 is before the TPL was introduced (and well before async/await). There is an AsyncBridge project which attempts to back port the TPL and async support, but I haven't ever used it.

    Q: Can I use Async and await inside a sandboxed CRM Dynamics plugin?

    A: I don't know about Dynamics, sorry. But if they have support for .NET 4.5, I don't see why not.

    Q: How can, for example, the DownloadAsync method be canceled in a proper way from another UI action?

    A: Cancellation is done with the CancellationToken/CancellationTokenSource types in .NET. Usually, asynchronous methods just pass the CancellationToken through to whatever APIs they call. For more information, see the MSDN topics "Task-based Asynchronous Pattern" and "Cancellation in Managed Threads".
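
    As a sketch of that pass-through style (the class and URL here are illustrative):

        using System.Net.Http;
        using System.Threading;
        using System.Threading.Tasks;

        public class DownloadController
        {
            private CancellationTokenSource _cts;

            public async Task DownloadAsync()
            {
                _cts = new CancellationTokenSource();
                using (var client = new HttpClient())
                {
                    // Pass the token through; the called API observes cancellation
                    // and throws an OperationCanceledException if it is triggered.
                    var response = await client.GetAsync("http://example.com", _cts.Token);
                    response.EnsureSuccessStatusCode();
                }
            }

            // Wired to a Cancel button or any other UI action.
            public void Cancel()
            {
                if (_cts != null) _cts.Cancel();
            }
        }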

    Q: How to call an async method from a synchronous method or controller?

    A: Answered in webinar

    Q: Is .NET 4.5.1 the minimum for async / await?

    A: Answered in webinar

    Q: How do we do exception handling inside the DownloadAsync function?

    A: Answered in webinar

    Q: Can you explain how we can perform unit testing using these new keywords?

    A: Answered in webinar

    Q: Is async/await useful for WPF and Windows Form?

    A: Yes, async is useful in any UI scenario.

    Q: Between the Task Parallel Library and async/await, which one should we use?

    A: The Task Parallel Library is great for CPU-bound code. Async is better for I/O-bound code.

    Q: If you've got a normal MVC controller that returns a standard view... if that view contains AJAX code to fetch data from an async (WebAPI) controller, would the calling thread be blocked while the AJAX call is running? We have a situation at work where we can't switch pages before the AJAX call is done... which seems a bit weird to me.

    A: Answered in webinar

    Q: When building async controllers/methods, is there some way to tell that the code is actually running asynchronously? How can I tell that the code is non-blocking?

    A: Answered in webinar

    Need Async Guidance?
    CodeIt.Right includes extensive Async Best Practices rule set that will guide you through the intricacies of Async. Start a no-cost 14-day trial of CodeIt.Right, SubMain's code quality analysis, automated code review and refactoring for Visual Studio.
    posted on Tuesday, 24 February 2015 17:20:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 06 January 2015
    Recording of the webcast, slides and demo code have been posted to the website - watch it here
    Enjoy the recording, and please let us know how we can help!

    Featuring Stephen Cleary, Microsoft MVP

      Date: Wednesday, January 14th, 2015
      Time: 10:00 am PST / 1:00 pm EST

    Recording Available

    Asynchronous code using the new async and await keywords seems to be everywhere these days! These keywords are transforming the way programs are written. Yet many developers feel unsure about Async programming.

    Get demystified with Stephen Cleary, as he introduces the new keywords and describes how they work. Stephen is the author of "Concurrency in C# Cookbook" as well as several MSDN articles on asynchronous programming. Together, we'll cover:

    • How the async and await keywords really work
    • How to think about asynchronous code
    • The difference between asynchrony and parallelism
    • Common mistakes when learning asynchronous programming
    • Fixing Async code smells with CodeIt.Right

    If this time isn't convenient for you, register and we will send you the recording afterwards.

    Recording Available

    posted on Tuesday, 06 January 2015 05:50:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 28 October 2014

    A recording of the webcast and a copy of the slides have been posted to the web site - watch it here

    Enjoy the recording, and please let us know how we can help!

    Featuring Steve Smith - CTO, Falafel Software; Microsoft Regional Director; Microsoft MVP

      Date: Wednesday, November 12th, 2014
      Time: 10:00 am PST / 1:00 pm EST

    Recording Available

    Refactoring is a critical developer skill that helps keep code from collapsing under its own weight. Steve is the author of "Refactoring Fundamentals," available on Pluralsight, which covers the subject of code smells and refactoring in depth. This webinar will provide an introduction to the topics of code smells and refactoring, and should help you improve your existing code.

    Join Steve Smith as he shows some common code issues, and how to identify and refactor them with SubMain's CodeIt.Right code quality tool. In this webcast Steve will cover:

    • What are Code Smells
    • Principle of Least Surprise
    • Rules of Simple Design
    • Explain code smells like Long Method, Large Class, Primitive Obsession, Data Clumps, Poor Names, Inappropriate Abstraction Level and more
    • Demo using CodeIt.Right to find and resolve code issues

    If this time isn't convenient for you, register and we will send you the recording afterwards.

    Recording Available

    posted on Tuesday, 28 October 2014 14:57:18 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Thursday, 25 September 2014

    CodeIt.Right v2.7 is a maintenance release that includes:

    • Support for VS2013 Update 3 and newer
    • Improved compatibility with Shared/Universal App projects
    • Exported Violation Report now includes profile name, severity threshold, version of CodeIt.Right and duration of the analysis
    • Exported Violation Report now includes information about Excluded Projects, Files, Rules and Violations
    • Command line version console output shows profile name as well as number of excluded projects, files, rules and violations
    • Other improvements and fixes

    For detailed list please see What's New in CodeIt.Right v2.7

    How do I try it?

    Download v2.7 at http://submain.com/download/codeit.right/

    posted on Thursday, 25 September 2014 05:03:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Thursday, 22 May 2014

    A recording of the webcast and a copy of the slides have been posted to the web site - watch it here

    Enjoy the recording, and please let us know how we can help!

      Featuring David McCarter, Microsoft MVP 

      Date: Tuesday, June 3rd, 2014
      Time: 10:00 am PST / 1:00 pm EST

    Recording Available

    Join David McCarter and Serge Baranovsky as they discuss Microsoft Design Guidelines, the most popular coding standard among C# and VB teams, provide recommendations, and show how CodeIt.Right finds code issues, fixes them, and provides a painless experience when implementing Microsoft coding standards. In this webcast they will cover:

    • Benefits of coding standards
    • Microsoft Design Guidelines overview
    • Microsoft Guidelines category review and examples
    • Additional notes for VB developers
    • Demo using CodeIt.Right to find and resolve code issues
    • Bonus #1 - ASP.NET Security rules
    • Bonus #2 - Asynchronous programming best practice ruleset
    • Bonus #3 - Refactoring to Patterns

    If this time isn't convenient for you, register and we will send you the recording afterwards.

    Recording Available

    posted on Thursday, 22 May 2014 13:12:07 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Sunday, 04 May 2014

    CodeIt.Right v2.6 adds support for Shared Projects, introduces automated refactorings for the majority of StyleCop rules (when using StyleCop integration), and includes performance improvements and fixes:

    • Support for Shared Project introduced in VS2013 Update 2
    • In v2.5 we added StyleCop integration into CodeIt.Right analysis. In v2.6 we are adding 93 auto-fix refactorings for StyleCop violations and currently covering automatic correction for 85% of StyleCop based violations (143 out of 164)
• Improved performance of the built-in profiles by turning off a few processing-intensive optional rules. You can turn them back on by creating custom profiles
    • Tweaked a number of rules and instances for better conformance to Microsoft Design Guidelines
    • SuppressMessage improvements for local variables
    • Improvements and bug fixes

For a detailed list, please see What's New in CodeIt.Right v2.6

    How do I try it?

    Download v2.6 at http://submain.com/download/codeit.right/

    posted on Sunday, 04 May 2014 19:58:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 18 March 2014

    A recording of the webcast and a copy of the slides have been posted to the web site - watch it here

    The companion ebook is available here - download Coding Standards in the Real World ebook.

    Enjoy the recording, and please let us know how we can help!


      Featuring David McCarter, Microsoft MVP 

      Date: Tuesday, March 25th
      Time: 10:00 am PST / 1:00 pm EST

    Recording Available

    While it is very important to follow a coding standard and best industry practices, it isn't always easy or straightforward. There are major long term benefits to be gained, but the hurdles of additional cost and human resistance to change must first be overcome. Join David McCarter and Serge Baranovsky as they discuss a tried and tested successful approach that will enable your team to implement and use an agreed coding standard with the least amount of conflict. In this webcast they will discuss:

    • Benefits of coding standards
    • Examples of good and bad coding practices
    • Challenges of implementing coding standards and why so many teams fail
    • Seven step approach for successful implementation
    • Your coding standards checklist
    • How CodeIt.Right helps

      Bonus:  webcast attendees will also receive SubMain's "Coding Standards in the Real World" ebook.

    One lucky person will win a dotNetDave prize package that includes "David McCarter's .NET Coding Standards" book (autographed) and "Rock Your Code" conference DVD.

    If this time isn't convenient for you, register and we will send you the recording afterwards.

    Recording Available

    posted on Tuesday, 18 March 2014 15:00:22 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 12 November 2013

As an active member of the Microsoft Visual Studio Industry Partner (VSIP) program, we are again proud to be a VS2013 sim-ship partner, shipping all editions of CodeIt.Right v2.5 today simultaneously with the release of Visual Studio 2013.

This release includes official and complete support for Visual Studio 2013, a new plugin that integrates StyleCop into CodeIt.Right analysis, an updated look for the Violations Report Export, and performance improvements and fixes:

    • Official Visual Studio 2013 support
    • New plugin integrates StyleCop into CodeIt.Right analysis
      • Run StyleCop rules as part of CodeIt.Right Analysis
      • CodeIt.Right auto-corrections for StyleCop rules
      • Exclude StyleCop rules or violations
      • On Demand Analysis - include StyleCop violations into CodeIt.Right Violations Report
      • Instant Code Review - include StyleCop violations as part of CodeIt.Right Violations Report
      • Instant Code Review - show StyleCop violations in VS Editor and violations bar
      • Command Line - include StyleCop violations into the analysis
    • Revamped XSLT stylesheet for Violations Report Export to XML
    • Auto-corrections in Instant Code Review mode now show "growl" type warnings
    • CRDATA file format change - your existing CRDATA files will convert automatically
    • Async loading of saved violation reports for improved performance

For a detailed list, please see What's New in CodeIt.Right v2.5

    How do I try it?

    Download v2.5 at http://submain.com/download/codeit.right/

    posted on Tuesday, 12 November 2013 23:30:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 16 April 2013

Today we released CodeIt.Right v2.2 - a new version that introduces 23 new rules – Usage and Asynchronous programming best practices – as well as a new feature that outputs analysis results to the Visual Studio Error List in addition to, or instead of, the CodeIt.Right Violations Report. Here is a high-level list of the new features:

• Added 12 new Async Best Practice Rules (see the sketch after this list)
      • Call Start on the Task object before instantiating
      • Async method should have "Async" suffix
• Async method should return Task or Task<T>
      • Avoid "out" and "ref" parameters in async method
      • TAP method parameters should be the same
      • Do not create async Sub method
      • Transform simple async method to non-async
      • Async method should have await statement
      • Await statement method should be async
      • Do not use Task.Yield in async method
      • Do not use Task.Wait in async method
    • Added 11 new Usage Rules
      • Avoid empty methods
      • Avoid System.Console "Write()" or "WriteLine()"
      • Do not explicitly call "System.GC.Collect()" or "System.GC.Collect(int)"
      • Lock both when either set or get is locked for a property
      • Close database connections in "finally" block
      • Avoid control statements with empty bodies
      • Provide "default:" for each "switch" statement
      • Always provide names for threads
      • Avoid use of "new" keyword for hiding methods
      • Always close SQL resources
• New Show violations in Error List feature - On Demand analysis results within Visual Studio can now be rendered into the Violations Report, the VS Error List, or both
    • and more
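To illustrate a few of the Async Best Practice rules above, here is a minimal, hypothetical C# sketch; the class name and URL are invented and are not taken from the rule documentation:

    using System.Net.Http;
    using System.Threading.Tasks;

    public class PriceClient
    {
        private readonly HttpClient _client = new HttpClient();

        // Follows several of the rules above: the "Async" suffix, a Task<T> return
        // type, an await statement, and no "out"/"ref" parameters.
        public async Task<string> GetPriceAsync(string symbol)
        {
            return await _client.GetStringAsync("http://example.com/price/" + symbol);
        }

        // The kind of shape the rules would flag: no "Async" suffix, a blocking
        // Task.Wait call, and no await statement (the compiler also warns here).
        public async Task RefreshPrices()
        {
            Task.Delay(1000).Wait();
        }
    }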

CodeIt.Right v2.2 has many more features and improvements. For a detailed list, please see What's New in CodeIt.Right v2.2

    How do I try it?

Download CodeIt.Right v2.2 at http://submain.com/download/codeit.right/

    posted on Tuesday, 16 April 2013 22:01:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Thursday, 05 April 2012

We posted earlier that on 2/29, when Microsoft announced the Visual Studio 11 Beta, SubMain was one of the first partners to provide full CodeIt.Right support for the new VS Beta. We are currently running a private Beta program for our GhostDoc product, which is also VS11 Beta compatible.

We are happy to announce that SubMain is one of the partners sim-shipping (Simultaneously Shipping) our products together with the Visual Studio 11 RTM! You will be able to enjoy fully compatible versions of our products when you install Visual Studio 11 on release day!

    In the meantime you are welcome to test drive CodeIt.Right and GhostDoc together with Visual Studio 11 Beta:


    posted on Thursday, 05 April 2012 06:49:00 (Pacific Standard Time, UTC-08:00)    #    Comments [2]   
     Wednesday, 04 April 2012

We are excited to announce the release of CodeIt.Right v2.0 - a new major version that takes our code quality product to a whole new level. This version introduces a major new feature – Instant Code Review – that gives developers code quality feedback in real time as they type and lets them refactor code smells right at their introduction. We have also added a new Personal Edition of the product. Here is the high-level new feature list:

• Instant Code Review feature – get code quality feedback as you code and refactor on the spot!
    • Visual Studio 11 support
    • Multiple categories for a rule
    • Multi-select in Violations Report
    • XAML Parser
    • 8 new Silverlight/WPF/XAML rules
    • 27 new ASP.NET/Security rules
    • Profile Editor - filter for selected/unselected rules
• Option to require comments when excluding a code issue
    • and more

    Focus on coding – we will help you with quality

    If you love the CodeIt.Right code quality rules and auto-corrections but want immediate feedback as you code, the new Instant Review feature is for you!

Instant Review runs a select set of rules in the background and gives developers real-time feedback on code issues right in the Visual Studio Editor. The feature highlights, in the editor, the code elements that triggered a violation and shows the complete list of the file's code issues in the right-hand violations bar (next to the scrollbar). The violation detail window explains the nature of the issue, offers auto-refactoring options, and provides an option to ignore (exclude) the violation. The feature can be turned on/off with a single click and supports multiple user-configurable profiles that can be switched from the toolbar or the right-margin violations bar context menu.

Personal Edition

Starting with version 2.0, in addition to the Standard and Enterprise editions we are offering a new Personal Edition designed for solo developers and freelancers. This edition is priced appropriately for personal use – it is the most affordable edition of CodeIt.Right. With the introduction of the new edition, the price of the Standard Edition has been adjusted.

    Is that it?

CodeIt.Right v2.0 has many more features and improvements. For a detailed list, please see What's New in CodeIt.Right v2.0

    How do I try it?

Download CodeIt.Right v2.0 at http://submain.com/download/codeit.right/

Note to current users – we have changed the licensing scheme in v2.0, and your v1.x license codes won't work with v2.0. For users whose Software Assurance is up to date, we will be sending v2.x license codes shortly. Users without a subscription, and those whose subscription has lapsed, will have the opportunity to purchase the new version at the upgrade price.

    Note to current Standard Edition users – in version 2.0 we have added "Standard" edition name to all folder locations (Program Files, My Documents, etc) and registry keys. When you install v2.0 Beta you will need to copy your custom profiles and rules into the new folders.

    posted on Wednesday, 04 April 2012 20:46:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 29 February 2012

As part of the Visual Studio Industry Partner (VSIP) program, we have released a VS11 Beta compatible version of CodeIt.Right on the ComponentSource website that hosts the VS11 Beta Bundle products. And we are very proud that our flagship product is one of the very first VS11-compatible products available as part of the VS11 Beta Bundle on the day of the VS11 Beta launch – February 29, 2012!

    Please follow the instructions below to receive the virtual bundle of CodeIt.Right and Visual Studio 11 Beta:

    1. Go to the Visual Studio 11 Beta download site to get the newest version of Visual Studio
2. After installing Visual Studio 11 Beta, download CodeIt.Right, unpack the zip file, and run the installer


    posted on Wednesday, 29 February 2012 12:56:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Friday, 25 November 2011

The BETA for CodeIt.Right v2.0 has arrived, and this is the new major version of our code quality product:

• Instant Code Review feature - get code quality feedback as you code and refactor on the spot!
    • Visual Studio 11 support
    • Multiple categories for a rule
    • Multi-select in Violations Report
    • XAML Parser
    • 8 new Silverlight/WPF/XAML rules
    • 27 new ASP.NET/Security rules
    • Profile Editor - filter for selected/unselected rules
• Option to require comments on excludes
    • and more

    Focus on coding - we will help you with quality

If you love the CodeIt.Right code quality rules and auto-corrections but want immediate feedback as you code, the new Instant Code Review feature is for you!

Instant Review runs a select set of rules in the background and gives developers real-time feedback on code issues in the Visual Studio Editor. The feature highlights, in the editor, the code elements that raised a violation and shows the complete list of the file's code issues in the right-hand violations bar (next to the scrollbar). The violation detail window explains the nature of the issue, offers auto-refactoring options, and provides an option to ignore (exclude) the violation. The feature can be turned on/off with a single click and supports multiple user-configurable profiles that can be switched from the toolbar or the right-margin violations bar context menu.

Is that it?

CodeIt.Right v2.0 has many more features and improvements. For a detailed list, please see What's New in CodeIt.Right v2.0

    How do I try it?

    Download the BETA at http://submain.com/download/codeit.right/beta

    Where do I post feedback?

    Please post your v2.0 Beta feedback in the CodeIt.Right v2.0 Beta forum

    Note to current Standard Edition users - in version 2.0 we have added "Standard" edition name to all folder locations (Program Files, My Documents, etc) and registry keys. When you install v2.0 Beta you will need to copy your custom profiles and rules into the new folders.

Note to all Beta users: Even though CodeIt.Right v2.0 Beta is stable, it's a Beta nevertheless. Proceed with care.

    posted on Friday, 25 November 2011 20:05:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Friday, 10 December 2010
    Holiday Offer

SubMain is proud to spread the holiday cheer by offering customers the best savings of the year: a free year of Software Assurance and Support, free lifetime License Protection, and great savings on multiple license purchases.

    Get Software Assurance and Support absolutely free!

From now until January 7, purchase any new CodeIt.Right licenses and receive a 1 year Software Assurance and Support subscription at no additional cost. Normally priced at $100 per user ($150 per user for the Enterprise Edition), the Software Assurance annual subscription plan is the most cost-effective and convenient way to stay current with the latest versions of our products and get priority support when you need it.

    Plus free License Protection!

If you purchase the just-released GhostDoc Pro, you will receive free License Protection and secure GhostDoc Pro updates and new versions at no charge for the product's lifetime.

GhostDoc Pro Edition is an enhanced version of the product that gives you complete control over XML Comment content and layout, and automates XML Comment generation via batch actions.

    Get it today! No hurdles! No hoops!

    It is easy to get your free software assurance subscription and license protection. Just place your order on the SubMain website by January 7, and we will automatically give you the free year of software assurance and support subscription. If your purchase includes GhostDoc, you will automatically receive lifetime license protection for free too. No hassle. No headaches.

    But wait, there is more!

On top of the free offers above, you can save even more when you buy multiple CodeIt.Right licenses:

    • Save 20% on CodeIt.Right Standard Edition 5 license pack

    • Save 30% on CodeIt.Right Enterprise Edition 10 license pack

Email us at sales@submain.com for information on the 5 and 10 pack discounts.

    Need more info about SubMain products?

    Learn more about SubMain products

    Ready to get your holiday bonus?

    Buy now for the holiday offer

    Still have questions?

    No problem! Contact the SubMain sales department via email sales@submain.com

     

    posted on Friday, 10 December 2010 01:15:02 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 21 April 2010
    We published two updates this morning:
• CodeIt.Right v1.9.10111 - both Standard and Enterprise editions. This is a minor update, primarily bug fixes. It is not a required update unless you are experiencing rule issues or unhandled errors that this update might fix.
• New setup for TestMatrix - we have created a brand new setup for TestMatrix to replace the often confusing "silent" version. The new setup experience is very much the same as for all our other products. The product version hasn't changed.

    posted on Wednesday, 21 April 2010 11:19:51 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 06 April 2010

We have added 2 new community contributions to our Tutorials page.

Paulo Morgado posted a great template for the CodeIt.Right Code File Header correction action that automatically generates a file header compliant with the StyleCop rules. You can find the template source in Paulo's blog post CodeIt.Right Code File Header Template For StyleCop Rules

    Craig Sutherland has done a great job integrating CodeIt.Right with CruiseControl.NET.

Here is the CodeIt.Right violations report in CC.NET (screenshot) - great to see that Craig took advantage of the Severity Threshold feature and implemented filtering to reduce "noise" in the report, very much like we have it in CodeIt.Right.


CC.NET CodeIt.Right Analysis Summary report (screenshot)

    Thank you, Paulo and Craig! Great work!

    posted on Tuesday, 06 April 2010 19:46:44 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 09 March 2010
As announced earlier today, we have just closed the acquisition of the popular unit testing and code coverage product TestMatrix, as well as CodeSpell and StudioTools. We, SubMain, will continue to maintain and enhance these products.

    TestMatrix adds support for unit testing, code coverage analysis, and test profiling to Visual Studio, seamlessly incorporating these critical development practices directly into the coding process itself; CodeSpell adds real-time, intelligent detection and correction of misspellings to Visual Studio; and StudioTools is a rich collection of Visual Studio enhancements.

We are also announcing today the availability of a new version of all three products - TestMatrix, CodeSpell and StudioTools - v2.1.10055, which adds support for Visual Studio 2010 RC.

We are very excited about taking over the future of such great products! TestMatrix complements CodeIt.Right in a Code Quality Suite, which will be complete with the addition of the new product code-named Project Anelare in the coming months.

    For more on the agreement, please see the press release.

    posted on Tuesday, 09 March 2010 00:03:29 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Thursday, 25 February 2010

Today we released an update for the CodeIt.Right Standard and Enterprise Editions - v1.9.10053. As part of our 2010 Product Roadmap, this version's changes include significantly improved performance, support for GlobalSuppressions, new rules and bug fixes.

    New in CodeIt.Right v1.9.10053:

    • Major performance improvements throughout the rule base
    • Added support for GlobalSuppressions - syntax is the same as the VSTS GlobalSuppressions file. This new feature is supported in both Standard and Enterprise editions.
• A SuppressMessage attribute on a class is now applied to all members of the class (see the example at the bottom of the post)
    • New Rules:
      • Avoid the Page.DataBind method (AspNet)
      • Avoid the DataBinder.Eval method (AspNet)
      • Do not use SaveAs method to store uploaded files (AspNet)
      • Always define a global error handler (AspNet)
      • Do not disable custom errors (AspNet)
      • Avoid setting the AutoPostBack property to True (AspNet)
      • Interface methods should be callable by child types (Design)
      • Remove unused parameters (Usage)
• Corrected the download redirect link in the Enterprise Edition when a new version is available. In past versions it would incorrectly open the Standard Edition download page
    • Fixed Encapsulate Field refactoring issue
    • Other fixes for the reported issues

The following code sample shows how a class-level SuppressMessage attribute excludes the TypesAndMembersShouldHaveXmlComments rule for a class and all of its members (bullet #3 above); a sketch of a matching GlobalSuppressions entry follows it:

    [SuppressMessage("SubMain.CodeItRight.Rules.General", "GE00005:TypesAndMembersShouldHaveXmlComments")]
    public class MyUndocumentedClass
    {
        public void MyUndocumentedMethod1{}
        public void MyUndocumentedMethod2{}
    }
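And here is a minimal sketch of what the corresponding entry in a GlobalSuppressions file might look like for the same rule (the GlobalSuppressions support in the list above), assuming the usual VSTS-style attribute syntax; the Scope and Target values are illustrative only:

// GlobalSuppressions.cs (illustrative sketch)
using System.Diagnostics.CodeAnalysis;

[assembly: SuppressMessage("SubMain.CodeItRight.Rules.General",
    "GE00005:TypesAndMembersShouldHaveXmlComments",
    Scope = "type", Target = "MyNamespace.MyUndocumentedClass")]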

    This update is free for all users who are current on their Software Assurance and Support Subscription

     

    posted on Thursday, 25 February 2010 16:31:09 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 23 February 2010

Code quality developer tools are the direction we've been following since the introduction of CodeIt.Right, and we are taking this commitment to the next level in 2010 with two new products and new features for our existing products.  One of the new products to be released in 2010 will assist with unit testing, code coverage, and test code profiling; the second new product will be complementary to CodeIt.Right.  All three products together will comprise our new Code Quality Suite.  Additionally, we will continue to keep up with the Visual Studio 2010 release schedule and have all of our products VS2010-compatible when VS2010 reaches RTM.

    Here is what we are planning for 2010:

    • New product!

      • Coming March 2010:  we are adding to our product line by offering a unit test runner and code coverage product.

    • New product!

• Project Anelare (code name) - we will provide details on this project as we get closer to a public preview.  At this point we can share that it will be a product complementary to CodeIt.Right - together they will encompass our code quality package.

    • VS2010 support

      • For all products - most of our products are compatible with VS2010 RC, and we will be VS2010 RTM compatible by the time it RTMs.

    • CodeIt.Right

      • Optimized rule library performance:  the new version will be released the first week in March!

• Community Rule Valuation & Review: we are pioneering "social" in code analysis by enabling the community to rate rules and provide feedback, as well as leveraging the community's feedback, best uses and best practices for each rule.

      • NEW Rules - with emphasis on security, FxCop/StyleCop parity, SharePoint, WPF & Silverlight rules.

      • (EE) Trend Analysis: monitor code quality improvements over time.

      • (EE) Integration with manual code review tools.

      • Global Suppressions:  adding support for GlobalSuppressions and extending syntax of the SuppressMessage attribute for more flexible in-code exclusions.

      • Multi-select in the violations list.

      • Copy Rule feature:  clone and change rule instance configuration

      • Command line enhancements: open command line/build violations output in Visual Studio for correction

      • Annotation: for excludes and corrections

      • XAML support:  enables building Silverlight and WPF specific rules

      • Profile Wizard:  quick start no-brainer user/project profile based on the project type, importance, community valuation, favorite food, etc

    • GhostDoc

• We are currently prioritizing the feature set for the new version of GhostDoc. If you have a feature request that you have not submitted yet, share it with us in the GhostDoc forum.

    Stay tuned to our blog for more details about our progress!

    posted on Tuesday, 23 February 2010 09:13:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 22 December 2009

A new version of CodeIt.Right Enterprise Edition - build 1.9.09355 - is available for download now.

This version introduces the new Merge Profiles feature, which allows you to compare and merge user-configured profiles, enhances VSTS integration with the new "Add WorkItem" feature, adds new command line parameters, and adds the ability to load/unload CodeIt.Right in the menu and Add-In Manager.


    This update is free for all users who are current on their Software Assurance and Gold Support Subscription

    New in CodeIt.Right Enterprise v1.9:

• New Merge Profiles feature allows you to compare and merge user-configured profiles
    • New "Add WorkItem" feature - create TFS WorkItem from a violation 
    • Added /metrics parameter to the command line utility to generate XML output for three metrics reports – "Member", "Type" and "Code"
    • Added /sendto parameter to the command line utility - send the violation/metrics output via email
    • CodeIt.Right can now be loaded/unloaded in the menu and Add-In Manager
    • Build server setup doesn't require Visual Studio on the build machine anymore 
    • "Built-in profile" option now is not selectable in the Analysis Module when one or more custom profiles deployed via Team Configuration Module
    • New Rules:
      • Specify CultureInfo (Globalization)
      • Specify IFormatProvider (Globalization)
      • Specify StringComparison (Globalization)
      • Avoid excessive complexity (Maintainability)
      • Avoid excessive inheritance (Maintainability)
      • Do not use deprecated properties of Response object (AspNet)
    • Fixed a number of bugs in the application and the rules...
    posted on Tuesday, 22 December 2009 22:55:00 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 28 October 2009

Today we are releasing a new version of CodeIt.Right - build 1.8.09300.

This version features complete ASP.NET support, compatibility with VS2010 Beta 2 (in addition to VS2008, VS2005 and VS2003), Add Rule dialog improvements - "Hide Rules in the Profile" and "Quick Search" - and 10+ new rules, including the new "CodingStyle" category.

Those of you who are using CodeIt.Right with very large solutions may know of the memory limitation issue for tools that live in the Visual Studio address space, aka "Out of Memory" exceptions. You will be excited to know we have addressed the issue in this version by introducing the new "Memory Optimization" mode. "Performance mode" is still on by default since it offers slightly faster analysis, but CodeIt.Right will offer to switch to the "Memory Optimization" option when you open a large solution or get the dreaded "Out of Memory" exception. You are also free to switch between the options manually.

Another change we made - we removed the Sealed modifier from all built-in rules. This gives you an even easier way to extend and customize existing rules by simply overriding just the methods you want changed. See Tutorial: Extending Existing Rules for a sample.

    This update is free for all users who are current on their Software Assurance and Gold Support Subscription

    New in CodeIt.Right v1.8:

    • Now compatible with VS2010 Beta 2, VS2008, VS2005 and VS2003
    • Finally complete ASP.NET support including refactorings in HTML markup.
• Added Memory Optimization mode – allows you to minimize memory use at the cost of slightly reduced performance. This makes it possible to solve the "Out of Memory" issue on large solutions.
    • Added Performance tab to the Options window – Best Performance/Memory Optimization.
    • Added new Exclude tab and moved all exclude tabs there
    • All Exclude tabs now support multi-select
    • New option to Exclude a Project
    • Added new “Add” button in Exclude Rule, Exclude File and Exclude Project tabs – the dialogs support multi-select.
• Profile Editor -> Add Rule dialog has a new Hide Rules in the Profile option, which removes from the selection the rules that already exist in the selected profile.
• Profile Editor -> Add Rule dialog has a new Quick Search that filters the list to the rules that contain the entered substring
    • Added RuleID to rule help documentation.
    • Product license codes when entered are now activated on the SubMain server.
    • Auto Update wizard now shows Software Assurance & Gold Support subscription expiration date.
    • About window now shows Software Assurance & Gold Support subscription expiration date.
    • All web services – Auto Update, Error Reporting, Statistics and Activation – are now accessed over SSL protected HTTPS connection.
• Removed the Sealed modifier from all built-in rules. This allows for a new, easier way to extend and customize existing rules by simply overriding specific methods. See Tutorial: Extending Existing Rules for a sample.
    • Addressed issues related to incorrect source file encoding when Byte Order Mark (BOM) attribute was missing.
    • New Rules:
      • Abstract class should have at least one derive class (Design)
      • Interface should have at least one implementation (Design)
      • Project should have AssemblyInfo file (Design)
      • Do not place assembly attributes outside of AssemblyInfo file (Design)
      • Do not include multiple statements on a single line (CodingStyle)
      • Avoid single line If statement (CodingStyle)
• Do not check for empty strings using Equals (Performance) - see the sketch after this list
      • XML Comments should be spelled correctly (Spelling)
      • Avoid non-public fields in ComVisible value types (Interoperability)
      • Avoid static members in ComVisible types (Interoperability)
      • PInvokes should not be visible (Interoperability)
    • Fixed a number of bugs in the application and the rules...
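As a quick illustration of the Performance rule "Do not check for empty strings using Equals" referenced in the list above, here is a hypothetical C# sketch of the flagged pattern and a preferred alternative; the method names are invented:

    public static class StringChecks
    {
        // The pattern the rule flags: comparing against an empty string literal with Equals.
        public static bool IsEmptyFlagged(string value)
        {
            return value.Equals("");
        }

        // Preferred: a simple length check (string.IsNullOrEmpty also covers the null case).
        public static bool IsEmptyPreferred(string value)
        {
            return value.Length == 0;
        }
    }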

    Download

    Download version 1.8.09300 here - http://submain.com/download/codeit.right 


    posted on Wednesday, 28 October 2009 00:57:12 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Monday, 01 June 2009

As announced earlier today, we have just closed the acquisition of the popular XML Comment helper tool GhostDoc. We, SubMain, will continue to evolve the tool and distribute it as a free product.

We are also announcing today the availability of the new version of GhostDoc, v2.5.09150, which improves the user setup experience, adds support for Visual Studio 2010 Beta 1, and adds full support for Visual Basic.

    Additionally, today we are making available a new version of CodeIt.Right (v1.6.09151) that adds the IntelliComment feature based on the GhostDoc algorithm and offers improved and automated generation of XML Comments.

    We are very excited about taking over the future of an excellent tool such as GhostDoc! We are committed to maintaining this wonderful free tool and we welcome the community feedback and suggestions.

    For more on the agreement, please see the press release and interview with Serge Baranovsky and Roland Weigelt - What's in the shop for GhostDoc?

    New in GhostDoc v2.5:
    • Compatible with VS2010
    • Support for VB - GhostDoc now has full support for VB
      • Removed "Enable experimental support for VB" option in Settings.
    • Improved product setup experience
      • Single setup for all supported versions of Visual Studio - VS2005, VS2008 and VS2010.
      • Setup will detect older version installed and automatically uninstall it.
    • Converted from VS Add-In to VS Package
    • Resolved installation issues related to the VS Add-In model - by converting to VS Package
    Download

    Download GhostDoc v2.5.09150 here - http://submain.com/download/ghostdoc 

    Download CodeIt.Right v1.6.09151 here - http://submain.com/download/codeit.right 

    posted on Monday, 01 June 2009 14:25:56 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 19 May 2009

As you may know, yesterday Microsoft released VS2010 Beta 1 to MSDN Subscribers. The Beta will also be publicly available to the rest of the world on Wednesday. This is a long-expected and very exciting new version of Visual Studio, and we here at SubMain are fully prepared to support the shiny new version!

While some companies make a big deal of posting screenshots of the upcoming VS2010 versions and announcing availability of previews for their products within a month, we at SubMain have been working hard to give our customers a fully VS2010-compatible version on the day of the VS2010 Beta 1 release!

So ... today, within 24 hours of the VS2010 Beta 1 availability, you can download a new version of CodeIt.Right that runs in VS2010 (as well as VS2008, VS2005 and VS2003) and fully understands the syntax changes in C# 4.0 and VB10! This is just one of the many new features that come with this release - you can see the long list below.

With VS2010's multi-monitor support, CodeIt.Right allows for an even better workflow: keep the violations up on one monitor while you review the source code and the changes on the other.

Another great feature added in this version is template-based rules. We currently support T4 templates. Template-based rules are a simpler and more flexible alternative to writing custom rules with the CodeIt.Right SDK.


    Over the next couple of weeks we will be adding a tutorial on how to use and customize T4 templates in CodeIt.Right. Ping us if you want this sooner.

    Give it a shot and let us know what you think!

    This update is free for all users who are current on their Software Assurance and Gold Support Subscription

    New in CodeIt.Right v1.6:

    • Compatible with VS2010
    • C# 4.0 and VB10 syntax support
      • Automatically Implemented Properties (VB.NET)
      • Generic Variance (VB.NET)
      • Multi-line lambda expressions that can contain statements (VB.NET)
      • Implicit line continuation (VB.NET)
      • Dynamic lookup (C#) - (the "dynamic" type)
      • Named and Optional parameters (C#)
      • Covariance and contravariance (C#)
    • Added T4 template based rules
      • Profile Editor supports editing and validation of T4 templates
      • Rule "Externally visible types and members should have XML comments" has been rewritten as a template based rule and is customizable now
      • Added Global Properties tab in Options - user configured properties to be used with the T4 templates
    • Spell-checking rules
• Significantly improved performance; the suggested spellings lookup is now only performed when the Correction Options dropdown is clicked
• Spell-checking rules - improved performance when using a secondary (non-English) dictionary
      • Spell-checking rules - renamed en_US.usr to complang.usr
    • Further improved ASP.NET support - rename refactoring now also corrects the ASP.NET page HTML markup server tag IDs and attributes
    • Context menus Check All, Clear All and Correct Checked are now context specific when clicked on file or project lines in the violations list
    • Improved performance for Check All, Clear All, Exclude Rule and Exclude File 
    • Analyze Project and Analyze File context menus in the violations list, see forum post 
    • Analyze Folder and Analyze Project context menus in the Solution Explorer
    • Synchronization of the file selected in the Solution Explorer with the violations list
    • About dialog - added subscription expiration date
    • Added "Don't show this exception again" checkbox to the unhandled exceptions dialog
• The default Encapsulate Field correction for the rules "Do not declare externally visible instance fields" and "Secured types should not expose fields" changed from "Create new property and update all references where the field is used" to "Rename the field, create new property with the original field name and do not update the field references" (so the public name does not change), see forum post and the sketch after this list
    • Added new property ExcludeList to all Naming rules, see forum post 
    • Added new RuleTargets - Solution, Project, File
    • New Rules:
      1. Use constants where appropriate (Performance)
      2. Remove unused internal classes (Performance)
      3. Do not initialize unnecessarily (Performance)
      4. Source file name should be Pascal cased (General)
      5. Source file should contain only one public type (General)
      6. Source file name should match public type name (General)
      7. Enable Treat Compiler Warnings As Errors option (General)
      8. Enforce Warning Level 4 (General)
      9. Source file should have a header (General) - a T4 template based rule
    • more bug fixes
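As a rough illustration of the new default Encapsulate Field correction described in the list above, here is a hypothetical before/after C# sketch; the type, field, and property names are invented (in practice the class keeps its original name):

    // Before: an externally visible instance field, the pattern flagged by
    // "Do not declare externally visible instance fields".
    public class OrderBefore
    {
        public decimal Total;
    }

    // After: the field is renamed and made private, and a new property with the
    // original field name is created, so the public name "Total" does not change.
    public class OrderAfter
    {
        private decimal total;

        public decimal Total
        {
            get { return total; }
            set { total = value; }
        }
    }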

    Known issues

We are releasing the product with one known issue this time, as it is very minor.

• Under VS2010 Beta 1, CodeIt.Right rule documentation links, like "More" and "Tell me more...", don't work. This feature depends on the Visual Studio offline help module, which was not shipped with VS2010 Beta 1. You can still use Rule Help in the Profile Editor. Rule documentation help works fine in VS2008, VS2005 and VS2003.

    Download

    Download version 1.6.09139 here - http://submain.com/download/codeit.right  



    posted on Tuesday, 19 May 2009 09:08:21 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Monday, 29 December 2008

With just a couple of days left this year, I wanted to share with you a great article in this month's MSDN Magazine Toolbox column on Improving Software Quality with Static Code Analysis Tools, where MS MVP Scott Mitchell reviews static analysis tools for .NET. Scott compares FxCop, StyleCop and CodeIt.Right:

    While FxCop and StyleCop pinpoint rule violations, the developer is still responsible for implementing these tools' suggestions. CodeIt.Right from SubMain takes static code analysis to the next level by enabling rule violations to be automatically refactored into conforming code.

    Like FxCop, CodeIt.Right ships with an extensive set of predefined rules, based on the design guidelines document mentioned earlier, with the ability to add custom rules. But CodeIt.Right makes it much easier to create and use custom rules.

    CodeIt.Right's biggest benefit is the automatic code refactoring.

    Scott summarizes:

    Static code analysis tools provide a fast, automated way to ensure that your source code adheres to predefined design and style guidelines. Following such guidelines helps produce more uniform code and also can point out potential security, performance, interoperability, and globalization shortcomings.

    Thank you, Scott!

    Visit MSDN Magazine web site for the complete article.

What is your experience with CodeIt.Right? Please tell us. And, as always, ask questions!

    Happy Holidays!


    posted on Monday, 29 December 2008 16:08:44 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Monday, 22 December 2008

Just before the holiday we are releasing a new version of CodeIt.Right - build 1.2.08357. Download version 1.2.08357 today!

This release features the addition of a spell checking engine to the CodeIt.Right SDK, a workaround for the Out of Memory error on large solutions, 12 new rules and many bug fixes.

Please share your feedback in the forums, especially on the new spell checking rules - we are still working on them and improving them.

Note: The new spelling rules are included with the built-in profile, but they are turned off by default - you will need to turn them on. To use them in your custom profiles, you will need to add them there.

    This update is free for all users who are current on their Software Assurance and Gold Support Subscription

    Major changes in CodeIt.Right v1.2 :

• New feature - Spell Checker: we extended the SDK with a spell checking engine, which enables us to add new spelling rules. This build includes en-US, en-GB, en-CA, en-AU, fr-FR, es-ES and es-MX dictionaries. You can add other languages using OpenOffice format dictionaries. We cannot distribute the de-DE and it-IT dictionaries because they are GNU GPL licensed, but you can download them from the OpenOffice web site here, along with several dozen other languages.
• Added - Out Of Memory error handling - shows a dialog that links to the page with recommendations on How to avoid Out Of Memory exceptions in Visual Studio by enabling it to use up to 3GB of virtual memory
    • Added - "Copy Violation" in the violation list allows to copy the highlighted violation information into clipboard. Alternatively use Ctrl+C.
    • "Exclude File" violation list context menu option is now enabled when right-clicked on the file name row
• Added - Cache Folder Path option - allows you to move the CodeIt.Right cache directory (default - User\Application Data)
    • New Rules:
      1. Identifiers Should Be Spelled Correctly (Spelling)
      2. Comments Should Be Spelled Correctly (Spelling)
      3. Type link demands require inheritance demands (Security)
      4. Secured types should not expose fields (Security)
      5. Secure serialization constructors (Security)
      6. Review visible event handlers (Security)
      7. Review deny and permit only usage (Security)
      8. Review declarative security on value types (Security)
      9. Method security should be a superset of type (Security)
      10. Array fields should not be read only (Security)
      11. Aptca methods should only call aptca methods (Security)
      12. Wrap vulnerable finally clauses in outer try (Security)
    • CodeItRight.Cmd.exe changes:
    • many bugs fixed

    Download build 1.2.08357 here - http://submain.com/download/codeit.right  


    posted on Monday, 22 December 2008 01:26:40 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 22 October 2008

    by Serge Baranovsky

The best and biggest MS developer conference, Microsoft PDC, 2008 edition, is just a few days away. We, of course, wouldn't miss the great opportunity to meet and network with potentially 10,000 developers.

This year we will be giving away free CodeIt.Right licenses to those who manage to spot me in the crowd and talk to me about your favorite CodeIt.Right feature, what you don't like about the product, or how to prioritize the upcoming features. Or talk about what's important to you, what's on your mind, or just geek talk.

To help you find me, I will share that I'm attending the Palermo Party on Sunday night and hanging around most every party, event, and session, and between the sessions. You can follow me on Twitter, email me at sergeb@submain.com, or call our toll free line 1 (800) 936-2134 and choose extension 3 - "Talk to Serge" (a special PDC option :)

If we have never met in person, perhaps the picture on my Twitter account - http://twitter.com/sergeb - will help you :)

    Looking forward to meeting you all in LA next week!


    posted on Wednesday, 22 October 2008 14:08:55 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Friday, 19 September 2008

A new version of CodeIt.Right is out! You can download build 1.1.08262 today.

This is a major release even though we only added ".1" to the version. It includes .NET 3.5 support, Guidelines document template generation, an option to analyze a single file, enhanced SDK documentation, new rules, new command line parameters and more. Please share your feedback in the forums.

    This update is free for all users who are current on their Software Assurance and Gold Support Subscription

    Major changes in CodeIt.Right v1.1 :

    • Support for .NET 3.5 (C# 3.0 and VB 9.0)
• New feature - Generate Template in Profile Editor allows you to generate a guidelines document template based on the profile. Please note the XSL stylesheet only works with IE (not Firefox)
    • New feature - Analyze File in the Solution Explorer context menu - enables individual source code file analysis
• New feature - Severity Threshold dropdown - allows you to quickly filter violations by severity
    • Improved Profile Editor layout
    • Added "Rule Info" tab in Profile Editor
• Pivot View allows you to save your custom views (Save As)
    • Updated SDK documentation - now includes most every class, method and property of CodeIt.Right SDK. We will update the Online SDK documentation promptly.
    • Auto-Update now also includes the latest help file
    • New Options -> General -> Max violations to report (default Unlimited) to limit to the first N violations reported
    • New Rules:
      1. Do not declare read only mutable reference types (Security)
      2. Seal methods that satisfy private interfaces (Security)
      3. Secure GetObjectData overrides (Security)
      4. Assemblies should declare minimum security (Security)
      5. Override link demands should be identical to base (Security)
      6. Prefix member calls with self (Usage)
      7. Do not prefix calls with Base unless needed (Usage)
      8. Review suppress unmanaged code security usage (Security)
      9. Do not indirectly expose methods with link demands (Security)
      10. Security transparent code should not assert (Security)
      11. Secure asserts (Security)
      12. Code region name should be PascalCased (Naming)
    • CodeItRight.Cmd.exe improvements:
      • Added MsBuild and Nant tasks (SubMain.CodeItRight.MSBuild.dll and
        SubMain.CodeItRight.NAntBuild.dll)
      • Fixed /Quiet option
      • Added /OutXSL parameter
      • Added /CRData parameter
      • Added /severityThreshold parameter
      • Improved formatting of the output XML data file
      • Added return error codes
      • Removed key press required in the older version
      • see CodeItRight.Cmd.exe Command Line Options for details
• Removed Alt shortcuts in the toolbar - they were conflicting with some of the Visual Studio shortcuts
    • Renaming a parameter now also updates its name in XML comments
    • Added CodeRegion class in the SDK
    • Added "Expression" and "Code Region" to Rule targets
    • many bugs fixed

    Download build 1.1.08262 here - http://submain.com/download/codeit.right  


    posted on Friday, 19 September 2008 15:55:47 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 22 July 2008

    by Serge Baranovsky

A new version of CodeIt.Right is almost out. We have beta build 1.1.08198 for you today. Please download it and give it a try, and let us know what works and what doesn't. This is a pretty big release that includes .NET 3.5 support, Guidelines document template generation, new command line parameters and more. Please share your feedback in the forums.

If you get an unhandled error dialog, please enter your email address so we can let you know when we have resolved the issue, or in case we need to contact you for more details during troubleshooting.

    For the .NET 3.5 support we have made major changes to our parsing engine - please report all issues you encounter with the Beta.

    Major changes in the v1.1 Beta:

    • Support for .NET 3.5 (C# 3.0 and VB 9.0)
• New feature - Generate Template in Profile Editor allows you to generate a guidelines document template based on the profile. Please note the XSL stylesheet only works with IE (not Firefox)
    • Improved Profile Editor layout
    • Added "Rule Info" tab in Profile Editor
• Pivot View allows you to save your custom views (Save As)
    • Updated SDK documentation - now includes most every class, method and property of CodeIt.Right SDK
    • Auto-Update now also includes the latest help file
    • New Options -> General -> Max violations to report (default 1000) to limit to the first N violations reported
    • New Rules:
      1. Do not declare read only mutable reference types (Security)
      2. Seal methods that satisfy private interfaces (Security)
      3. Secure GetObjectData overrides (Security)
      4. Assemblies should declare minimum security (Security)
      5. Override link demands should be identical to base (Security)
      6. Prefix member calls with self (Usage)
      7. Do not prefix calls with Base unless needed (Usage)
    • CodeItRight.Cmd.exe improvements:
      • Added MsBuild and Nant tasks (SubMain.CodeItRight.MSBuild.dll and
        SubMain.CodeItRight.NAntBuild.dll)
      • Fixed /Quiet option
      • Added /OutXSL parameter
      • Added /CRData parameter
      • Improved formatting of the output XML data file
      • Added return error codes
      • Removed key press required in the older version
      • see CodeItRight.Cmd.exe Command Line Options for details
    • many bugs fixed

    Download build 1.1.08198 here - http://submain.com/download.aspx?product=codeit.right-beta 

Note: Even though 1.1.08198 is a stable Beta, it's a Beta nevertheless. Proceed with care.


    posted on Tuesday, 22 July 2008 14:31:18 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Monday, 12 May 2008

    by Serge Baranovsky

As I mentioned earlier, I presented CodeIt.Right in Portland at PADNUG last week. I had a great time - thank you all, guys and girls, you are a wonderful and very friendly audience! I'm glad I got to know some of you better during the "informal" meeting at Gustav's after the presentation.

A few of you asked if the slides would be available, so I've published a PDF version of the slides on our community page - PADNUG CodeIt.Right Presentation Slides. If you need the PPT version, just drop me a note at the email address on the first page of the slides.

    Thanks again, it was great visiting Portland!


    posted on Monday, 12 May 2008 17:13:19 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Friday, 02 May 2008

    by Serge Baranovsky

    v1.1 progress & Enterprise Edition

The v1.1 release that includes .NET 3.5 syntax support has been slightly delayed. There are two reasons for that - we had too much fun at the MVP Summit in mid-April, and we made a few significant changes to the parsing and refactoring engines to support the 3.5 syntax. The coding part is done; we now cover all of .NET 3.5. And we are going through a major testing phase, as the base application layer responsible for all .NET versions was affected. We are currently targeting the end of May for the v1.1 release date.

Meanwhile, the Enterprise Edition of CodeIt.Right is pretty much ready - we should have a beta in a couple of weeks. The Enterprise Edition has two parts - a developer client and a Profile Authoring/Admin piece. The latter, in addition to the Profile Editor features, has the ability to push profiles to developer workstations and to limit developers to using only the profiles published by a Team Lead/Lead Developer/Architect.

    Tutorials

We have published the CodeIt.Right SDK Online Documentation and a couple of new tutorials.

    All tutorials are now listed on the new dedicated Tutorials page.

    Presentation

I will be speaking in Portland at PADNUG this Tuesday, May 6th at 6:30pm - more details here - http://www.padnug.org/padnug/meetings.aspx?ID=145

If you are in the area and want to hear about coding guidelines and best practices, ask questions, and see a demo of CodeIt.Right, come join us. I will bring some swag for giveaway, free licenses and discount codes.


    posted on Friday, 02 May 2008 16:16:56 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Saturday, 12 April 2008

    by Serge Baranovsky

    This CodeIt.Right rule update includes fixes as well as 3 new security rules.

    New rules:

    • Do not declare read only mutable reference types (Security)
    • Seal methods that satisfy private interfaces (Security)
    • Secure GetObjectData overrides (Security)

    These rules are included with the default profile and you will only need to follow the wizard to merge them into your custom profiles.

    Fixes:

    • Rule "Do not override operator equals on reference types" incorrectly triggered violation for the "Equals" method. Now this rules only reports violation for the operator "==" override.
    • Fixed "If the type is a generic type, CIR adds the apostrophe-count string to the class name for the deserialization constructor"
    • Fixed "If there is no explicit default parameterless constructor, adding the deserialization constructor causes errors in all derived classes that called the parameterless constructor"

As a reminder, this set of rules is distributed using the Rule AutoUpdate feature, which triggers 15 minutes after you start Visual Studio. If you have turned the feature off, you can manually start the update wizard from the CodeIt.Right/Help & Support/Update Rules menu.


    posted on Saturday, 12 April 2008 11:59:05 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Sunday, 23 March 2008

    by Serge Baranovsky

Since the v1.0 release, a number of improvements have been requested for using CodeIt.Right in an automated build process, and we are making a refresh of CodeItRight.Cmd.exe available today. This version of CodeItRight.Cmd will be included as part of the v1.1 release, which will additionally ship with ready-to-use tasks for MSBuild and NAnt.

    See CodeItRight.Cmd.exe Command Line Options for complete list of the console version command line switches and error codes.

    Important:

    • This copy of the console version of CodeIt.Right will only work with the original v1.0 release (build 1.0.08035)
    • It will only work with the VS2005 and VS2008 version of CodeIt.Right. If you need VS2003 version, please contact support.

You can download the new, improved CodeItRight.Cmd.exe here

    Instructions:

    • Extract SubMain.CodeItRight.Cmd.exe from the downloaded zip file
    • Save it to CodeIt.Right program directory, typically Program Files\SubMain\CodeIt.Right

    Changes in this build of CodeItRight.Cmd.exe:

    • Fixed /Quiet option
    • Removed key press required in the older version after it is done analyzing solution
    • Added /OutXSL parameter 
    • Added /CRData parameter
    • Improved output XML data formatting 
    • Added return error codes

    See CodeItRight.Cmd.exe Command Line Options for complete list of the console version command line switches and error codes.

    posted on Sunday, 23 March 2008 01:00:44 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Sunday, 09 March 2008

    by Serge Baranovsky

Mike McIntyre posted a great review of CodeIt.Right on DevCity.NET - http://www.devcity.net/Articles/348/CodeIt.Right.Review.aspx - Mike put CodeIt.Right through a test using 10 of his projects.

    Mike summarizes:

I feel CodeIt.Right's features for configurable static code analysis and its ability to automatically correct my code make it well worth the purchase price. It has become an indispensable addition to my developer toolkit.

    I highly recommend you give it a try.

    Thank you Mike!

    What is your experience with CodeIt.Right? Feel free to ask questions, tell us what you think!


    posted on Sunday, 09 March 2008 23:53:50 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Thursday, 21 February 2008

    by Serge Baranovsky

CodeIt.Right is back to its regular price of $250/user, and we have also introduced volume license discounts.

For a limited time, when you purchase a CodeIt.Right license we include a complimentary 1 year of the Software Assurance & Gold Support Subscription - more information about the subscription - (normally only 3 months of Software Assurance are included; the subscription is an additional $100 per year). And, as always, you are covered by our 60-day money-back guarantee.

    » Buy CodeIt.Right Now «


    posted on Thursday, 21 February 2008 02:20:12 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 05 February 2008

    by Serge Baranovsky

CodeIt.Right will be available at the introductory price of $180 through February 20th, 2008 (regular price $250/user after Feb 20th). To make it even better, the introductory price also includes a complimentary 1 year of the Software Assurance & Gold Support Subscription - more info about the subscription - (normally only 3 months are included; the subscription is an additional $100 per year). And, as always, you are covered by our 60-day money-back guarantee.

February 20th is only two weeks away, so hurry - download CodeIt.Right, play with it, learn it, ask the questions you have, and buy it when you are ready.

I will see you in the forums - I will be the one serving beer and answering questions :)

    Buy CodeIt.Right Now


    posted on Tuesday, 05 February 2008 23:51:21 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   

    by Serge Baranovsky

CodeIt.Right is finally finished, after about 3 years in the making. That's right, CodeIt.Right is Released! It is out in its all-new shiny package :)

I would like to pause here and extend my deepest gratitude to everyone who helped make this release possible - from the SubMain development team and the advisory board members to everyone who participated in the community and contributed feedback over the year since we released the first public beta.

    CodeIt.Right, my 7-year-long dream, has come true. The tool is out! Cheers! (I truly believe that code analysis coupled with automatic refactoring will change the way .NET developer teams and solo developers work!)

    With the touchy-feely stuff out of the way, let's get back to the actual product, shall we? :)

    If you are new to CodeIt.Right:

    What's next?

    This is not a road map per se, just highlights of where we are heading with CodeIt.Right over the next few months:

    • We will keep publishing new rules as they are developed and will push them to you using the Auto-Update feature
    • We will publish more tutorials and how-tos on using the product and on developing your own custom rules with the CodeIt.Right SDK
    • We will create a community section over at http://community.submain.com and allow custom-developed rules to be shared with other users
    • Version 1.1 is coming in 4-6 weeks - .NET 3.5 syntax, profile merging, Pivot View improvements, generating a team guidelines document template from a profile, and, of course, more rules!
    • Version 2.0 is preliminarily scheduled for summer 2008 and will introduce VSTS integration and manual refactorings (we will merge CodeIt.Once into CodeIt.Right)

    So don't wait - go ahead and download CodeIt.Right - http://submain.com/download/codeit.right - play with it, explore the rules included in the box, then get out of the box and try developing your own custom rules, share them, ask questions, and tell us what you think!

    posted on Tuesday, 05 February 2008 19:50:33 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Monday, 28 January 2008

    by Serge Baranovsky

    Another CodeIt.Right rule update. The next stop, the release of CodeIt.Right, is targeted at February 1. Yes, that's the end of this week - better hurry to take advantage of the pre-release pricing ;)

    New CodeIt.Right rules posted:

    • Use prefix for return type  (Naming)
    • Avoid prefix for return type (Naming)
    • Use prefix for derived type (Naming)
    • Avoid prefix for derived type (Naming)

    The four rules above will not be included in the default profile - see How to add rule to User Profile to learn how to add them to your custom profile.

    This set of rules is distributed using the Rule AutoUpdate feature added in Beta 2 of CodeIt.Right. Auto Update triggers 15 minutes after you start Visual Studio. If you have turned the feature off, you can manually start the update wizard from the CodeIt.Right/Help & Support/Update Rules menu.

    posted on Monday, 28 January 2008 14:57:15 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 16 January 2008

    by Serge Baranovsky

    Here we have another smallish update for CodeIt.Right - build 1.0.08005. 

    Early next week we will publish another couple of really cool new rules. We will use rule Auto-Update to push them to you.

    You can see the product is getting very stable now and we are only making minor modifications. Which means, guess what? We are releasing soon :)

    So make sure you take advantage of our pre-release CodeIt.Right license pricing before it's too late ;)

    Changes since the last build

    • Enabled the "Enter Registration Code..." menu item. Those of you who have already purchased a CodeIt.Right license (thank you!) can now activate the product.
    • Fixed an issue with the XML comments rule when enum members are all declared on the same line.
    • Fixed a problem in the UseOrAvoidCertainPrefixes rule - see the forum post
    • Addressed a couple of bugs in the refactoring engine.
    • A few other minor fixes.

    Download build 1.0.08005 here - http://submain.com/download/codeit.right

    For more information on CodeIt.Right, getting started presentation, support and feedback see Beta announce post.

    posted on Wednesday, 16 January 2008 02:11:49 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Tuesday, 25 December 2007

    by Serge Baranovsky

    Now that the RC is available, we are releasing CodeIt.Right in the next 3 weeks.

    We decided to offer the product for purchase at a pre-release price with an extra bonus. Buy CodeIt.Right here. This is the full license - we are just giving you an opportunity to get it at a lower price.

    Pre-release price

    Pre-release price: $150 (regularly $250; includes a complimentary 1 year of Software Assurance & Gold Support subscription)

    Buy CodeIt.Right today and save!

    Post-release price

    User license - $250 (includes 3 months of Software Assurance; the Gold Support subscription is purchased separately, at $100 per year after that)

    Annual Software Assurance & Gold Support - $100:

    • Upgrade to new versions at no additional cost (regardless of price changes)
    • Auto Update for the latest rule sets
    • Access to private Gold Support forums

    Buying from us is safe - you are covered by our 60-day money-back guarantee if you are not happy with the product.

    posted on Tuesday, 25 December 2007 02:02:44 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Friday, 21 December 2007

    by Serge Baranovsky

    The release is just around the corner - CodeIt.Right Release Candidate build 1.0.07355 is available now. This build also includes the CodeIt.Right SDK help file. Please download the Release Candidate and share your feedback in the forums.

    Major changes overview since the last Beta build

    • Added support for VS2008 - currently only supports .NET 2.0 solutions in the VS2008 IDE
    • Included the CodeIt.Right SDK help file - it installs into the SubMain/CodeIt.Right/Help directory and is also available separately in the Community Download section
    • Linked to the SDK Reference help file from the main CodeIt.Right Help
    • Added Proxy settings to the Options dialog (Options/Proxy)
    • Addressed a lockup issue where CodeIt.Right would freeze VS2005 running under Windows Vista
    • Added option to turn off anonymous rule usage statistics reporting (Options/Other)
    • Added option to exclude regions from analysis (Options/Exclude Regions). Predefined regions are
      • Web Form Designer Generated Code
      • Web Services Designer Generated Code
      • Windows Form Designer Generated Code
      • Component Designer Generated Code
      • Assembly Attribute Accessors
      • My.Settings Auto-Save Functionality
      • COM GUIDs
    • Improved the rules auto-update function
    • Fixed a number of issues in the refactoring engine
    • Fixed - when new rules with configurable properties are added to a profile, the Editor now warns if the properties are not populated when the rule is saved
    • Fixed - the Add Rule dialog in the Profile Editor now saves the sort order
    • Other minor changes and bug fixes.

    Download build 1.0.07355 here - http://submain.com/download/codeit.right

    For more information on CodeIt.Right, getting started presentation, support and feedback see Beta announce post.

    posted on Friday, 21 December 2007 04:28:36 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Monday, 03 December 2007

    by Serge Baranovsky

    New CodeIt.Right rule posted:

    • Externally visible types and members should have XML comments (General)

    This rule makes sure all Public and Protected members and types have an XML documentation comment.

    The AutoCorrect option for this rule is 'Add XML comment template', which will add

    (for VB)

        ''' <summary>
        '''     
        ''' </summary>
        ''' <value>
        '''     <para>
        '''         
        '''     </para>
        ''' </value>
        ''' <remarks>
        '''     
        ''' </remarks>

    (for C#)

        /// <summary>
        ///     
        /// </summary>
        /// <value>
        ///     <para>
        ///         
        ///     </para>
        /// </value>
        /// <remarks>
        ///     
        /// </remarks>

    (the actual content of the template will depend on the code element the XML template is being added to).
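
    As an illustration, here is what an undocumented public property might look like after running the AutoCorrect and then filling in the placeholder text by hand (the class and property names are made up; AutoCorrect itself only inserts the empty template shown above):

        public class Customer
        {
            private string name;

            /// <summary>
            ///     Gets or sets the customer name.
            /// </summary>
            /// <value>
            ///     <para>The display name of the customer.</para>
            /// </value>
            /// <remarks>
            ///     Filled in by hand; AutoCorrect only inserts the empty template.
            /// </remarks>
            public string Name
            {
                get { return name; }
                set { name = value; }
            }
        }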

    This set of rules is distributed using the Rule AutoUpdate feature added in Beta 2 of CodeIt.Right. Auto Update triggers 15 minutes after you start Visual Studio. If you have turned the feature off, you can manually start the update wizard from the CodeIt.Right/Help & Support/Update Rules menu.

    Don't forget to leave your feedback in the CodeIt.Right forum http://community.submain.com/forums/4/ShowForum.aspx

    (Note: if you skip the custom profile update step in the Rules Update Wizard, you still can add new rules to your custom profile(s) using the Add Rule button in the Profile Editor - you will find recent rules by sorting the date column)

    posted on Monday, 03 December 2007 02:26:36 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Monday, 19 November 2007

    by Serge Baranovsky

    Another refresh for CodeIt.Right - Beta build 1.0.07322. Please download the latest build and let us know what you think on the forums.

    Major changes overview since the last build

    • Changed the setup directory for the VS2005 version to "CodeIt.Right" - it used to be "CodeIt.Right for VS2005"
    • The toolbar and most of the menu are now enabled even when you don't have any solution/projects open
    • Changed file format of Rule Update engine to .ZIP (from .DLL) to avoid firewall issues.
    • Added new rules (more to come after Beta 2)
      1. AssembliesShouldHaveValidStrongNames (Design)
      2. Remove unused private methods (Performance)
      3. PropertiesShouldNotReturnArrays (Performance)
    • Added date sort order in the "Add Rule" dialog. 
    • Fixed a bug that prevented adding a rule to a profile if the rule belongs to a custom Category
    • Fixed an issue with "Solution Folders"
    • Fixed a bug with the CompilerGenerated/GeneratedCode attributes - analysis is now skipped inside elements marked with the CompilerGenerated or GeneratedCode attribute
    • Custom Rules now use the modification date of the rule assembly for "ModificationDate"
    • Many smaller changes and bug fixes.

    (Note for current Beta users: to see the new rules you will need to switch back to the built-in profile or add them to your custom profile(s))
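
    To illustrate what the new PropertiesShouldNotReturnArrays rule is after, here is a small made-up example (how CodeIt.Right's own correction reshapes the code may differ):

        using System.Collections.ObjectModel;

        public class Invoice
        {
            private readonly int[] lineItemIds = new int[] { 1, 2, 3 };

            // Flagged: returning the internal array lets callers modify it in place.
            public int[] LineItemIds
            {
                get { return lineItemIds; }
            }

            // One common remedy: expose a read-only wrapper instead.
            public ReadOnlyCollection<int> LineItems
            {
                get { return new ReadOnlyCollection<int>(lineItemIds); }
            }
        }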

    Download build 1.0.07322 here - http://submain.com/download/codeit.right

    For more information on CodeIt.Right, getting started presentation, support and feedback see Beta 1 announce post.

    posted on Monday, 19 November 2007 02:20:33 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Friday, 02 November 2007

    by Serge Baranovsky

    We have updated the CodeIt.Right Flash presentations based on the Beta 2 layout - they cover the new features as well - http://submain.com/tutorials

    Quick Start - Quick 1 minute CodeIt.Right walkthrough

    Introduction to CodeIt.Right Features - Here is where we show main CodeIt.Right functions and options

    ISerializable Pattern example - One of many scenarios where CodeIt.Right helps diagnose issues early and implement coding patterns correctly.

    Please let us know in the CodeIt.Right Discussion Forum if these are helpful and how you think we can improve the presentations.

    Click here to read about CodeIt.Right

    (Just a heads up - there is a CodeIt.Right update coming in the next 2 weeks)


     


    posted on Friday, 02 November 2007 10:32:47 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 17 October 2007

    by Serge Baranovsky

    New set of CodeIt.Right rules:

    • Avoid unsealed attributes (Performance)
    • COM visible types should be creatable (Interoperability)
    • Pointers should not be visible (Security)
    • Remove empty finalizers (Performance)

    (All of the new rules above offer AutoCorrect options)
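
    To give a flavor of two of these, here is a hand-written sketch (the names are made up, and the AutoCorrect output may look slightly different):

        using System;

        // "Avoid unsealed attributes": sealing the attribute class lets attribute
        // lookups skip searching a derived-type hierarchy.
        [AttributeUsage(AttributeTargets.Class)]
        public sealed class AuditedAttribute : Attribute
        {
        }

        public class Widget
        {
            // "Remove empty finalizers": a finalizer with an empty body, like the one
            // commented out below, only adds finalization overhead and should be deleted.
            // ~Widget()
            // {
            // }
        }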

    This set of rules is distributed using the Rule AutoUpdate feature added in Beta 2 of CodeIt.Right. Auto Update triggers 15 minutes after you start Visual Studio. If you have turned the feature off, you can manually start the update wizard from the CodeIt.Right/Help & Support/Update Rules menu.

    Another set of rules will be distributed with a new build of CodeIt.Right next week, as some of them require an updated version of the SDK.

    Please leave your feedback on how much you like/dislike the AutoUpdate feature, along with your suggestions, in the CodeIt.Right forum http://community.submain.com/forums/4/ShowForum.aspx

    (Note: if you skip the custom profile update step in the Rules Update Wizard, you still can add new rules to your custom profile(s) using the Add Rule button in the Profile Editor - you will find recent rules by sorting the date column)

    For more information on CodeIt.Right, getting started presentation, support and feedback see Beta 1 announce post.

    posted on Wednesday, 17 October 2007 00:59:59 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Monday, 08 October 2007

    by Serge Baranovsky

    We published new CodeIt.Right rules:

    • Mark assemblies with assembly version (Design)
    • Mark assemblies with CLSCompliant (Design)
    • Mark assemblies with ComVisible (Design)
    • Remove unused locals (Performance)

    (All of the new rules above offer AutoCorrect options)
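
    The first three of these correspond to standard assembly-level attributes that typically live in AssemblyInfo.cs. A minimal illustrative file with the attributes the rules check for (the values shown are arbitrary):

        using System;
        using System.Reflection;
        using System.Runtime.InteropServices;

        // Version used for assembly binding and identification.
        [assembly: AssemblyVersion("1.0.0.0")]

        // Declares whether the assembly exposes only CLS-compliant constructs.
        [assembly: CLSCompliant(true)]

        // Keeps the assembly's types invisible to COM unless opted in individually.
        [assembly: ComVisible(false)]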

    This is the first set of rules that we distribute using the Rule AutoUpdate feature recently added in Beta 2 of CodeIt.Right. Auto Update triggers 15 minutes after you start Visual Studio. If you have turned the feature off, you can manually start the update wizard from the CodeIt.Right/Help & Support/Update Rules menu.

    The AutoUpdate feature is brand new - please leave your feedback on how much you like/dislike it, which steps are not intuitive, and what you would improve in the CodeIt.Right forum http://community.submain.com/forums/4/ShowForum.aspx

    (Note: if you skip the custom profile update step in the Rules Update Wizard, you still can add new rules to your custom profile(s) using the Add Rule button in the Profile Editor - you will find recent rules by sorting the date column)

    For more information on CodeIt.Right, getting started presentation, support and feedback see Beta 1 announce post.

    posted on Monday, 08 October 2007 16:17:36 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 26 September 2007

    by Serge Baranovsky

    CodeIt.Right Beta 2 is out now - a major update since Beta 1.

    Our team is eager to hear your feedback on the Beta 2 version, as you can really influence the next stage of development. Be sure to download Beta 2 and let us know what you think on the forums - you'll have a chance to win 1 of 3 $100 Amazon gift certificates, awarded to the top three posts by October 31st.

    Major changes overview since the last Beta 1 build

    • Improved analysis performance
    • Help file covers most rules
    • Added new rules (more to come after Beta 2)
      1. DoNotHideBaseClassMethods
      2. AvoidLongTypeArgumentsForVB6Clients
      3. AptcaTypesShouldExtendAptcaBaseTypes
    • Added a Rule Update mechanism - it notifies you of new rules published on the SubMain site and lets you download and install them and update custom profiles.
      New post-Beta 2 rules will be distributed this way.

    • Added Pivot View

    • Revamped Correction Progress dialog with Report, Export and Undo features

     

    • Lots of smaller changes and tons of bug fixes.

    (Note for current Beta users: to see the new rules you will need to switch back to the built-in profile or add them to your custom profile(s))

    Download Beta 2 build 1.0.07268 here - http://submain.com/download/codeit.right

    For more information on CodeIt.Right, getting started presentation, support and feedback see Beta 1 announce post.

    posted on Wednesday, 26 September 2007 00:00:05 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 04 July 2007

    by Serge Baranovsky

    A new pre-Beta 2 build (1.0.07172) of CodeIt.Right has been published - the command line version, XML report, exclude attributes, FxCop rule mapping, and many bug fixes make it a major update. Give it a try and tell us what you think.

    (Note for current Beta users: to see the new rules you will need to switch back to the built-in profile or add them to your custom profile(s))

    Changes in build 1.0.07172:

    • ADDED: Command line version SubMain.CodeItRight.Cmd.exe - yes, we support Continuous Integration now!
    • ADDED: Export to XML
    • ADDED: XSL template for XML report
    • ADDED: Custom attributes to exclude rules or rule categories from analysis
    • ADDED: Rule mapping to FxCop rules and support for existing FxCop/MS Code Analysis SuppressMessage attributes.
    • CHANGED: Rule Designer renamed into Profile Editor
    • REMOVED: CreationDate from IRule interface
    • ADDED: "ReplaceIdentifierPrefix" rule - allows replacing existing prefixes (e.g. replacing "m_" or "g_" with "_") - see the sketch after this list
    • ADDED: "ReplaceIdentifierSuffix" rule - very similar to the replace prefix rule above
    • FIXED: Issue with line # pointing at the beginning of the structure and not at the actual violation line
    • FIXED: Problem loading ASP.NET web sites via HTTP
    • CHANGED: Add Rule dialog - now includes rule modification date column
    • IMPROVED: Undo/Redo performance
    • other fixes
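
    Here is the sketch promised above for the ReplaceIdentifierPrefix rule (the names are made up; the suffix rule works the same way on the other end of the identifier):

        public class Account
        {
            // Before the correction: a legacy-prefixed field.
            // private decimal m_balance;

            // After ReplaceIdentifierPrefix maps "m_" to "_":
            private decimal _balance;

            public decimal Balance
            {
                get { return _balance; }
            }
        }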

    We will soon be posting brief info on SuppressMessage attribute support and on using the command line version.
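
    Until that post is up, here is a minimal sketch of the kind of suppression the FxCop mapping honors. The attribute is the standard System.Diagnostics.CodeAnalysis.SuppressMessage; the method and justification are made up, and the category/check-id shown is a standard FxCop id rather than a CodeIt.Right-specific one.

        using System.Diagnostics.CodeAnalysis;

        public class OrderProcessor
        {
            // Suppresses the mapped violation on just this member.
            [SuppressMessage("Microsoft.Design", "CA1024:UsePropertiesWhereAppropriate",
                Justification = "The count is recomputed on every call.")]
            public int GetPendingOrderCount()
            {
                return 0;
            }
        }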

    Download build 1.0.07172 here - http://submain.com/download/codeit.right

    For more information on CodeIt.Right, getting started presentation, support and feedback see Beta announce post.

     

    posted on Wednesday, 04 July 2007 00:45:38 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Wednesday, 11 April 2007

    by Serge Baranovsky

    A new Beta build (1.0.07100) of CodeIt.Right is available - we added a new rule category, 'Exception Handling', with 3 new rules, revamped the toolbar, and fixed a whole lot of bugs reported in the last month. Try it out.

    (Note for current Beta users: to see the new Exception Handling rules you will need to switch back to the built-in profile or add them to your custom profile(s))

    Next stop - a new version of PrettyCode.Print for .NET, to be released late this month.

    Changes in build 1.0.07100:

    • REMOVED: "Stop Analysis" button in toolbar and menu
    • CHANGED: "Start Analysis" toolbar button - replaced icon with text
    • CHANGED: Moved built-in profile into a separate resource DLL - SubMain.CodeItRight.Rules.Default.dll 
    • FIXED: Drawing issue for marker box 
    • ADDED: New rule category - "Exception Handling"
    • ADDED: New rule "DoNotRaiseSpecifiedExceptionTypes" with the correction action "Change type of exception to specified type"
    • ADDED: New rule "DoNotCatchSpecifiedExceptionTypes" with the correction action "Change type of exception to specified type"
    • ADDED: New rule "DoNotHandleNonCLSCompliantExceptions" with the correction action "Catch specific exception using parameter catch block"
    • UPDATED: Help file - with new rules and category information
    • other fixes

    With over 2 dozen bugs fixed (not listed individually above) and 3 new exception handling rules, this is a significant and more stable Beta build.
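
    For a rough idea of the coding shapes the three new exception handling rules look at, consider this made-up fragment (which exception types a profile disallows is configurable, so treat the types below as examples only):

        using System;
        using System.IO;

        public class ConfigReader
        {
            public string Read(string path)
            {
                if (path == null)
                {
                    // Raising a specific, meaningful type rather than a broad one is the
                    // territory of DoNotRaiseSpecifiedExceptionTypes.
                    throw new ArgumentNullException("path");
                }

                try
                {
                    return File.ReadAllText(path);
                }
                // Catching a narrow type instead of a blanket "catch (Exception)" (or a
                // non-CLS-compliant catch-all) is what the other two rules encourage.
                catch (FileNotFoundException)
                {
                    return string.Empty;
                }
            }
        }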

    Download build 1.0.07100 here - http://submain.com/download/codeit.right

    For more information on CodeIt.Right, getting started presentation, support and feedback see Beta announce post.

     

    posted on Wednesday, 11 April 2007 00:18:59 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Thursday, 08 March 2007

    by Serge Baranovsky

    We published a new Beta build of CodeIt.Right - it adds a new rule, a number of UI improvements, and bug fixes. Give it a try.

    We will be rather quiet for two weeks - Misha is traveling up here to Seattle for the Microsoft MVP Global Summit. We will both be busy with the Summit and the events around it. Misha is very excited about the trip - this is his first time in Seattle and in the United States. We will both take a few days off and go ski at Crystal Mountain. We've worked hard getting the CodeIt.Right Beta out and are certainly going to enjoy the time off - just to rejuvenate before we jump on the next set of improvements :)

    That doesn't mean we will go incommunicado - there will just be no new CodeIt.Right builds for two weeks, that's all. We are leaving two developers in the shop, and Stuart, as usual, will be on top of the support issues. I will make sure I follow up on support cases at least every other day.

    Changes in build 1.0.07066:

    • ADDED: Progress window for parsing code, loading references and running code analysis
    • ADDED: Progress window for the Correct Checked operation
    • FIXED: Disabled Naming rules for ASPX type names
    • FIXED: Issue with concurrent access when caching references
    • FIXED: Resolved issues when CodeIt.Right is installed under an Administrator account and used under a User or Restricted User account
    • ADDED: SkipOnCheckAll property to the IRule interface (for Custom Rules)
    • ADDED: Text description to most of the toolbar buttons
    • IMPROVED: Unhandled exception logging
    • FIXED: "NullReferenceException" when the "Show Analysis Window" toolbar button clicked
    • ADDED: Separate tree branch in New Project/New Item for CodeIt.Right Custom Rule Library and Wizard in VS2003 and VS2005
    • CHANGED: Excluded the "Extern" modifier in most rules in the built-in profile
    • ADDED: New rule "DoNotRethrowExceptionsExplicitly" - see the sketch after this list
    • FIXED: Bug in the "DoNotRaiseReservedExceptionTypes" rule
    • other fixes
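
    Here is the sketch promised above for the DoNotRethrowExceptionsExplicitly rule - the point is that a bare rethrow preserves the original stack trace, while "throw ex;" resets it (the surrounding method is made up):

        using System;
        using System.IO;

        public class FileLoader
        {
            public string Load(string path)
            {
                try
                {
                    return File.ReadAllText(path);
                }
                catch (IOException ex)
                {
                    Console.Error.WriteLine(ex.Message);
                    // "throw ex;" here is what the rule flags; the bare rethrow below
                    // keeps the original stack trace intact.
                    throw;
                }
            }
        }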

    Download build 1.0.07066 here - http://submain.com/download/codeit.right

    For more information on CodeIt.Right, getting started presentation, support and feedback see Beta announce post.

     

    posted on Thursday, 08 March 2007 02:53:02 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Saturday, 24 February 2007

    by Serge Baranovsky

    Here we have a new Beta build of CodeIt.Right available for download. There are no new rules in this build, but we polished quite a few existing rules, added new features, and fixed a bunch of issues.

    Change list:

    • IMPROVED: Excluding rules/violations no longer requires re-analyzing the whole solution
    • ADDED: "Show Analysis Window" - brings up the CodeIt.Right window if it is lost in the stack of open code windows in the VS IDE
    • IMPROVED: CodeIt.Right now works correctly even under a restricted user account
    • FIXED: Issue with "IdentifiersShouldDifferByMoreThanCase" rule
    • ADDED: Support for "Solution Folders" in VS2005 web projects
    • IMPROVED: In the Rule Designer, editing a built-in profile now prompts you to create a new profile
    • FIXED: Setup preserves Visual Studio settings and toolbar customization
    • FIXED: Various setup issues
    • ADDED: Total count to CodeIt.Right window tab captions for Violations/Excluded Violations/Excluded Rules/Excluded Files
    • CHANGED: Merged error and reference log files into one
    • CHANGED: Location for log file, profiles, settings to My Documents
    • CHANGED: Location for reference cache to User Application Data
    • ADDED: "Show Analysis Window", "Check All", "Clear All", "Correct Checked" to CodeIt.Right menu
    • IMPROVED: "Do not declare external visible fields" rule - added an option to create a public Property (Pascal-cased) if possible - see the sketch after this list
    • CHANGED: Setup copies 3rd party assemblies to the GAC and its own assemblies to the private assemblies directory
    • CHANGED: Violation report file changed from *.cirdata to *.crdata
    • other fixes
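
    Here is the sketch promised above for the "Do not declare external visible fields" correction (the names are made up):

        public class Employee
        {
            // Before the correction: a publicly visible field.
            // public string department;

            // After the correction: a private backing field exposed
            // through a Pascal-cased public property.
            private string department;

            public string Department
            {
                get { return department; }
                set { department = value; }
            }
        }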

    Build 1.0.07055 can be downloaded here - http://submain.com/download/codeit.right

    For more information on getting started with CodeIt.Right, support and feedback see Beta announce post.

     

    posted on Saturday, 24 February 2007 00:31:51 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Sunday, 11 February 2007

    by Serge Baranovsky

    A new build of the CodeIt.Right Beta is available for download.

    Change list:

    • IMPROVED: Windows Vista compatibility. CodeIt.Right still requires you to run VS2005 "As Administrator", but so does VS2005 itself. We all hope that the soon-expected VS2005 SP1 update for Vista will help address the Vista issues for Visual Studio
    • FIXED: Now compatible with 64 bit versions of Windows.
    • ADDED: Auto-generated Designer.cs and Designer.vb files are now ignored
    • FIXED: The rule "Types that own disposable fields should be disposable" now ignores static classes - see the sketch after this list
    • FIXED: Setup no longer runs "devenv.exe /setup" - this improves install/uninstall speed and resolves issues related to resetting 3rd-party VS Add-In custom settings and re-enabling disabled Add-Ins.
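
    For context on the disposable-fields rule mentioned above, the shape it looks for is roughly this (the names are made up; static classes are now skipped since they cannot implement IDisposable anyway):

        using System;
        using System.IO;

        // A type that owns a disposable field should itself be disposable
        // so the field can be released deterministically.
        public class LogWriter : IDisposable
        {
            private readonly StreamWriter writer = new StreamWriter("log.txt");

            public void Write(string message)
            {
                writer.WriteLine(message);
            }

            public void Dispose()
            {
                writer.Dispose();
            }
        }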

    We are still working on the memory use optimization.

    NOTE: Please take a look at the Flash presentations as they help you understand CodeIt.Right better.

    Build 1.0.07040 can be downloaded here - http://submain.com/download/codeit.right

    For more information on getting started with CodeIt.Right, support and feedback see our previous post.

     

    posted on Sunday, 11 February 2007 02:17:19 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
     Thursday, 01 February 2007

    by Serge Baranovsky

    We have exciting news for you - today we released the first public Beta version of our new product, CodeIt.Right - the tool that will find your code and performance problems, fix them automatically, keep your naming consistent, and guide you through the most common coding patterns and best practices.

    Click here to read about CodeIt.Right

    The CodeIt.Right Beta integrates with VS2003 and VS2005 and supports C# and VB.NET.

    Download now

    Download your copy of the CodeIt.Right Beta at http://submain.com/download/codeit.right

    ** IMPORTANT: Please keep in mind this is still a Beta product. It is not recommended to install it on mission-critical development machines.

    Support / Feedback

    The help file and user guide are not quite ready yet since this is a Beta, but please feel free to ask any questions you may have. Please submit any issues or feedback using one of the following:
    Email: Contact Support
    Community: http://community.submain.com
    Forum: CodeIt.Right Discussion Forum

    Screenshots / Guides

    We have prepared 3 Flash presentations to help you understand what CodeIt.Right is and how you can get your job done better and faster using CodeIt.Right:
    Quick Start - Quick 1 minute CodeIt.Right walkthrough

    Introduction to CodeIt.Right Features - Here is where we show main CodeIt.Right functions and options

    ISerializable Pattern example - One of many scenarios where CodeIt.Right helps diagnose issues early and implement coding patterns correctly.
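
    If the pattern itself is new to you, the basic shape the presentation walks through looks roughly like this hand-written sketch (it is not a capture of the tool's output, and the class is made up):

        using System;
        using System.Runtime.Serialization;

        [Serializable]
        public class Money : ISerializable
        {
            private readonly decimal amount;

            public Money(decimal amount)
            {
                this.amount = amount;
            }

            // The serialization constructor the pattern requires.
            protected Money(SerializationInfo info, StreamingContext context)
            {
                amount = info.GetDecimal("amount");
            }

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.AddValue("amount", amount);
            }
        }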

    More guides and tutorials to come...

    Getting Started

    Quick 1 minute CodeIt.Right walkthrough - Quick Start Guide


    posted on Thursday, 01 February 2007 20:21:57 (Pacific Standard Time, UTC-08:00)    #    Comments [0]   