 Thursday, September 29, 2016

In professional contexts, I think that the word "standard" has two distinct flavors.  So when we talk about a "team standard" or a "coding standard," the waters muddy a bit.  In this post, I'm going to make the case for a team standard.  But before I do, I think it important to discuss these flavors that I mention.  And keep in mind that we're not talking dictionary definition as much as the feelings that the word evokes.

First, consider standard as "common."  To understand what I mean, let's talk cars.  If you go to buy a car, you can have an automatic transmission or a standard transmission.  Standard represents a weird naming choice for this distinction since (1) automatic transmissions dominate (at least in the US) and (2) "manual" or "stick-shift" offer much better descriptions.  But it's called "standard" because of historical context.  Once upon a time, automatic was a new sort of upgrade, so the existing, default option became boringly known as "standard."

In contrast, consider standard as "discerning."  Most commonly you hear this in the context of having standards.  If some leering, creepy person suggested you go out on a date to a fast food restaurant, you might rejoin with, "ugh, no, I have standards."

Now, take these common contexts for the word to the software team room.  When someone proposes coding standards, the two flavors make themselves plain in the team members' reactions.  Some like the idea, and think, "it's important to have standards and take pride in our work."  Others hear, "check your creativity at the gate, because around here we write standard, default code."

What I Mean by Standard

Now that I've drawn the appropriate distinction, I feel it appropriate to make my case.  When I talk about the importance of a standard, I speak with the second flavor of the word in mind.  I speak about the team looking at its code with a discerning attitude.  Not just any code can make it in here -- we have standards.

These can take somewhat fluid forms, and I don't mean to be prescriptive.  The sorts of standards that I like to see apply to design principles as much as possible and to cosmetic concerns only when they have to.

For example, "all non-GUI code should be test driven" and "methods with more than 20 lines should require a conversation to justify them" represent the sort of standards I like my teams to have.  They say, "we believe in TDD" and "we view long methods as code smells," respectively.  In a way, they represent the coding ethos of the group.

On the other side of the fence lie prescriptions like, "all class fields shall be prepended with underscores" and "all methods shall be camel case."  I consider such concerns cosmetic, since they are appearance and not design or runtime behavior.  Cosmetic concerns are not important... unless they are.  If the team struggles to read code and becomes confused because of inconsistency, then such concerns become important.  If the occasional quirk presents no serious readability issues, then prescriptive declarations about it stifle more than they help.

Having standards for your team's work product does not mean mandating total homogeneity.

Why Have a Standard at All?

Since I'm alluding to the potentially stifling effects of a team standard, you might reasonably ask why we should have them at all.  I can assert that I'm interested in the team being discerning, but is it really just about defining defaults?  Fair enough.  I'll make my case.

First, consider something that I've already mentioned: maintenance.  If the team can easily read code, it can more easily maintain that code.  Logically, then, if the team all writes fairly similar code, they will all have an easier time reading, and thus maintaining that code.  A standard serves to nudge teams in this direction.

Another important benefit of the team standard revolves around the integrity of the work product.  Many teams' standards incorporate methodology for security, error handling, logging, etc.  Thus the established standard arms the team members with ways to ensure that the software behaves properly.

And finally, well-done standards can help less experienced team members learn their craft.  When such people join the team, they tend to look to established folks for guidance.  Sadly, those people often have the most on their plate and the least time.  The standard can thus serve as teacher by proxy, letting everyone know the team's expectations for good code.

Forget the Conformity (by Automating)

So far, all of my rationale follows a fairly happy path.  Adopt a team standard, and reap the rewards: maintainability, better software, learning for newbies.  But equally important is avoiding the dark side of team standards.  Often this dark side takes the form of nitpicking, micromanagement and other petty bits of nastiness.

Please, please, please remember that a standard should not elevate conformity as a virtue.  It should represent shared values and protection of work product quality.  Therefore, in situations where conformity (uniformity) is justified, you should automate it.  Don't make your collaborative time about telling people where to put spaces and brackets -- program your IDE to do that for you.
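For Visual Studio and most modern editors, an .editorconfig file offers one way to do exactly that.  Here's an illustrative sketch (the specific values are mine, not a recommendation):

```ini
# .editorconfig -- let tooling enforce the cosmetics (illustrative values)
root = true

[*.cs]
indent_style = space
indent_size = 4

# C# formatting options that Visual Studio will apply for you:
csharp_new_line_before_open_brace = all
csharp_space_after_keywords_in_control_flow_statements = true
```

With something like this checked into the repository, nobody ever spends review time on spaces and brackets again.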

Make Justification Part of the Standard

Another critical way to remove the authoritarian vibe from the team standard is one that I rarely see.  And that mystifies me a bit because you can do it so easily.  Simply make sure you justify each item contained in the standard.

"Methods with more than 20 line of code should prompt a conversation," might find a home in your standard.  But why not make it, "methods with more than 20 lines of code should prompt a conversation because studies have demonstrated that defect rate increases more than linearly with lines of code per method?"  Wow, talk about powerful.

This little addition takes the authoritarian air out of the standard, and it also helps defuse squabbles.  And, best of all, people might just learn something.

If you start doing this, you might also notice that boilerplate items in a lot of team standards become harder to justify.  "Prepend your class fields with m underscore" becomes "prepend your class fields with m underscore because... wait, why do we do that again?"

Prune and Always Improve

When you find yourself trailing off at because, you have a problem.  Something exists in your team standard that you can't justify.  If no one can justify it, then rip it out.  Seriously, get rid of it.  Having items that no one can justify starts to put you in conformity for the sake of conformity territory.  And that's when standard goes from "discerning" to "boring."

Let this philosophy guide your standard in general.  Revisit it frequently, and audit it for valid justifications.  Sometimes justifications will age out of existence or seem lame in retrospect.  When this happens, do not hesitate to revisit, amend, or cull.  The best team standards are neither boring nor static.  The best team standards reflect the evolving, growing philosophy of the team.

Related resources

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Learn more about how CodeIt.Right can automate your team standards and improve code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Thursday, September 29, 2016 7:41:00 AM (Pacific Standard Time, UTC-08:00)
 Tuesday, September 20, 2016

If you write software, the term "feedback loop" might have made its way into your vocabulary.  It charts a slightly indirect route from its conception into the developer lexicon, though, so let's start with the term's origin.  In general systems terms, a feedback loop occurs when a system uses its output as one of its inputs.

Kind of vague, huh?  I'll clarify with an example.  I'm actually writing this post from a hotel room, so I can see the air conditioner from my seat.  Charlotte, North Carolina, my temporary home, boasts some pretty steamy weather this time of year, so I'm giving the machine a workout.  Its LED display reads 70 Fahrenheit, and it's cranking to make that happen.

When the AC unit hits exactly 70 degrees, as measured by its thermostat, it will take a break.  But as soon as the thermostat starts inching toward 71, it will turn itself back on and start working again.  Such is the Sisyphean struggle of climate control.

Important for us here, though, is the mechanics of this system.  The AC unit alters the temperature in the room (its output).  But it also uses the temperature in the room as input (if < 71, do nothing, else cool the room).  Climate control in buildings operates via feedback loop.
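That loop takes only a few lines to express in code.  Here's a toy sketch in Python (the function and the numbers are mine, purely for illustration, not real HVAC logic):

```python
def run_thermostat(temp, setpoint=70, cycles=10):
    """Toy climate-control feedback loop: the unit's output (the room
    temperature) feeds back in as its input on the next cycle."""
    for _ in range(cycles):
        if temp >= setpoint + 1:  # thermostat reads its own output...
            temp -= 1             # ...inching toward 71, so cool the room
        # else: at or below the setpoint, take a break
    return temp

final = run_thermostat(75)  # a warm Charlotte afternoon settles at 70
```

The room temperature appears on both sides of the decision: it is simultaneously what the unit produces and what the unit consumes.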

Appropriating the Term for Software Development

It takes a bit of a cognitive leap to think of your own tradecraft in terms of feedback loops.  Most likely this happens because you become part of the system.  Most people find it harder to reason about things from within.

In software development, you complete the loop.  You write code, the compiler builds it, the OS runs it, you observe the result, and decide what to do to the code next.  The output of that system becomes the input to drive the next round.

If you have heard the term before, you've probably also heard the term "tightening the feedback loop."  Whether or not you've heard it, what people mean by this is reducing the cycle time of the aforementioned system.  People throwing that term around look to streamline the write->build->run->write again process.

A History of Developer Feedback Loops

At the risk of sounding like a grizzled old codger, let me digress for a moment to talk about feedback loop history.  Long before my time came the punched card era.  Without belaboring the point, I'll say that this feedback loop would astound you, the modern software developer.

Programmers would sit at key punch "kiosks" and physically perforate forms (one mistake, and you'd start over).  They would then take these forms and have operators turn them into cards, stacks of which they would hold onto.  Next, they'd wait in line to feed these cards into the machines, which acted as a runtime interpreter.  Often, they would have to wait up to 24 hours to see the output of what they had done.

Can you imagine?  Write a bit of code, then wait for 24 hours to see if it worked.  With a feedback loop this loose, you can bet that checking and re-checking steps received hyper-optimization.


When I went to college and started my programming career, these days had long passed.  But that doesn't mean my early days didn't involve a good bit of downtime.  I can recall modifying C files in projects I worked on, and then waiting up to an hour for the code to build and run, depending on what I had changed.  xkcd immortalized this issue nearly 10 years ago, in one of its most popular comics.

Today, you don't see this as much, though certainly, you could find some legacy codebases or juggernauts that took a while to build.  Tooling, technique, modern hardware and architectural approaches all combine to minimize this problem via tighter feedback loops.

The Worst Feedback Loop

I have a hypothesis.  I believe that a specific amount of time exists for each person that represents the absolute, least-optimal amount of time for work feedback.  For me, it's about 40 seconds.

If I make some changes to something and see immediate results, then great.  Beyond immediacy, my impatience kicks in.  I stare at the thing, I tap impatiently, I might even hit it a little, knowing no good will come.  But after about 40 seconds, I simply switch my attention elsewhere.

Now, if I know the wait time will be longer than 40 seconds, I may develop some plan.  I might pipeline my work, or carve out some other tasks with which I can be productive while waiting.  If, for instance, I can get feedback on something every 10 minutes, I'll kick it off, do some household chores, and check on it periodically.

But, at 40 seconds, it resides in some kind of middle limbo, preventing any semblance of productivity.  I kick it off and check Twitter.  40 seconds turns into 5 minutes when someone posts a link to some cool astronomy site.  I check back, forget what I did, and then remember.  I try again and wait 40 seconds.  This time, I look at a Buzzfeed article and waste 10 minutes as that turns into 4 Buzzfeed articles.  I then hate myself.

The Importance of Tightening

Why do I offer this story about my most sub-optimal feedback period?  To demonstrate the importance of diligence in tightening the loop.  Wasting a few seconds while waiting hinders you.  But waiting enough seconds to distract you with other things slaughters your productivity.

With software development, you can get into a state of what I've heard described as "flow."  In a state of flow, the feedback loop creates harmony in what you're doing.  You make adjustments, get quick feedback, feel encouraged and productive, which promotes more concentration, more feedback, and more productivity.  You discover a virtuous circle.

But just the slightest dropoff in the loop pops that bubble.  And, another dropoff from there (e.g. to 40 seconds for me) can render you borderline-useless.  So much of your professional performance rides on keeping the loop tight.

Tighten Your Loop Further

Modern tooling offers so many options for you.  Many IDEs will perform speculative compilation or interpretation as you code, making builds much faster.  GUI components can be rendered as you work, allowing you to see changes in real time as you alter the markup.  Unit tests slice your code into discrete, separately evaluated components, and continuous testing tools provide pass/fail feedback as you type.  Static code analysis tools offer you code review as you work, rather than at a code review days later.  I could go on.

The general idea here is that you should constantly seek ways to tune your day to day work.  Keep your eyes out for tools that speed up your feedback loop.  Read blogs and go to user groups.  Watch your coworkers for tips and tricks.  Claw, scratch, and grapple your way to shaving time off of your feedback loop.

We've come a long way from punch cards and sword fights while code compiles.  But, in 10 or 30 years, we'll look back in amazement at how archaic our current techniques seem.  Put yourself at the forefront of that curve, and you'll distinguish yourself as a developer.

Learn more about how CodeIt.Right can tighten the feedback loop and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Tuesday, September 20, 2016 7:37:00 AM (Pacific Standard Time, UTC-08:00)
 Friday, September 16, 2016

Version 5.3 of GhostDoc is a maintenance update for v5.0 users:

  • Added full support for string interpolation in C# and VB parsers
  • Added support for "arrow functions" in JavaScript parser
  • Fixed "File is not part of a solution" issue when loading projects
  • (Pro) (Ent) Added IsAbstract property to CurrentCodeElement in the T4 templates
  • Improved exception documentation - now the type name in a nameof() parameter is added as part of the generated documentation template
  • (Ent) Fixed issue when using <section> along with <code> elements in an .aml file

For the complete list of changes, please see What's New in GhostDoc v5

For an overview of the v5.0 features, visit Overview of GhostDoc v5.0 Features

Download the new build at http://submain.com/download/ghostdoc/

posted on Friday, September 16, 2016 8:30:00 AM (Pacific Standard Time, UTC-08:00)
 Wednesday, September 14, 2016

Think back to college (or high school, if applicable).  Do you remember that kid that would sit near the front of the class and gleefully point out that the professor had accidentally omitted an apostrophe when writing notes on the white board?  Didn't you just love that kid?  Yeah, me neither.

Fate imbues a small percentage of the population with a neurotic need to correct any perceived mistakes made by anyone.  xkcd immortalized this phenomenon with one of its most famous cartoons, which declared, "someone is wrong on the internet."  For the rest of the population, however, this tendency seems pedantic and, dare I say, unpleasant.  Just let it go, man.  It doesn't matter that much.

I mention all of this to add context to the remainder of the post.  I work as a consultant and understand the need for diplomacy, tact, and choosing one's battles.  So, I do not propose something like care with spelling lightly.  But I will propose it, nonetheless.

Now I know what you're thinking.  How can caring about spelling in code be anything but pedantic?  We're not talking about something being put together to impress a wide audience, like a newspaper.  In fact, we're not even talking about prose.  And code contains all sorts of abbreviations and encodings and whatnot.

Nevertheless, it matters.  When English words occur in your code, spelling them right matters.  I'll use the rest of this post to make my case.

The IntelliSense Conundrum

If you use Visual Studio, no doubt you make heavy use of IntelliSense.  To expand, any IDE or text editor with autocomplete functionality qualifies for consideration here.  In either case, your tooling gives you a pretty substantial boost by suggesting methods/variables/classes/etc based on what you have typed.  It's like type-ahead for code.

Now think of the effect a misspelling can have here, particularly near the beginning of a word.  Imagine implementing a method that would release resources and accidentally typing Colse instead of Close.  Now imagine consuming that method.  If you're used to exploring APIs and available methods with auto-complete, you might type, "Clo", pause, and see no matching methods.  You might then conclude, "hey, no call to Close needed!"

In all likelihood, such an error would result in a few minutes of head-scratching and then the right call.  But even if that's the worst of it, that's still not great.  And it will happen each and every time someone uses your code.
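To make the scenario concrete, here's a sketch in Python (the class and method names are invented for illustration; the same dynamic applies to any autocomplete-driven language):

```python
class ResourceHandle:
    """Hypothetical API whose author typo'd the cleanup method."""

    def open(self):
        self.is_open = True

    def colse(self):          # misspelling of "close"
        self.is_open = False

# An autocomplete-style prefix search for "clo" turns up nothing,
# so a consumer might wrongly conclude that no Close call is needed:
matches = [name for name in dir(ResourceHandle) if name.startswith("clo")]
```

The method exists and works fine; it's simply invisible to anyone exploring the API the way most of us actually do.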

Other Manual Typing Errors

The scope of this particular issue goes beyond auto-complete functionality.  Perhaps you lack that functionality in your environment, or perhaps you simply don't use it much.  In that case, you'll be hand typing your code.

Now, imagine hand typing the call above to a close method.  Do you instinctively type "Colse" or do you instinctively type "Close?"  So what do you think will happen?

You'll expect the call to be Close and you'll type that.  Then, you'll stare in disbelief for a moment at the compiler message.  You'll probably do a clean and rebuild.  You'll stare again for a while and squint.  Then, finally, you'll smack your forehead, realize the problem, and silently berate the person who misspelled the method name.

Again, the impact remains the same.  Most likely this creates only friction and annoyance.  Every now and then, it may trigger a thoroughly incorrect use of a library or API.

Anchoring Effect

Moving away from the theme of confusion when using a declared member, consider the declaration itself.  During the use of a variable/method/class/etc, you must spell it right before the compiler allows you to proceed (assuming a strongly typed language).  With the original declaration, however, you have the freedom to spell things wrong to your heart's content.  When you do this, the original copy holds the error.

That first misspelling allows for easy correction.  Same goes when you've used it only a time or two.  But as usage grows and spreads throughout the codebase, the fix becomes more and more of a chore.  Before long (and without easy refactoring tools), the chore becomes more than anyone feels like tackling, and the error calcifies in place.

Your unaddressed spelling mistake today makes fixes more difficult tomorrow.

Comprehension Confusion

Let's switch gears again and consider the case of a maintenance programmer reading for comprehension.  After all, programmers do a whole lot more reading of code than they do modification of it.  So, a casual read is a likely situation.

Spelling errors cloud comprehension.  A simple transposition of characters, or a common error such as spelling "dependency" as "dependancy," does not present an insurmountable problem.  But a truly mangled word can leave readers scratching their heads and wondering what the code actually means, almost as if you'd left some kind of brutal Hungarian notation in there.

Taking the time to get the spelling right ensures that anyone maintaining the code will not have this difficulty.  Code is hard enough to understand, as-is, without adding unforced errors to the mix.

The Embarrassment Factor

And, finally, there's the embarrassment factor.  And I don't mean the embarrassment of your coworkers saying, "wow, that guy doesn't know how to spell!"  I'm talking about the embarrassment factor for the team.

Think of new developers hiring on or transferring into the group.  They're going to take a look at the code and draw conclusions about your team.  Software developers tend to have exacting, detail-oriented minds, and they tend to notice mistakes.  Having a bunch of spelling mistakes in common words makes it appear either that the team doesn't know how to spell or that it has a sloppy approach.  Neither of those is great.

But also keep in mind that what happens in the code doesn't always stay in the code.  Bits of the code you write might appear on team dashboards, build reports, unit test run outputs, etc.  People from outside of the team may be examining acceptance tests and the like.  And, you may have end-user documentation generated automatically using your code (i.e. if you make developer tools or APIs).  Do you really want the documentation you hand to your customers to contain embarrassing mistakes?

It's Easy to Get Right

At this point, I've laid out my arguments and finished making the case.

But, by way of closing words, I'd like to comment on what might be the biggest shame of the whole thing.  Purging your code of spelling errors doesn't require you to be an expert speller.  It doesn't require you to copy source code into MS Word or something and run a check.  You have tools at your disposal that will do this for you, right in your IDE.  All you need to do is turn them on.

I recommend that you do this immediately.  It's easy, unobtrusive, and offers only upside.  And not only will you excise spelling mistakes from your code -- you'll also prevent that annoying kid in the front of the class from bothering you about stuff you don't have time for.

Learn more about GhostDoc's source code spell checker and eliminate embarrassing typos in your apps and documentation before you ship them.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Wednesday, September 14, 2016 7:06:00 AM (Pacific Standard Time, UTC-08:00)
 Wednesday, August 24, 2016

In the world of programming, 15 years or so of professional experience makes me a grizzled veteran.  That certainly does not hold for the work force in general, but youth dominates our industry via the absolute explosion of demand for new programmers.  Given the tendency of developers to move around between projects and companies, 15 years have shown me a great deal of variety.

Perhaps nothing has exemplified this variety more than the code review.  I've participated in code reviews that were grueling, depressing marathons.  On the flip side, I've participated in ones where I learned things that would prove valuable to my career.  And I've seen just about everything in between.

Our industry has come to accept that peer review works.  In the book Code Complete, author Steve McConnell cites it, in some circumstances, as the single most effective technique for avoiding defects.  And, of course, it helps with knowledge transfer and learning.  But here's the rub -- implemented poorly, it can also do a lot of harm.

Today, I'd like to make the case for the automated code review.  Let me be clear.  I do not view this as a replacement for any manual code review, but as a supplement and another tool in the tool chest.  But I will say that automated code review carries less risk than its manual counterpart of having negative consequences.

The Politics

I mentioned extremely productive code reviews.  For me, this occurred when working on a team with those I considered friends.  I solicited opinions, got earnest feedback, and learned.  It felt like a group of people working to get better, and that seemed to have no downside.

But I've seen the opposite, too.  I've worked in environments where the air seemed politically charged and competitive.  Code reviews became religious wars, turf battles, and arguments over minutiae.  Morale dipped, and some people went out of their way to find ways not to participate.  Clearly no one would view this as a productive situation.

With automated code review, no politics exist.  Your review tool is, of course, incapable of playing politics.  It simply carries out its mission on your behalf.  Automating parts of the code review process -- especially something relatively arbitrary such as coding standards compliance -- can give a team far fewer opportunities to posture and bicker.

Learning May Be Easier

As an interpersonal activity, code review carries some social risk.  If we make a silly mistake, we worry that our peers will think less of us.  This dynamic is mitigated in environments with a high trust factor, but it exists nonetheless.  In more toxic environments, it dominates.

Having an automated code review tool creates an opportunity for consequence-free learning.  Just as the tool plays no politics, it offers no judgment.  It just provides feedback, quietly and anonymously.

Even in teams with a supportive dynamic, shy or nervous folks may prefer this paradigm.  I'd imagine that anyone would, to an extent.  An automated code review tool points out mistakes via a fast feedback loop and offers consequence-free opportunity to correct them and learn.

Catching Everything

So far I've discussed ways to cut down on politics and soothe morale, but practical concerns also bear mentioning.  An automated code review tool necessarily lacks the judgment that a human has.  But it has more thoroughness.

If your team only performs peer review as a check, it will certainly catch mistakes and design problems.  But will it catch all of them?  Or is it possible that you might miss one possible null dereference or an empty catch block?  If you automate the process, then the answer becomes "no, it is not possible."
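To illustrate what "thoroughness" means here, consider a toy version of such a check, sketched in Python (the posts' examples are .NET, but the idea is language-neutral, and the function name is mine, not any particular product's).  It flags empty exception handlers -- exactly the sort of slip a tired human reviewer can overlook but a tool never will:

```python
import ast

def empty_handler_lines(source):
    """Return line numbers of 'except' blocks whose body is only 'pass'."""
    return [
        node.lineno
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.ExceptHandler)
        and all(isinstance(stmt, ast.Pass) for stmt in node.body)
    ]

snippet = """
try:
    risky()
except ValueError:
    pass
"""
flagged = empty_handler_lines(snippet)  # flags the silent handler
```

A real analysis tool covers hundreds of such rules, and it applies every one of them to every line, every time.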

For the items in a code review that you can automate, you should, for the sake of thoroughness.

Saving Resources and Effort

Human code review requires time and resources.  The team must book a room, coordinate schedules, use a projector (presumably), and assemble in the same location.  Of course, allowing for remote, asynchronous code review mitigates this somewhat, but it can't eliminate the salary dollars spent on the activity.  However you slice it, code review represents an investment.

In this sense, automating parts of the code review process has a straightforward business component.  Whenever possible and economical, save yourself manual labor through automation.

When there are code quality and practice checks that can be done automatically, do them automatically.  And it might surprise you to learn just how many such things can be automated.

Improbable as it may seem, I have sat in code reviews where people argued about whether or not a method would exhibit a runtime behavior, given certain inputs.  "Why not write a unit test with those inputs," I asked.  Nobody benefits from humans reasoning about something the build, the test suite, the compiler, or a static analysis tool could tell them automatically.
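For instance, a debate about edge-case behavior collapses into something like this (a sketch in Python with an invented function; any xUnit-style framework works the same way):

```python
import unittest

def safe_ratio(numerator, denominator):
    """Hypothetical method under debate: what happens at zero?"""
    if denominator == 0:
        return 0.0
    return numerator / denominator

class SafeRatioBehavior(unittest.TestCase):
    def test_zero_denominator_returns_zero(self):
        # Run the disputed input through the suite, not the conference room.
        self.assertEqual(safe_ratio(5, 0), 0.0)

    def test_normal_division(self):
        self.assertEqual(safe_ratio(6, 3), 2.0)
```

Five minutes of test writing answers the question definitively, and the answer stays answered on every subsequent build.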

Complementary Approach

As I've mentioned throughout this post, automated code review and manual code review do not directly compete.  Humans solve some problems better than machines, and vice-versa.  To achieve the best of all worlds, you need to create a complementary code review approach.

First, understand what can be automated, or, at least, develop a good working framework for guessing.  Coding standard compliance, for instance, is a no-brainer from an automation perspective.  You do not need to pay humans to figure out whether variable names are properly cased, so let a review tool do it for you.  You can learn more about the possibilities by simply downloading and trying out review and analysis tools.

Secondly, socialize the tooling with the team so that they understand the distinction as well.  Encourage them not to waste time making a code review a matter of checking things off of a list.  Instead, manual code review should focus on architectural and practice considerations.  Could this class have fewer responsibilities?  Is the builder pattern a good fit here?  Are we concerned about too many dependencies?

Finally, I'll offer the advice that you can use the balance between manual and automated review based on the team's morale.  Do they suffer from code review fatigue?  Have you noticed them sniping a lot?  If so, perhaps lean more heavily on automated review.  Otherwise, use the automated review tools simply to save time on things that can be automated.

If you're currently not using any automated analysis tools, I cannot overstate how important it is that you check them out.  Our industry built itself entirely on the premise of automating time-consuming manual activities.  We need to eat our own dog food.

Related resources

Tools at your disposal

SubMain offers CodeIt.Right, which integrates easily into Visual Studio as a flexible and intuitive automated code review solution that works in real time, on demand, at source control check-in, or as part of your build.

Learn more about how CodeIt.Right can help with automated code review and improve your code quality.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Wednesday, August 24, 2016 2:06:00 PM (Pacific Standard Time, UTC-08:00)
 Thursday, August 18, 2016

Notwithstanding some oddball calculator and hobby PC hacking, my first serious programming experience came in college.  A course called "Intro to C++" got us acquainted with arrays, loops, data structures and the like.  Given its introductory nature, this class did not pose a particularly serious challenge (that would come later).  So, with all of the maturity generally possessed by 18-year-olds, we had a bit of fun.

I recall contests to see how much application logic we could jam into the loop conditions, and contests to see how much code could be packed onto one line.  These sorts of scavenger hunt activities obviously produced dense, illegible code.  But then, that was kind of the point.

Beyond these silly hijinks, however, a culture of code illegibility permeated this (and, as I would learn later, other) campuses.  Professors nominally encouraged code readability.  After all, comments facilitated partial credit in the event of a half-baked homework submission.  But, even still, the mystique of the ingenious but inscrutable algorithm pervaded the culture for both students and faculty.  I had occasion to see code written by various professors, and I noticed no comments that I can recall.

Professionalism via Thoroughness

When I graduated from college, I carried this culture with me.  But not for long.  I took a job where I spent most of my days working on driver and kernel module programming.  There, I noticed that the grizzled veterans to whom I looked up meticulously documented their code.  Above each function sat a neat, orderly comment containing information about its purpose, parameters, return values, and modification history.

This, I realized, was how professionals conducted themselves.  I was hooked.  Fresh out of college, and looking to impress the world, I sought to distinguish myself from my undisciplined student ways.  This decision ushered in a period of many years in which I documented my code with near religious fervor.

My habit included, obviously, the method headers that I emulated.  But on top of that, I added class headers and regularly peppered my code with line comments that offered such wisdom as "increment the loop counter until the end of the array."  (Okay, probably not that bad, but you get the idea).  I also wrote lengthy readme documents for posterity and maintenance programmers alike.  My professionalism knew no bounds.
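To give a flavor of that style, here is a hypothetical reconstruction (not actual code from that job) of the sort of header-and-line commenting I practiced:

```csharp
/// <summary>
/// Computes the arithmetic mean of the values in the array.
/// </summary>
/// <param name="values">The array of values to average.</param>
/// <returns>The arithmetic mean of the values.</returns>
/// <remarks>
/// Modification history:
///   2004-06-12  ED  Created.
///   2004-08-03  ED  Added empty-array guard.
/// </remarks>
public static double Average(double[] values)
{
    // Guard against division by zero for empty input.
    if (values.Length == 0)
        throw new ArgumentException("values must not be empty", nameof(values));

    double sum = 0;
    // Accumulate each element into the running total.
    foreach (double value in values)
        sum += value;

    // Divide the total by the count to produce the mean.
    return sum / values.Length;
}
```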

Clean Code as Plot Twist

Eventually, I moved on from that job, but carried my habits with me.  I wrote different code for different purposes in different domains, but stayed consistent in my commenting diligence.  This I wore as a badge of pride.

While I was growing in my career, I started to draw inspiration from the clean code movement.  I began to write unit tests, I practiced the SOLID principles, I watched Uncle Bob talks, made my methods small, and sought to convince others to do the same.  Through it all, I continued to write comments.

But then something disconcerting happened.  In the clean code circles I followed and aspired to, I started to see posts like this one, in which the author wrote extensively about comments as a code smell.

Comments are a great example of something that seems like a Good Thing, but turn out to cause more harm than good.

For a while, I dismissed this heresy as an exception to the general right-thinking of the clean code movement.  I ignored it.  But it nagged at me nonetheless, and eventually, I had to confront it.

When I finally did, I realized that I had continued to double down on a practice simply because I had done it for so long.  In other words, the extensive commenting represented a ritual of diligence rather than something in which I genuinely saw value.

Down with Comments

Once the floodgates had opened, I did an about-face.  I completely stopped writing comments of any sort whatsoever, unless it was part of the standard of the group I was working with.

The clean coder rationale flooded over me and made sense.  Instead of writing inline comments, make the code self-documenting.  Instead of comments in general, write unit and acceptance tests that describe the desired behaviors.  If you need to explain in English what your code does, you have failed to explain with your code.
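A small, hypothetical before-and-after illustrates the principle (the customer type and methods are invented for illustration):

```csharp
// Before: the comment carries the meaning.
// Check whether the customer qualifies for the loyalty discount.
if (c.Orders.Count > 10 && c.YearsActive >= 2)
    ApplyDiscount(c);

// After: the code carries the meaning, and the comment becomes unnecessary.
if (customer.QualifiesForLoyaltyDiscount())
    ApplyDiscount(customer);
```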

Probably most compelling of all, though, was the tendency that I'd noticed for comments to rot.  I cannot begin to estimate how many times I dutifully wrote comments about a method, only to return a year later and see that the method had been changed while the comments had not.  My once-helpful comments now lied to anyone reading them, making me look either negligent or like an idiot.  Comments represented duplication of knowledge, and duplication of knowledge did what it always does: it got out of sync.

My commenting days were over.

Best of All Worlds

That still holds true to this day.  I do not comment my code in the traditional sense.  Instead, I write copious amounts of unit, integration and acceptance tests to demonstrate intent.  And, where necessary and valuable, I generate documentation.

Let's not confuse documentation and commenting.  Commenting code targets maintenance programmers and team members as the intended audience.  Documenting, on the other hand, targets external consumers.  For instance, if I maintained a library at a large organization, and other teams used that library, they would be external consumers rather than team members.  In effect, they constitute customers.

If we think of API consumers as customers, then generating examples and documentation becomes critically important.  In a sense, this activity is the equivalent of designing an intuitive interface for end-users of a GUI application.  They need to understand how to quickly and effectively make the most of what you offer.

So if you're like me -- if you believe firmly in the tenets of the clean code movement -- understand that comments and documentation are not the same thing.  Also understand that documentation has real, business value and occupies an important role in what we do.  Documentation may take the form of actual help documents, files, or XML-doc style comments that appear in IntelliSense implementations.
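In the .NET world, for instance, that often means XML-doc comments on the public surface of a library, which feed both IntelliSense and generated help files.  A hypothetical sketch (the method and its behavior are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

public static class ConnectionStrings
{
    /// <summary>
    /// Parses a connection string into its component key/value pairs.
    /// </summary>
    /// <param name="connectionString">The raw "key=value;" delimited string.</param>
    /// <returns>A dictionary mapping each key to its value.</returns>
    /// <exception cref="ArgumentNullException">
    /// Thrown when <paramref name="connectionString"/> is null.
    /// </exception>
    public static IDictionary<string, string> Parse(string connectionString)
    {
        if (connectionString == null)
            throw new ArgumentNullException(nameof(connectionString));

        var settings = new Dictionary<string, string>();
        foreach (var part in connectionString.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
        {
            var pair = part.Split(new[] { '=' }, 2);
            settings[pair[0].Trim()] = pair.Length > 1 ? pair[1].Trim() : "";
        }
        return settings;
    }
}
```

Unlike an inline comment, this text is consumed outside the source file, which is exactly what makes it documentation rather than commentary.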

To achieve the best of all worlds, avoid duplication.  Make publishing documentation and examples a part of your process and, better yet, automate these activities.  Your code will stay clean and maintainable and your API users will be well-informed and empowered to use your code.

Learn more about how GhostDoc can help simplify your XML comments and produce and maintain quality help documentation.

About the Author

Erik Dietrich

I'm a passionate software developer and active blogger. Read about me at my site. View all posts by Erik Dietrich

posted on Thursday, August 18, 2016 7:45:00 AM (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, July 18, 2016

Version 5.2 of GhostDoc is a minor feature release for v5.0 users.  It includes:

  • Support for Visual Studio 2015 Update 3
  • Fixes for the latest ASP.NET Core projects
  • GhostDoc now treats underscore as a delimiter to improve summary generation for underscore delimited identifiers
  • "Use Modern URLs" Help Configuration option for declarative help documentation file naming - namespace-classname-membername.htm
  • Option to turn on/off Documentation Hints during setup
  • (Pro) (Ent) Comment Preview is now rendered using the FlatGray theme
  • Plenty of improvements and bug fixes

For the complete list of changes, please see What's New in GhostDoc v5

For an overview of the v5.0 features, visit Overview of GhostDoc v5.0 Features

This version is a required update for Visual Studio 2015 Update 3 users.

Download the new build at http://submain.com/download/ghostdoc/

posted on Monday, July 18, 2016 6:07:00 PM (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Wednesday, May 4, 2016

The Beta for CodeIt.Right v3 has arrived – the new major version of our automated code review and code quality analysis product. Here are the new version highlights:

  • Official support for VS2015 Update 2 and ASP.NET 5/ASP.NET Core 1.0 solutions
  • New Review Code commands:
    • only opened files
    • only checked out files
    • only files modified after specific date
  • Improved Profile Editor with advanced rule search and filtering
  • Improved look and feel for Violations Report and Editor violation markers
  • New rules
  • Setting to keep the OnDemand and Instant Review profiles in sync
  • New Jenkins integration plugin
  • Batch correction is now turned off by default
  • Almost every CodeIt.Right action can now be assigned a keyboard shortcut
  • Preview of the new Dashboard feature

For the complete and detailed list of the v3.0 changes, see What's New in CodeIt.Right v3.0

To give the v3.0 Beta a try, download it here - http://submain.com/download/codeit.right/beta/

Please note: while our early adopters report that the v3.0 Beta has been very stable for them, all the usual Beta software advisories still apply.

 

New Review Code commands

cir3-baseline-filtering

We have renamed the Start Analysis menu to Review Code – it is still the same feature; the new name simply highlights the automated code review nature of the product.  The new Review Code menu includes:

  • Analyze Open Files command - analyze only the files opened in Visual Studio tabs
  • Analyze Checked Out Files command - analyze only files that are checked out from source control
  • Analyze Modified After command - analyze only files that have been modified after a specific date

Known Beta issue – pressing Update only updates the code review criteria; you still need to run the Review Code command manually.  In the release version, code review will run as soon as Update is pressed.

 

cir3-profile-filter

Improved Profile Editor

The Profile Editor now features

  • Advanced rule filtering by rule id, title, name, severity, scope, target, and programming language
  • Lets you quickly show only active, only inactive, or all rules in the profile
  • Shows totals for the profile rules - total, active, and filtered
  • Improved adding rules with multiple categories

 

Dashboard Preview

While it is not yet in its final form, an early preview of the Dashboard feature has shipped with the Beta to give you a rough idea of what we are after – a code quality dashboard view that you can customize to your needs.

 

Feedback

We would love to hear your feedback on the new features! Please email it to us at support@submain.com or post in the CodeIt.Right v3 Beta Forum.


posted on Wednesday, May 4, 2016 6:31:00 AM (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Friday, February 5, 2016

Version 5.1 of GhostDoc is a maintenance release for v5.0 users; it includes minor enhancements and a number of important bug fixes.  Many of the fixes are relevant to the Visual Studio 2015 environment, so while everyone will benefit from this update, it is highly recommended for Visual Studio 2015 users.

For the complete list of changes, please see http://support.submain.com/kb/a42/whats-new-in-ghostdoc-v5.aspx

For an overview of the v5.0 features, visit http://submain.com/blog/ReleasedGhostDocV50.aspx

posted on Friday, February 5, 2016 7:33:00 PM (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Monday, November 23, 2015
Note to GhostDoc Pro v4 users: The v4 licenses won't work with v5. We have sent out the v5 license codes to users with License Protection and an active Software Assurance subscription. If you have not received your new license, or have misplaced it, you can retrieve it on the My Account page. See more information at the bottom of this post.

Both the Pro and Enterprise editions of GhostDoc v5 introduce Documentation Quality hints in the Visual Studio editor, along with Documentation Management assistance: find auto-generated comments; edit or remove bulk-created docs; identify and fix comments that are missing, out of sync, or able to be copied from a base class; and mark generated XML comments as auto-generated and "to be edited".  The v5 also includes multiple help documentation themes and layouts to choose from.

The free version of GhostDoc has been re-branded as GhostDoc Community Edition and adds general improvements, limited generation of CHM help documentation as well as the means to find auto-generated comments.

GD_v5_new_commands

The new menu commands

  • Documentation Quality Hints in Visual Studio editor
  • Documentation Maintenance - find auto-generated comments and edit or remove the bulk-created docs
  • Documentation Maintenance - identify and fix comments that are missing, out of sync, or able to be copied from the base class
  • Theme support for generated help documentation and new themes - Flat Gray and Flat Main
  • Official Visual Studio 2015 support
  • Options to add Auto-generated doc and TODO 'Edit' attributes
  • Option to have the default summary text focused and selected when using the Document This command - lets you quickly edit or override the summary
  • Exclude from Documentation action – marks a member with a tag to exclude it from the help documentation
  • Hide/Show Comments feature – an easy way to expand/collapse documentation comments to minimize the XML Comments footprint in the Visual Studio code editor
  • New Summary Override table in Options - configure predefined summaries for specific member or type names instead of the auto-generated text
  • A basic Build Documentation feature is now available in the Community Edition of GhostDoc – while quite limited and watermarked, it allows you to produce simple CHM help documentation for personal use without paying for the commercial version

For the detailed list of v5.0 changes see What’s New in GhostDoc v5.

To see new features by product edition see this page - http://submain.com/ghostdoc/editions/


Documentation Quality Hints

This new feature provides real-time visual hints in the Visual Studio Editor window to highlight members which have documentation issues that require attention.

GD_v5_maint_hints

The following documentation hint actions included with this release make it very easy to maintain the documentation quality:

GD_v5_maint_hints_list


Documentation Maintenance

This feature will help you identify missing documentation, find auto-generated XML comments, maintain your documentation, and keep it up to date.  Once these are found, GhostDoc provides the tools to edit or remove the bulk-created docs and to add missing or fix dated documentation – one by one or as a batch.  You can fine-tune the search criteria and use your own template library if yours differs from the built-in one.

  • Find auto-generated docs and edit or remove them
  • Find and fix members that are missing documentation
  • Discover members that have parameters, return types, and type parameters out of sync with the existing XML comments and fix the comments
  • Find members that can have XML docs copied from the base class
  • Find documentation that requires editing

GD_v5_maint_autogen

The Community Edition only allows finding auto-generated documentation, and it does not support batch actions – only one action at a time.


Help Documentation Themes

In v5 we are introducing theme support for the generated help documentation and including two new themes; the old help doc view is preserved as the Classic theme.  You can preview the new themes here - Flat Gray (default) and Flat Main.

Enterprise Edition users can modify the existing themes or create and deploy their own help documentation themes – now easier than ever!

The Community Edition theme selection is limited to one – Classic.

GD_v5_help_sample


Auto-generated doc and TODO 'Edit' attributes

The option to add a tag to the XML comment provides an explicit flag that the comment was generated automatically.

There is also an option to add a TODO comment, "TODO Edit XML Comment Template for {member name}", which in turn adds a TODO task to the Visual Studio Task List -> Comments as a reminder that the auto-generated comment requires editing.

GD_v5_autogen_todo

Both flags can be used as additional criteria for the documentation quality hints and documentation management “Find auto-generated Documentation” feature. When generating help documentation these flags are also accounted for – the flagged members can be included, ignored or highlighted in the final docs.


Summary Override

The Summary Override table lets you configure predefined summaries for specific member or type names to be used instead of the auto-generated text.  We ship some predefined summary overrides, and you are welcome to add your own.  If you create a summary override that the GhostDoc user community could benefit from, please submit it to us for review and possible inclusion.

GD_v5_summary_override

 

How do I try it?

Download the v5.0 at http://submain.com/download/ghostdoc/


Feedback is what keeps us going!

Let us know what you think of the new version here - http://submain.com/support/feedback/


Note to the GhostDoc Pro v4 users
: The v4.x license codes won't work with v5.0. For users with License Protection and an active Software Assurance subscription, we have sent out the v5.x license codes. If you have not received your new license, or have misplaced it, you can retrieve it on the My Account page. Users without License Protection, or with an expired Software Assurance subscription, will need to purchase the new version - currently we are not offering an upgrade path other than the Software Assurance subscription. For information about upgrade protection, see our Software Assurance and Support - Renewal / Reinstatement Terms

posted on Monday, November 23, 2015 8:02:00 PM (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, February 24, 2015
If you didn't make it to the webinar, we recommend you watch the webinar recording first - the questions and answers below will make much more sense then.

At last month's webinar, "Asynchronous Programming Demystified," Stephen Cleary, Microsoft MVP and author of "Concurrency in C# Cookbook," introduced the async and await keywords and described how they work.

During the webinar, a number of great questions were asked by viewers that Stephen didn't have sufficient time to answer.  In fact, there were 88 questions in total.  Fortunately, Stephen was kind enough to provide us with his answers below:

Q: You showed us how to correctly use and call async methods. But how do I create an async API out of nothing?

A: The low-level type for this is TaskCompletionSource, which allows you to complete a task manually. There are some higher-level wrappers as well, e.g., Task.Factory.FromAsync will take the old Begin/End style asynchronous methods and wrap them into a task.
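As a sketch of the TaskCompletionSource approach (the LegacyDownloader type and its events are hypothetical stand-ins for a callback-based API):

```csharp
public static Task<string> DownloadAsync(LegacyDownloader downloader, string url)
{
    var tcs = new TaskCompletionSource<string>();

    // Complete the task manually when the legacy callbacks fire.
    downloader.DownloadCompleted += (sender, content) => tcs.TrySetResult(content);
    downloader.DownloadFailed += (sender, error) => tcs.TrySetException(error);

    downloader.StartDownload(url);
    return tcs.Task;   // callers can now simply await this
}
```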

Q: Can we use Async inside LINQ methods (with lambda expressions)?

A: LINQ is inherently synchronous, so there isn't much you can do asynchronously. E.g., you can use Select with an asynchronous delegate, but that gives you a sequence of tasks, and there isn't much you can do with them other than using something like Task.WhenAll. If you want an asynchronous sequence or stream abstraction, a better fit would be Reactive Extensions.
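For example (assuming an HttpClient named client and a collection of urls in scope), Select with an asynchronous delegate produces a sequence of tasks that pairs naturally with Task.WhenAll:

```csharp
// Select with an async delegate yields IEnumerable<Task<string>>...
IEnumerable<Task<string>> downloadTasks = urls.Select(url => client.GetStringAsync(url));

// ...which is mostly useful when awaited all together:
string[] pages = await Task.WhenAll(downloadTasks);
```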

Need Async Guidance?
CodeIt.Right includes extensive Async Best Practices rule set that will guide you through the intricacies of Async. Start a no-cost 14-day trial of CodeIt.Right, SubMain's code quality analysis, automated code review and refactoring for Visual Studio.

Q: What would be the best approach to integrating a 3rd party synchronous library/API into, let's say, our existing asynchronous API?  Since we do want to remain asynchronous, should we wrap it in Task.Run or something else?

A: Answered in webinar

Q: Does async await help with AJAX calls?

A: Async can exist independently on the server and the client. You can use async on the client to help you call AJAX endpoints (i.e., call several of them concurrently). You can also use async on the server to help you implement AJAX endpoints.

Q: Will try-catch around await keyword really catch all exceptions that can be raised within the called async method?

A: Yes; an async method will always place its exceptions on the task it returns, and when you await that task, it will re-raise those exceptions, which can be caught by a regular try/catch.
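A minimal sketch of that behavior (assuming an HttpClient named client):

```csharp
try
{
    string page = await client.GetStringAsync("http://example.com/");
}
catch (HttpRequestException ex)
{
    // The exception was placed on the returned task and re-raised at the await.
    Console.WriteLine($"Download failed: {ex.Message}");
}
```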

Q: Is it true that an async method is not in fact started until either await, Wait or .Result is called on it?

A: No. An async method starts when it is called. The await/Wait/Result will just wait for the method to complete.
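A small sketch makes the timing visible (ComputeAsync is a made-up method):

```csharp
static async Task<int> ComputeAsync()
{
    Console.WriteLine("ComputeAsync started");  // runs as soon as the method is called
    await Task.Delay(1000);                     // the method yields to its caller here
    return 42;
}

// At the call site:
Task<int> task = ComputeAsync();  // "ComputeAsync started" has already printed
int result = await task;          // await merely waits for completion
```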

Q: We use MSMQ for a lot of our asynchronous WCF processing. It's heavy and expensive. Can async/await replace some if not all of the MSMQ processing?

A: Async/await is not a direct replacement for any kind of queuing. You can use async to interact with the queue, though. The MessageQueue class unfortunately does not follow a standard asynchronous pattern, but you can use TaskCompletionSource to create await-compatible wrapper methods. The MSDN docs "Interop with Other Asynchronous Patterns and Types" under "Task-based Asynchronous Pattern" should get you started.

Q: IAsyncResult fits very nicely with Windows low level and IOPorts. Does async/await have the same high performance?

A: Answered in webinar

Q: Can you explain when it is appropriate to use ConfigureAwait(false)?

A: Anytime that the async method does not need its context, it should use ConfigureAwait(false). This is true for most library code.
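A hypothetical library method illustrating the guideline:

```csharp
// Library code: it does not touch the UI, so it does not need to resume
// on the caller's context.
public static async Task<string> ReadAllTextAsync(string path)
{
    using (var reader = new StreamReader(path))
    {
        return await reader.ReadToEndAsync().ConfigureAwait(false);
    }
}
```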

Q: Re. Task.Run() blocking a background thread... even using await will block a thread at some point surely?

A: No, await does not block a thread. I have more details in my blog post "There Is No Thread".

Q: Do you need to tweak machine/web config to get greater throughput for asynchrony?

A: Answered in webinar

Q: What about WhenAll?

A: WhenAll can be used to concurrently execute multiple asynchronous operations.

Q: What are the main problems with using ContinueWith?  There are a lot of companies that have this type of implementation because of legacy code.

A: ContinueWith is problematic for several reasons. For one, a single logical method must be broken up into several delegates, so the code is much more difficult to follow than a regular await. Another problem is that the defaults are not ideal; in particular, the default task scheduler is not TaskScheduler.Default as most developers assume - it is in fact TaskScheduler.Current. This unexpected task scheduler can cause issues like the one I describe in my blog post "StartNew Is Dangerous".
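A side-by-side sketch of the contrast (DownloadAsync and Display are hypothetical):

```csharp
// ContinueWith: the logic is split across delegates, and the continuation
// runs on TaskScheduler.Current by default, which is often not what you expect.
DownloadAsync().ContinueWith(t => Display(t.Result));

// await: the method reads top to bottom and resumes via the captured context.
string data = await DownloadAsync();
Display(data);
```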

Q: Why is button1_Click using the async keyword, when it is calling the async method?

A: Any method that uses the await keyword must be marked async. Normally, I would make the method an "async Task" method, but since this is an event handler, it cannot return a task, so I must make it an "async void" method instead.
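In other words (a sketch of a WinForms handler; button1, label1 and GetMessageAsync are hypothetical):

```csharp
// Event handlers are the one place "async void" is acceptable: the signature
// is dictated by the event, so the method cannot return Task.
private async void button1_Click(object sender, EventArgs e)
{
    // GetMessageAsync would be an "async Task<string>" method elsewhere.
    label1.Text = await GetMessageAsync();
}
```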

Q: Are there any means to debug async code easily?

A: VS2013 has pretty good support for debugging asynchronous code, and the tooling continues to improve in this area. The one drawback to async debugging is that the call stack is not as useful. This is not a problem of async; we developers have gotten used to the idea that the call stack is a trace of how the program got to where it is - but that mental model is incorrect; the call stack is actually telling the program where to go next. I have an AsyncDiagnostics library that preserves "how the program got to where it is", which is sometimes helpful when trying to track down an issue.

Q: In ASP.NET there are many queues. What will happen when the system is overloaded and we saturate the async I/O ports? Will it throw an exception, or will it act as it would without async?

A: When the queues fill up, it will act the same. Async provides better scalability, but not infinite scalability. So you can still have requests timing out in the queues or being rejected if the queues fill up. Note that when the async request starts, it is removed from the queue, so async relieves pressure on the queues.

Q: Let's say I have a WinForms app with a method that renders some image, taking 60 seconds for example. When the user presses the Begin button, I want the render to occur and then to say "Finished" when done, without blocking in the meantime. Can you suggest a strategy?

A: Answered in webinar

Q: Is it acceptable to create asynchronous versions of synchronous methods by just calling the synchronous methods with Task.Run?

A: Answered in webinar

Q: Is it really bad to wrap async code in sync code? I thought that was a very bad practice, but I have seen OAuth packages wrapping async code in sync methods with some kind of TaskHelper, e.g. GetUser internally using GetUserAsync.

A: The problem with library code is that sometimes you do want both asynchronous and synchronous APIs. But you don't want to duplicate your code base. It is possible to do sync-over-async in some scenarios, but it's dangerous. You have to be sure that your own code is always using ConfigureAwait(false), and you also have to be sure that any code your code calls also uses ConfigureAwait(false). (E.g., as of this writing, HttpClient does on most platforms but not all). If anyone ever forgets a single ConfigureAwait(false), then the sync-over-async code can cause a deadlock.
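The deadlock scenario being described can be sketched like this (GetDataAsync and client are hypothetical):

```csharp
// UI thread: blocks, waiting for the task to complete...
public void Button_Click(object sender, EventArgs e)
{
    string data = GetDataAsync().Result;  // sync-over-async
}

private async Task<string> GetDataAsync()
{
    // ...but without ConfigureAwait(false), this await tries to resume on
    // that same, now-blocked UI thread. Neither side can make progress.
    return await client.GetStringAsync("http://example.com/");
}
```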

Q: If you have a large application with lots of different async operations, how do you handle the correct "flow" so the user will not use the application in the wrong way? Are there best practices for this?

A: The approach I usually use is to just disable/enable buttons as I want them to be used. There is a more advanced system for UI management called Reactive UI (RxUI), but it has a higher learning curve.

Async Guidance at your fingertips!
CodeIt.Right includes extensive Async Best Practices rule set that will guide you through the intricacies of Async. Start a no-cost 14-day trial of CodeIt.Right, SubMain's code quality analysis, automated code review and refactoring for Visual Studio.

Q: Does await produce managed code in .NET? Can we write unmanaged code within async/await blocks?

A: Await does produce managed (and safe) code. I believe unsafe code can be within an async method (though I've never tried it), but await cannot be used within an unsafe code block.

Q: Any advice on using a DAL (synchronous, against MSSQL) with async calls? Use Task.Run, or rewrite?

A: I'd recommend using the asynchronous support in EF6 to rewrite the DAL as purely asynchronous. But if you are in a situation where you need UI responsiveness and don't want to take the time to make it asynchronous, you can use Task.Run as a temporary workaround.

Q: But you do want it for CPU bound code on client UIs (WPF, WinForms, Phone, etc.)

A: Answered in webinar

Q: When I am awaiting on several tasks, is it better to use WaitAll or WhenAll?

A: WaitAll can cause deadlock issues if the tasks are asynchronous, just like Result and Wait do. So, I would recommend "await Task.WhenAll(...)" for asynchronous code.

Q: You say await Task.Run(() => Method()) is OK to do... I'm assuming it's not best practice, or just not the way Stephen does it? Is it a common or a personal practice?

A: Answered in webinar

Q: Can you explain the Server Side Scalability benefit a little more?

A: Answered in webinar

Q: If there is a use case where I have to make an async call from synchronous code, what is the best way to do that?

A: There is no good way to do sync-over-async that works in every scenario. There are only hacks, and there are some scenarios where no hack will work. So, for sure, the first and best approach is to make the calling code async; I have a blog post series on "async OOP" that covers ways to make it async even if it doesn't seem possible at first.

If you absolutely must do sync-over-async, there are a few hacks available. You can block on the async code (e.g., Result); you can execute the async code on a thread pool thread and block on that (e.g., Task.Run(() => ...).Result); or you can do a nested message loop. These approaches are all described in Stephen Toub's blog post "Should I Expose Synchronous Wrappers for My Asynchronous Methods?"
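Sketches of the first two hacks mentioned (GetDataAsync is hypothetical):

```csharp
// Hack 1: block directly. Deadlock-prone if the async code captures
// a UI or ASP.NET context.
string result = GetDataAsync().Result;

// Hack 2: run the async code on a thread pool thread, where there is no
// context to capture, then block on that.
string result2 = Task.Run(() => GetDataAsync()).Result;
```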

Q: Would "unit testing" be part of "Async Best Practices"? As in, would you be giving tips on best way to unit test in that future proposed webinar?

A: Answered in webinar

Q: What is the appropriate way to unit test an async method?

A: Answered in webinar

Q: The benefit : "Responsiveness on the client side" sounds like a background process. I thought async wasn't a background thing...

A: Answered in webinar

Q: I've read and heard often that another thread is not created. I'm struggling to understand how I/O is occurring without a thread managing it while the main thread is released. I comprehend how it gets back, i.e. an event of sorts picking up on the stack where it left off.

A: I have a blog post "There Is No Thread" that explains this in detail.

Q: When implementing IUserStore for Identity, there are things that require you to implement a Task-returning async method, but I don't see any need to call an async method: Task IUserStoreMethod() { /* no async stuff, but it requires a Task, and it can't be changed because it comes from the interface */ }. How should I write the body? Is Task.Run() inside the method body an exception here?

A: Normally, I/O is asynchronous. So "saving" a user is an inherently I/O-bound operation, and should be asynchronous if possible. If you truly have a synchronous implementation (e.g., saving the user in memory as part of a unit test), then you can implement the asynchronous method by using Task.FromResult.
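For example, a hypothetical in-memory implementation of such an interface method:

```csharp
private readonly Dictionary<string, User> users = new Dictionary<string, User>();

// Implements a Task-returning interface method that performs no real I/O.
public Task<User> FindByIdAsync(string userId)
{
    User user;
    users.TryGetValue(userId, out user);
    // Nothing truly asynchronous happens, so return an already-completed task.
    return Task.FromResult(user);
}
```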

Q: Does Await spin a new thread under the hoods?

A: Answered in webinar

Q: What is the best way to call Async Methods from class constructors?

A: Answered in webinar

Q: Shouldn't the Click event handler be also renamed to ClickAsync?

A: Answered in webinar

Q: Is it possible to communicate progress from the async task?

A: Yes. An asynchronous method can report progress by taking an IProgress parameter and calling its Report method. UI applications commonly use Progress as their implementation of IProgress. There's more information on MSDN under the "Task-based Asynchronous Pattern" topic.
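A sketch of the pattern (the delay loop is a stand-in for real work, and progressBar is a hypothetical UI control):

```csharp
public async Task DownloadAsync(IProgress<int> progress)
{
    for (int percent = 0; percent <= 100; percent += 10)
    {
        await Task.Delay(50);       // stand-in for a chunk of real work
        if (progress != null)
            progress.Report(percent);
    }
}

// In the UI: Progress<T> captures the UI context, so the callback
// runs safely on the UI thread.
var progress = new Progress<int>(percent => progressBar.Value = percent);
await DownloadAsync(progress);
```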

Q: How would unit/integration test code coverage influence designs and usage of async/await?

A: Answered in webinar

Q: So if my UI uses await/async to call a WebAPI method, the method itself has to be async or else it will be blocking, correct?

A: Answered in webinar

Q: I have a project that interacts with SharePoint 2010 object model, so bound to .NET 3.5. Any caveats when using TPL for 3.5?

A: .NET 3.5 is before the TPL was introduced (and well before async/await). There is an AsyncBridge project which attempts to back port the TPL and async support, but I haven't ever used it.

Q: Can I use Async and await inside a sandboxed CRM Dynamics plugin?

A: I don't know about Dynamics, sorry. But if they have support for .NET 4.5, I don't see why not.

Q: How can, for example, the DownloadAsync method be canceled in a proper way from another UI action?

A: Cancellation is done with the CancellationToken/CancellationTokenSource types in .NET. Usually, asynchronous methods just pass the CancellationToken through to whatever APIs they call. For more information, see the MSDN topics "Task-based Asynchronous Pattern" and "Cancellation in Managed Threads".
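A sketch of wiring cancellation between two UI actions (client and url are hypothetical):

```csharp
private CancellationTokenSource cts;

private async void StartButton_Click(object sender, EventArgs e)
{
    cts = new CancellationTokenSource();
    try
    {
        // Pass the token through to the underlying async API.
        var response = await client.GetAsync(url, cts.Token);
    }
    catch (OperationCanceledException)
    {
        // The download was canceled by the other UI action.
    }
}

private void CancelButton_Click(object sender, EventArgs e)
{
    if (cts != null)
        cts.Cancel();
}
```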

Q: How to call an async method from a synchronous method or controller?

A: Answered in webinar

Q: Is .NET 4.5.1 the minimum for async / await?

A: Answered in webinar

Q: How do we do exception handling inside the DownloadAsync function?

A: Answered in webinar

Q: Can you explain how we can perform unit testing using these new keywords?

A: Answered in webinar

Q: Is async/await useful for WPF and Windows Form?

A: Yes, async is useful in any UI scenario.

Q: For Task Parallel and async/await which one we should use?

A: The Task Parallel Library is great for CPU-bound code. Async is better for I/O-bound code.

Q: If you've got a normal MVC controller that returns a standard view, and that view contains AJAX code to fetch data from an async (WebAPI) controller, would the calling thread be blocked while the AJAX call is running? We have a situation at work where we can't switch pages before the AJAX call is done, which seems a bit weird to me.

A: Answered in webinar

Q: When building async controllers/methods, is there some way to tell that the code is actually running asynchronously? How can I tell that the code is non-blocking?

A: Answered in webinar

Need Async Guidance?
CodeIt.Right includes extensive Async Best Practices rule set that will guide you through the intricacies of Async. Start a no-cost 14-day trial of CodeIt.Right, SubMain's code quality analysis, automated code review and refactoring for Visual Studio.
posted on Tuesday, February 24, 2015 5:20:00 PM (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, January 6, 2015
Recording of the webcast, slides and demo code have been posted to the website - watch it here
Enjoy the recording, and please let us know how we can help!

Featuring Stephen Cleary, Microsoft MVP

  Date: Wednesday, January 14th, 2015
  Time: 10:00 am PST / 1:00 pm EST

Recording Available

Asynchronous code using the new async and await keywords seems to be everywhere these days! These keywords are transforming the way programs are written. Yet many developers feel unsure about async programming.

Get demystified with Stephen Cleary, as he introduces the new keywords and describes how they work. Stephen is the author of "Concurrency in C# Cookbook" as well as several MSDN articles on asynchronous programming. Together, we'll cover:

  • How the async and await keywords really work
  • How to think about asynchronous code
  • The difference between asynchrony and parallelism
  • Common mistakes when learning asynchronous programming
  • Fixing Async code smells with CodeIt.Right

If this time isn't convenient for you, register and we will send you the recording afterwards.

Recording Available

posted on Tuesday, January 6, 2015 5:50:00 AM (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Thursday, December 4, 2014

First, we want to thank all of you for the support and loyalty you have given us over the last few years. We truly have the most amazing and passionate community of developers on the planet, and it makes our job an absolute joy. If you already have GhostDoc Pro with License Protection, rejoice! The upcoming changes are not going to affect you (and if you were thinking about purchasing additional licenses, now is the time).

If you don't have GhostDoc Pro, this is your last chance to purchase it with the License Protection and receive free upgrades for the life of the product.

We are working harder than ever to bring more great features to your favorite plugin. We are super excited about the things we're working on and can't wait to share them with you over the next few months!

We will be making upgrade protection changes for new GhostDoc Pro users in order to align GhostDoc Pro maintenance with all other SubMain products.

Starting January 1, 2015, for new license purchases only, we are retiring the lifetime License Protection option for GhostDoc Pro and replacing it with an annual Software Assurance subscription offering.

If you have been thinking about buying new licenses or adding more licenses, now is the time! Purchase GhostDoc Pro with License Protection by December 31, 2014 and save big on future GhostDoc Pro upgrades!

Purchase GhostDoc Pro w/ License Protection now

What is Software Assurance subscription?

SubMain customers can purchase 12 months of Software Assurance subscription with the purchase of any new license. Upgrade protection includes access to all major and minor version upgrades for 12 months from the date of purchase at no additional charge.

Upgrade Protection Timeline 2015

For example, if a new GhostDoc Pro license is purchased on May 1, 2015, upgrade protection will expire on April 30, 2016. During this time, the customer can download and install any minor version upgrades. In addition, if SubMain issues a major release of GhostDoc Pro during the subscription period, the license can be upgraded to the latest version at no additional charge. With SubMain's Software Assurance, customers will always have access to the latest features and fixes.

For more information please see Software Assurance - Renewal / Reinstatement

Again, please note that this new upgrade protection subscription will only affect new license purchases after January 1, 2015. All existing customer licenses with License Protection and those purchased by December 31st, 2014 will be honored and free upgrades will be provided to users with License Protection for the life of the product.

Thanks again for all of your support. Keep an eye out for more new exciting releases coming very soon!

[Edit: added some frequently asked questions]

Q: How does the Software Assurance subscription work for GhostDoc Pro?

A: It works the same way it does for all other SubMain products - the initial subscription term is one year from the purchase date. It renews at the end of the term for another year unless you choose to discontinue the subscription. If your license purchase did not include subscription auto-renewal, you need to renew your subscription manually in order to keep it current.

For more information please see Software Assurance - Renewal / Reinstatement

Q: I have purchased GhostDoc Pro without the License Protection. Can I add it now?

A: No, License Protection is not something that can be added after the license purchase.

Q: How long do I get updates if I don't purchase Software Assurance subscription?

A: With a new license purchase, you get 90 days of free product updates if you have not purchased the Software Assurance subscription option.

Q: With License Protection do I get all future features or not?

A: Customers who purchased GhostDoc Pro with License Protection before it was replaced by the Software Assurance subscription get exactly the same features as users with a subscription. Think of the soon-to-be-retired License Protection as a prepaid lifetime subscription.

posted on Thursday, December 4, 2014 11:40:31 AM (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Tuesday, October 28, 2014

A recording of the webcast and a copy of the slides have been posted to the website - watch it here

Enjoy the recording, and please let us know how we can help!

Featuring Steve Smith - CTO, Falafel Software; Microsoft Regional Director; Microsoft MVP

  Date: Wednesday, November 12th, 2014
  Time: 10:00 am PST / 1:00 pm EST

Recording Available

Refactoring is a critical developer skill that helps keep code from collapsing under its own weight. Steve is the author of "Refactoring Fundamentals," available on Pluralsight, which covers the subject of code smells and refactoring in depth. This webinar will provide an introduction to the topics of code smells and refactoring, and should help you improve your existing code.

Join Steve Smith as he shows some common code issues, and how to identify and refactor them with SubMain's CodeIt.Right code quality tool. In this webcast Steve will cover:

  • What are Code Smells
  • Principle of Least Surprise
  • Rules of Simple Design
  • Code smells such as Long Method, Large Class, Primitive Obsession, Data Clumps, Poor Names, Inappropriate Abstraction Level, and more
  • Demo using CodeIt.Right to find and resolve code issues

If this time isn't convenient for you, register and we will send you the recording afterwards.

Recording Available

posted on Tuesday, October 28, 2014 2:57:18 PM (Pacific Standard Time, UTC-08:00)    #    Comments [0]   
 Thursday, September 25, 2014

CodeIt.Right v2.7 is a maintenance release that includes:

  • Support for VS2013 Update 3 and newer
  • Improved compatibility with Shared/Universal App projects
  • Exported Violation Report now includes profile name, severity threshold, version of CodeIt.Right and duration of the analysis
  • Exported Violation Report now includes information about Excluded Projects, Files, Rules and Violations
  • Command line version console output shows profile name as well as number of excluded projects, files, rules and violations
  • Other improvements and fixes

For the detailed list, please see What's New in CodeIt.Right v2.7

How do I try it?

Download v2.7 at http://submain.com/download/codeit.right/

posted on Thursday, September 25, 2014 5:03:00 AM (Pacific Standard Time, UTC-08:00)    #    Comments [0]   