Tuesday, May 28, 2019

Be a Hero on Day 1 with ASP.Net Boilerplate

ASP.Net Boilerplate is a web framework you've likely never heard of, but probably should have. Fortunately, I just published a new episode of Code Hour where I give an overview of it.

In the episode I show how ASP.Net Boilerplate provides a huge leg up when starting new asp.net-based web projects. I show how it generates a fully functional site complete with authentication, authorization, multi-tenancy, auditing, and a ton of best practices like dependency injection, unit of work, and repository pattern.
While I only show the Angular front-end, I point out that there are four different front-ends to choose from: React, Angular, Vue, and ASP.Net MVC 5 with Razor Views.
If you've got a spare hour (or far less at chipmunk speed), watch and learn why I think it's awesome, and how it can make you a hero on day 1 of your next asp.net project.

p.s. don't hesitate to subscribe and like, it's good feng shui.



Wednesday, May 15, 2019

Code Sharing Part 2: Automatic Semantic Versioning of NuGet Deploys

How to automatically incorporate semantic version information into NuGet libraries during building, packaging, and publishing of .Net libraries to private NuGet feeds.

In my last code sharing post I covered how to create and consume private NuGet feeds in Azure DevOps via a script, but only locally, and manually.  This isn't close to good enough assuming you've bought into the DevOps dream of consistently and reliably deploying code into production up to hundreds of times per day (ala The Phoenix Project, which I simply can't recommend enough).

To get a continuous integration server automatically compiling, packaging, and deploying a library to NuGet, you'll first need to solve versioning.  Ideally the versioning information will contain semantic information such as whether an API was broken.  That type of information can only be provided by a human, and yet a human shouldn't have to manually specify the version number for each build.

In this article I'll show how to incorporate semantic versioning information into libraries and yet automate it.  I'll start with incorporating versioning information into the Cake scripts from the last post and then get into version automation with GitVersion.

Hard Coded Versioning


There's a subtle bug with the NuGetPush task from my last post.  If you run it twice you'll get this error:

Response status code does not indicate success: 409 (Conflict - The feed already contains 'LibAuthenticator 1.0.0'.)

That's because I didn't specify a version in dotnet pack.  Dotnet pack then tried to pull version information from the .csproj, and since no version information existed in the csproj, it defaulted to 1.0.0 every time.  There are two ways to solve this.

We could put version information in the .csproj.  However, modifying a csproj during a build feels yucky, and doing it locally would require a git revert afterwards.  Uch.

Better, the dotnet pack command documentation shows that we can manually override MSBuild properties with a syntax like "/p:Property=value".  Cake supports this inside the DotNetCorePackSettings via the strongly typed MSBuildSettings property:

Task("Pack")
   .IsDependentOn("Build")
   .Does(() =>
{
   DotNetCorePack(projectDirectory, new DotNetCorePackSettings {
      NoBuild = true,
      IncludeSymbols = true,
      Configuration = configuration,
      MSBuildSettings = new DotNetCoreMSBuildSettings().SetVersion(version)
   });
});
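Under the hood that's roughly equivalent to running this (version value illustrative):

dotnet pack --no-build --include-symbols --configuration Release /p:Version=1.0.1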
The file name when we pack will now contain the version, so we'll additionally need to update the name of the file we're pushing like:

var nupkgFile = configDirectory + File("LibAuthenticator." + version + ".nupkg");

and put that in the NuGetPush task.
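Putting it together, the push task from the last post now looks something like this (the feed URL and the AddNugetSource task are as previously defined):

Task("NuGetPush")
   .IsDependentOn("AddNugetSource")
   .IsDependentOn("Pack")
   .Does(() =>
{
   var nupkgFile = configDirectory + File("LibAuthenticator." + version + ".nupkg");
   NuGetPush(nupkgFile, new NuGetPushSettings {
      Source = nugetFeedUrl,
      ApiKey = "AzureDevOps",
   });
});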

Assembly Meta-Data


There's now a small problem.  If we override PackageVersion during pack and fail to specify the version during build, then the source library's dll meta-data will always be marked as 1.0.0, even though the NuGet package gets the updated version, e.g. 1.0.1.  Fortunately we can pull that "/p:" trick again, this time on dotnet build, to override the version.  The code looks nearly identical:

Task("Build")
   .IsDependentOn("Clean")
   .IsDependentOn("Version")
   .Does(() =>
{
   DotNetCoreBuild(projectDirectory, new DotNetCoreBuildSettings {
      Configuration = configuration,
      MSBuildSettings = new DotNetCoreMSBuildSettings().SetVersion(version)
   });
});
Now we just need to define and set the version variable.  If we hard-code one like var version = "1.0.1"; toward the top of the file, we'll be able to NuGetPush up a new version of our library.  Once at least.  Then we'll run into the 409 error again.  We could make the variable an argument, but we'd have to specify it on every build, and that's not very automated.  There's a better way.
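(For reference, the argument approach would look something like this; the parameter name is hypothetical:)

// fall back to 1.0.0 when no -packageVersion argument is passed
var version = Argument("packageVersion", "1.0.0");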

Calculating Version


If you're using git, and gitflow in particular, then all of the information you need should already be in source control history.  This is especially easy for official releases.  According to gitflow author Vincent Driessen:

origin/master [should] be the main branch where the source code of HEAD always reflects a production-ready state … When the source code in the develop branch reaches a stable point and is ready to be released, all of the changes should be merged back into master somehow and then tagged with a release number.

So for official releases, you could just extract and parse the git tag. But suppose you want to publish to alpha or beta NuGet feeds from non-master branches. The good news is that if you're following git flow everything you need is still available in git, and even better you can use the GitVersion tool to extract it for you. GitVersion returns semantic versions that make sense. Definitely read the documentation, because it's a bit counter-intuitive at first and it's very customizable, but here's a quick summary of the defaults:



  1. If you tag a branch (typically on master for an official release) like v1.0.3, then GitVersion will return "1.0.3"
  2. If you're on a feature branch off of develop then GitVersion will increment the minor number and append a suffix of the branch name and a 1 up counter
  3. As you merge PR's back into develop GitVersion updates the minor version from what it found in master and appends "alpha" and a 1 up counter.
  4. When you're ready to deploy and start a release branch, GitVersion parses the version number out of the branch name and appends "beta" and a 1 up counter
  5. When you merge the release branch back into master GitVersion recognizes the version from the merged release branch even before you've tagged it (see the illustrative examples below)
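To make those defaults concrete, here's roughly what GitVersion hands back in a few common gitflow states (branch names and counters are illustrative):

master tagged v1.0.3                -> 1.0.3
develop after a couple of merges    -> 1.1.0-alpha.2
feature/login branched off develop  -> 1.1.0-login.1
release/1.1.0 branch                -> 1.1.0-beta.1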
If you're suspicious about strings in version numbers, they do work in assembly meta-data:


And when you publish to NuGet, even without a separate alpha or beta feed, they show up, but only if you include prerelease versions:




Now there's just the issue of implementing it.


GitVersion and Cake


Using GitVersion in Cake is extremely easy. First include the tool like this:

#tool "nuget:?package=GitVersion.CommandLine"
Specify a version there for bonus points. Then just set a version variable, something like:

Task("Version")
   .Does(() =>
{
   var symVer = GitVersion();
   Information($"SemVer: {symVer.SemVer}");
   version = symVer.SemVer;
});
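One assumption in that snippet: the version variable has to be declared at the script level before any task uses it, e.g.:

// near the top of build.cake; the Version task overwrites this default
var version = "0.0.0";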
And finally include the new dependency:

Task("Build")
   .IsDependentOn("Clean")
   .IsDependentOn("Version")
   .Does(() => { ... });
To DevOps

And now you can run this from an Azure DevOps Build via a yml step like:

steps:
- powershell: .\build.ps1 -target=NuGetPush -nugetUsername='$(nugetUsername)' -nugetPassword='$(nugetPassword)'
The full version of this entire project is open sourced, and the source code for the Cake solution is located here.

Housecleaning

As a quick aside, since my last post DevOps changed the NuGet feed URL. If this happens to you or you ever need to manually remove or debug your NuGet credentials, they live in: %appData%\NuGet\NuGet.Config
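If you ever need to crack that file open, the relevant parts look something like this abridged sketch (the credentials section is keyed by feed name, and on Windows the password is typically stored encrypted):

<configuration>
  <packageSources>
    <add key="CodeSharingFeed" value="https://sirenofshame.pkgs.visualstudio.com/_packaging/CodeSharingFeed/nuget/v3/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <CodeSharingFeed>
      <add key="Username" value="ShareCodeNuGetPackager" />
      <add key="Password" value="...encrypted..." />
    </CodeSharingFeed>
  </packageSourceCredentials>
</configuration>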

Summary

In this post I've shown how to incorporate version info into NuGet libraries during building and packaging in Cake tasks. I've also shown how you can extract semantic versioning info from source control using GitVersion. I hope this helps you to automate the versioning and deploying of your shared .Net code. Please hit me up on twitter or the comments if this was helpful or you have any questions or suggestions.

Saturday, April 27, 2019

Share Code Like a Boss Part 1 - Private NuGet Feeds in Azure DevOps

If you've ever tried to share code like an API proxy, math calculations, or validation logic between multiple projects, you'll know there are many options.  In this post I'll summarize the most common, then dig into the most versatile for .Net projects: private NuGet feeds.  I'll cover how to:

  • Create private NuGet feeds in Azure DevOps
  • Package .Net Core libraries into NuGet packages
  • Publish NuGet packages to feeds via Cake (C# Make) including authentication
  • Consume NuGet packages from private NuGet feeds
I'll save some of the more interesting bits like automatic semantic versioning, and publishing to private NuGet feeds from an Azure DevOps Build, for a subsequent post.

Bro, Just Ctrl-C, Ctrl-V

Private NuGet feeds are pretty great for many code sharing problems, but they're a little tricky to set up.  Before jumping to the Cadillac solution, remember different problems require different approaches.

For instance, I love copy and pasting for simple code that's unlikely to change.  It's fast and easy of course, and over time the code can be customized on each project without any fear of accidentally breaking another project.  Obviously when the code requires a bug fix, update or new feature, then this approach falls apart fast.

Git submodules seem nice at first because they're easy to set up, but if you've ever used them much you'll find they're kind of a pain to work with.  Also, they don't support versioning, particularly semantic versioning.  That means it's hard to know which version of shared code a particular client is running and thus what features or bug fixes it'll get by upgrading.
    Private NuGet Feeds are trickier to set up, but they offer an easy and powerful distribution model that clients are already familiar with, a sophisticated security model, and they offer semantic versioning so clients can know exactly what they've got and what they're getting.

    Create a Feed

    There are a variety of options for private NuGet feeds.  You could use MyGet, host your own feed, or use Azure DevOps.  If you're already using Azure DevOps for source control or continuous integration, then using it for NuGet feeds keeps everything in one place, and makes authentication easy.  I'll show the Azure DevOps option in this post, just be aware the free tier limits NuGet feeds to 5 users.

    To get started click on "Artifacts" in the left menu.



    Then New Feed


    Give it a name and permissions.  Click Create.  Then the "Connect to Feed" panel will provide all the details you'll need to push artifacts.


    Packing NuGet

    At this point you could follow the instructions and package and publish to the feed at the command line with a nuget.exe command.  However, a one-off command isn't repeatable, and can't be consumed by a continuous integration server.

    If you want the option of automating the building, packaging, and publishing of your library, then some kind of scripting is in order.  As my regular readers know I'm a big fan of Cake (C# Make) for devops solutions.  It:


    • Manages dependencies between tasks
    • Allows running or debugging locally
    • Is cross platform
    • Offers intellisense (in VS Code with the plugin)
    • Is source controlled
    • Supports migrating between any CI Servers


    To use it you'll need to first compile the library you plan to publish.  If you've watched my intro to Cake video then you'll be comfortable with the following script to clean and build your library:

    Task("Clean")
       .Does(() =>
    {
        CleanDirectory(buildOutputDirectory);
    });

    Task("Build")
       .IsDependentOn("Clean")
       .Does(() =>
    {
       DotNetCoreBuild(projectDirectory, new DotNetCoreBuildSettings {
          Configuration = configuration
       });

    });

    If that looks foreign, it's actually just a C# DSL with lambdas that declares two tasks, Clean and Build.  Build is dependent on Clean, so if you run Build it will always Clean first.  Seems simple, but dependency management helps prevent devops spaghetti.

    What we need now is to package our library up into a NuGet package.  Fortunately .Net Core makes this really simple with the "dotnet pack" command, and Cake exposes it with the DotNetCorePack command like this:

    Task("Pack")
       .IsDependentOn("Build")
       .Does(() =>
    {
       DotNetCorePack(projectDirectory, new DotNetCorePackSettings {
          NoBuild = true,
          IncludeSymbols = true,
          Configuration = configuration
       });

    });

    The NoBuild = true attribute is important because it keeps us from accidentally building multiple times.  Instead we let Cake handle our dependencies.  We only ever want the "Build" task to perform builds.

    Running that task (e.g. .\build.ps1 -t Pack on Windows) should result in something like MyLib.1.0.0.nupkg being output to your bin\Release folder.  I'll cover how to update the version in the next post in the series.

    By the way, instead of packaging explicitly from the command line, another option is to check "Generate NuGet package on build" in your project properties for the Release configuration, which adds the setting to your .csproj.  Personally I prefer to do it in Cake so all my devops logic is in one place, but either option is fine.
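    If you go the checkbox route, all it really does is add an MSBuild property to the .csproj, something like this sketch:

    <PropertyGroup Condition="'$(Configuration)'=='Release'">
      <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    </PropertyGroup>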

    Pushing Artifacts to Your Feed

    If you follow the instructions in "Connect to Feed" you'd next need to do something like:

    nuget.exe push -Source "MyNewFeed" -ApiKey AzureDevOps my_package.nupkg

    Switching that over to Cake looks like this:

    const string nugetFeedUrl = "https://sirenofshame.pkgs.visualstudio.com/_packaging/CodeSharingFeed/nuget/v3/index.json";

    Task("NuGetPush")
       .IsDependentOn("Pack")
       .Does(() =>
    {
       NuGetPush(nupkgFile, new NuGetPushSettings {
          Source = nugetFeedUrl,
          ApiKey = "AzureDevOps",
       });
    });

    Easy enough.  Right up until you run it and get this error:

    Unable to load the service index for source https://sirenofshame.pkgs.visualstudio.com/_packaging/CodeSharingFeed/nuget/v3/index.json.
    Response status code does not indicate success: 401 (Unauthorized).

    Authenticating Your Feed

    The solution to authenticating is to create a Personal Access Token with Read and Write permissions for Packaging:


    Then in Cake ask for the username and password from the personal access token as parameters, add a new task to create a NuGet Source with those credentials, and set the new task as a dependency for NuGetPush:


    var nugetUsername = Argument<string>("nugetUsername", null);
    var nugetPassword = Argument<string>("nugetPassword", null);

    Task("AddNugetSource")
       .Does(() =>
    {
       if (!NuGetHasSource(nugetFeedUrl)) {
          NuGetAddSource("CodeSharingFeed", nugetFeedUrl, new NuGetSourcesSettings {
             UserName = nugetUsername,
             Password = nugetPassword
          });
       }
    });

    Task("NuGetPush")
       .IsDependentOn("AddNugetSource")
       .IsDependentOn("Pack")
       .Does(() => { ... });



    Now if you run that puppy like:

    .\build.ps1 -target=NuGetPush -nugetUsername=ShareCodeNuGetPackager -nugetPassword=MyPassword

    It should succeed and your feed page should look something like this:


    Pull Feed

    Now that you've published the feed you're ready to consume it.  You can do that in Visual Studio with 

    1. Tools -> Options
    2. Nuget Package Manager -> Package Sources
    3. +
    4. Add a Name and a Source

    Visual Studio will automatically handle Azure DevOps authentication using your signed in credentials.  Pretty nice.

    But, how does consuming that feed on a CI server work?  Not very well at first.  I'll cover how to overcome that hurdle in my next post.

    Conclusion

    In this article I described the benefits of sharing code with private NuGet feeds, explained how to create feeds, showed how to build, package, authenticate, and push to feeds, and briefly covered how to consume them.

    In the next article I'll cover versioning, building and publishing the library on the build server, and consuming the nuget feed from a Continuous Integration server.

    I hope this was helpful.  If so please let me know in the comments or on twitter.

    Wednesday, January 2, 2019

    How to Make Printed Circuit Boards 101

    I just posted Episode 18 of Code Hour.  In this one I show how to build a Printed Circuit Board (PCB) from prototype, to design, to manufacture.


    I demonstrate how to populate a board with LEDs, resistors and solder paste, and then reflow solder the components to make a finished board.

    Along the way I show a cool time-lapse video of solder paste condensing into its mercury-like liquid state.




    This is a very different Code Hour.  Please write in the comments or hit me up on twitter to let me know how it went, and whether I should do more like this or really just stick to coding.




    Monday, November 5, 2018

    Code Coverage is Finally Easy in .Net Core

    A couple of months ago calculating code coverage on the command line was quite challenging in ASP.Net Core.  Fortunately, as of last month and Visual Studio 15.8, generating the metric is easy.

    Originally, this was going to be the story of the pain involved in starting a new project recently based on Microsoft's new, modern, cross-platform tech stack.  I was going to explain the many steps I went through to calculate what seems like a fairly basic metric.

    But instead, I'm going to tell you how much happier your life can be.

    Background Story


    The tech stack on my new project looks like this:

    • ASP.Net Core
    • XUnit
    • Azure DevOps (VSTS)
    • Cake

    But wait, before I alienate my audience down to zero readers, let me back up a little, bear with me.

    If you read my last blog post on the importance of calculating code coverage from a team perspective you know what code coverage is and why I care so much.  And if you've read my previous articles (or videos) about why Cake is such a fantastic tool for automating a devops pipeline, then you know how I propose to approach the devops problem.  If not, let me tl;dr the Cake decision:

    • Vendor Neutral (could easily switch to Jenkins or Team City)
    • Task Dependency Management Support
    • Locally Runnable
    • Easily Debuggable
    • Version Controlled
    • C#

    'nuf said.  The rest of the technologies were project requirements, except XUnit, which came with ASP.Net Boilerplate (more on that in a future article), and I didn't care enough to change it.  Just replace "X" with "N" in the rest of this article if you're an NUnit fan, it'll be the same.

    Testing with ASP.Net Core




    I'm going to pick up this narrative after having installed Cake and scripted the build task, something like this:

    Task("Build")
        .IsDependentOn("Restore")
        .Does(() =>
    {
        var settings = new DotNetCoreBuildSettings {
            Configuration = configuration
        };
        DotNetCoreBuild("./CakeCoverageTest.sln", settings);
    });


    At that point I was extremely happy to discover that because testing is a first-class citizen I could test it like this:

    Task("Test")
        .Description("Runs unit tests.")
        .IsDependentOn("Build")
        .Does(() =>
    {
        var testLocation = File("./CakeCoverageTest.Test/CakeCoverageTest.Test.csproj");
        var settings = new DotNetCoreTestSettings {
            NoBuild = true
        };
        DotNetCoreTest(testLocation, settings);

    });

    Which translates to:

    dotnet.exe test "CakeCoverageTest.Test/CakeCoverageTest.Test.csproj" --no-build

    It runs .Net Core's built-in testing infrastructure, which natively recognizes XUnit and NUnit.

    The NoBuild = true is because otherwise the dotnet test command will try to build, and I like to let Cake handle my dependencies.  This prevents building multiple times if, for example a high level Deploy task is dependent on both Test and Package, and both of those are dependent on Build.
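    A sketch of that scenario (the Package and Deploy tasks here are hypothetical, not part of this project):

    Task("Package")
        .IsDependentOn("Build")
        .Does(() => { /* package steps */ });

    Task("Deploy")
        .IsDependentOn("Test")
        .IsDependentOn("Package")
        .Does(() => { /* deploy steps */ });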



    In that case Cake is smart enough to only build once if I run Deploy.  This isn't really any different than any "AKE" tool like Make, Psake, Rake, or MS Build, but it's a nice benefit over bash or Powershell scripts where you'd have to use global state.  Task dependency management++

    Getting Coverage


    Before Visual Studio 15.8, getting coverage would have involved switching to the windows-only vstest.console.exe alternative and requiring Visual Studio Enterprise.  Then, it would require adding a test adapter, fixing a bug with the tool path, adding a reference to the Microsoft.CodeCoverage NuGet package, adding a Full line to the main .csproj, and my explaining the new vs the old pdb (Program DataBase) debug information file.

    Fortunately, as of September, there is a new parameter to dotnet test: --collect "Code Coverage".  It's unfortunately still Windows-only, but they have removed the requirement for Visual Studio Enterprise.  Making it cross platform is on the radar, and may even be supported by the time you read this.

    Cake doesn't support the coverage arguments just yet, but with the flexibility of the ArgumentCustomization parameter there's a simple workaround:

    Task("Test")
        .Description("Runs unit tests.")
        .IsDependentOn("Build")
        .Does(() =>
    {
        var testLocation = File("./CakeCoverageTest.Test/CakeCoverageTest.Test.csproj");
        var settings = new DotNetCoreTestSettings {
            Configuration = configuration,
            NoBuild = true,
            ArgumentCustomization = args => args
                .Append("--collect").AppendQuoted("Code Coverage")
                .Append("--logger").Append("trx")
        };
        DotNetCoreTest(testLocation, settings);
    });


    That translates to

    dotnet test "CakeCoverageTest.Test/CakeCoverageTest.Test.csproj" --configuration Release --no-build --collect "Code Coverage" --logger trx

    And with a little luck it should output something like this:


    Starting test execution, please wait...
    Results File: C:\dev\Cake\CakeCoverageTest\CakeCoverageTest.Test\TestResults\Lee_LEE-XPS_2018-10-21_17_34_47.trx
     Attachments:
      C:\dev\Cake\CakeCoverageTest\CakeCoverageTest.Test\TestResults\3ba423f8-2f10-4bc2-8b53-b4fef907369e\Lee_LEE-XPS_2018-10-21.17_34_44.coverage
     Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
    Test Run Successful.


    How awesome is that?!  We're pretty much done.


    Publishing Coverage


    To view the coverage data inside of Visual Studio we sadly still need the Enterprise edition.  But regardless, we can use an Azure DevOps build definition task to pick up and publish the file.  First, the build task:



    There's a Cake task in the marketplace, but the built-in PowerShell task above works just fine too.

    Then all we need is to publish the test results:
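    If you prefer YAML to the designer, the two steps look roughly like this (a sketch, not the exact definition from the screenshots):

    steps:
    - powershell: .\build.ps1 -target=Test
    - task: PublishTestResults@2
      inputs:
        testResultsFormat: 'VSTest'
        testResultsFiles: '**/*.trx'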


    If we run that puppy we should get this:


    Check out that line "Code coverage succeeded" with the "50.00% lines covered" row right up front!  Like butter.

    Summary



    I originally set out to tell a story of woe, and pain, and gnashing of teeth.  Instead I'm happy to tell you what a wonderful world we now live in.  Calculating code coverage is now easy for ASP.Net Core.  Well done Microsoft.  Well done.

    Wednesday, July 18, 2018

    How to Increase Quality with a Code Coverage Hack





    In this post I'll summarize what code coverage is, how it can be abused, but also how it can be leveraged to gently increase design and architecture quality, reduce bug regressions, and provide verifiable documentation.  But first a short story:

    The Hawthorne Effect


    From 1924 to 1932, Western Electric conducted productivity experiments on its workers at the Hawthorne Works factory.  The story goes like this:

    First, they increased lighting and observed that productivity went up.  Enthused, they increased lighting further and productivity went even higher.

    Before reporting the fantastic news that increasing productivity would be cheap and easy, they tried decreasing lighting as a control.  To their horror, instead of productivity decreasing as predicted, it now soared!

    What was going on?!

    The conclusion they eventually reached was that lighting was completely unrelated to productivity and that the workers were more productive the more they felt they were being observed.  This psychological hack has been dubbed The Hawthorne Effect or the observer effect.

    Leverage A Psychological Hack


    If you're wondering what this has to do with software development, the Hawthorne Effect is a tool that can be leveraged by astute managers and team leads to gently increase quality on a team.  Instead of forcing unit testing on possibly reluctant team members, leads can regularly report on the one metric, and as with the Hawthorne Effect, teams will feel the effects of being observed and naturally want to increase their number.

    If it sounds too good to be true keep in mind this is obviously more relevant for newer or less mature teams than highly functioning ones.  Or, perhaps you doubt that quality will increase in conjunction with code coverage.  Before we can get there we should cover (see what I did there) what it is.

    What is Coverage?


    According to Wikipedia

    Coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs.

    Or put another way it's a percentage that shows how many lines of production code have been touched by unit tests.  An example will help.

    Consider this method:

    public static string SayHello(string name)
    {
        if (string.IsNullOrEmpty(name))
        {
            return "Hello Friend";
        }
        else
        {
            return "Hello " + name;
        }
    }

    If you have just a single (XUnit) unit test like this:

    [Fact]
    public void Test1()
    {
        var actual = Class1.SayHello("Bob");
        Assert.Equal("Hello Bob", actual);
    }

    Then it will cover every line of code except for the "Hello Friend" line.

    On C# based projects there's this amazing tool called NCrunch that runs tests continuously.  It calculates the SayHello method as five lines of code.  It shows covered lines with green dots and uncovered lines as white:



    Since four of those lines are touched by tests the result is a code coverage of 4/5 or 80%.
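    Covering that last line takes just one more test, something like this (test name mine):

    [Fact]
    public void SayHello_GivenNoName_GreetsFriend()
    {
        var actual = Class1.SayHello(null);
        Assert.Equal("Hello Friend", actual);
    }

    With that, all five lines are covered and the metric reads 100%.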



    As a quick aside I find continuous testing tools like NCrunch and its JavaScript cousin Wallaby.js to be extremely motivating -- fun even.  Doesn't that white dot just bug the OCD in you?  They're also a huge productivity enhancer thanks to their nearly instantaneous feedback.  And another bonus: they also report coverage statistics.  If you're looking to increase quality on a team consider continuous testing tools, they pay for themselves quickly.

    How to Cheat


    If you're concerned that sneaky developers will find some way to cheat the number, make themselves look good, and not increase quality at all, you're not entirely wrong.  As with any metric, coverage can be cheated, abused, and broken.  For one thing, I've known at least one developer (not me I swear) who wrote a unit test that used reflection to loop through every class and every property to ensure that setting each property and subsequently getting it returned the value that was set.

    Was it a valuable test?  Debatable.  Did it increase code coverage significantly to make one team look better than the others to a naive observer?  Absolutely.  

    On the opposite side of the spectrum consider this code:

    public bool IsValid()
    {
        return Regex.IsMatch(Email,
            @"^(?("")("".+?(?<!\\)""@)|(([0-9a-z]((\.(?!\.))|" + 
            @"[-!#\$%&'\*\+/=\?\^`\{\}\|~\w])*)" +
            @"(?<=[0-9a-z])@))(?(\[)(\[(\d{1,3}\.){3}\d{1,3}\])|" + 
            @"(([0-9a-z][-0-9a-z]*[0-9a-z]*\.)" + 
            @"+[a-z0-9][\-a-z0-9]{0,22}[a-z0-9]))$");
    }

    A developer could get 100% code coverage for that method with a single short test.  Unfortunately, that one-line method has an insane amount of complexity and should actually be covered by perhaps hundreds of tests, not one of which will increase the code coverage metric beyond the first.
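    For instance, a single test along these lines would light up every line of IsValid (the User class is hypothetical; assume it exposes the Email property the method reads):

    [Fact]
    public void IsValid_WithSimpleEmail_ReturnsTrue()
    {
        var user = new User { Email = "bob@example.com" };
        Assert.True(user.IsValid());
    }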

    The thing about cheating code coverage is that even if developers are doing it, they're still writing tests.  And as long as they keep a continued focus on writing tests they'll necessarily begin to focus on writing code that's testable.  And as Michael Feathers, author of Working Effectively with Legacy Code, points out in one of my favorite presentations, The Deep Synergy between Testability and Good Design,

    Testable code is necessarily well designed

    Go watch the video if you don't believe me (or even if you do and haven't watched it yet). 





    The trick, however, is to keep the focus on code coverage over time, not just as a one time event.

    The Ideal Number


    In order to maintain a focus on code coverage over time, perhaps setting a target goal would be a good approach.  I'm usually a little surprised when I ask "What's the ideal coverage number?" at local user groups and routinely hear answers like 80%, 90%, or even 100%.  In my view the correct answer is "better than last sprint" -- or at least no worse.  Or in the immortal words of Scott Adams (creator of Dilbert): goals are for losers, systems are for winners.

    To that end, I love that tools like VSTS don't just report code coverage, they show a chart of it over time.  But while incorporating coverage in the continuous integration process is a great starting point, as it provides a single source of truth, great teams incorporate the number into other places.  

    I've written about the importance of a retrospective before, but I feel it's also the perfect venue to leverage the Hawthorne Effect to bring up the topic of coverage on a recurring basis.  The retrospective can also be an excellent opportunity for positivity.  For instance, a code coverage of 0.02% may not sound great, but if coverage was 0.01% the prior sprint that could legitimately show up on a retrospective under "what we did well" as "doubled the code coverage!"

    Summary


    Even if a team abuses the code coverage metric to some degree, a sustained interest in testing through ongoing reporting can gradually and incrementally allow a team to reap the benefits of unit testing.    As a team writes more tests their code will become more testable, their testable code will become more loosely coupled and better architected, their bugs will regress less often, they'll end up with verifiable documentation, and small refactorings will become more common because they are safer and easier.  In short the team will increase in maturity and their product will increase in quality.

    As always if you agree or disagree I'd love to hear about it in the comments or on twitter.