Saturday, January 24, 2015

Is Async/Await Multithreaded?

I got into an interesting argument, er, conversation with a co-worker last week about whether async/await was multi-threaded.  He thought I was bonkers for suggesting it was not.  So I did some research and came up with the following response.

First off, obviously if you're doing async/await it's probably because you want some multi-threaded behavior like network IO or file IO, where some other thread does some work for you while freeing your UI thread to handle UI stuff (or, in the case of IIS, releasing your thread to handle other incoming requests, thus giving you better throughput).  So my co-worker was right that 99% of the time async/await will probably involve multiple threads.

However, if async/await were multi-threaded by its very nature, then it should be impossible to write a program using async/await that was single-threaded.  So let's try to write a method that we can prove is single-threaded that also uses async/await.  How about this:

public async void HandleClickEvent()
{
    await Task.Yield();
    // an infinite loop disguised well enough that the compiler won't flag it
    int j = 1;
    while (j != 0)
    {
        if (j == -1) j++;
        j++;
    }
    await Task.Yield();
}

It took some work to come up with an infinite loop that looked normal to the compiler, but that's what the while loop is doing.  If async/await were multi-threaded, then we might think that the UI thread would hit the first Task.Yield and spawn off a new thread.  Then the infinite loop would be run on a new thread and the UI would work great, right?

If we actually run that code in a Windows Store app the UI freezes.  Why?  Because, according to MSDN:

The async and await keywords don't cause additional threads to be created. Async methods don't require multithreading because an async method doesn't run on its own thread. The method runs on the current synchronization context and uses time on the thread only when the method is active. You can use Task.Run to move CPU-bound work to a background thread, but a background thread doesn't help with a process that's just waiting for results to become available.
So when I claimed async/await wasn't multi-threaded, that's what I was thinking of.  What's basically happening is that the UI thread is a message pump that processes events.  When you await within the UI thread's synchronization context, you yield control to the UI thread's message pump, which allows it to process UI events and such.  When your awaited call completes, it posts a message back to the UI thread's message pump, and the UI thread gets back to your method when it's done with anything else it's working on.
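
If the goal really were to keep the UI responsive, the fix is the one the MSDN quote above suggests: explicitly move the CPU-bound loop onto a thread pool thread with Task.Run.  Here's a minimal sketch of what that might look like (the method name is mine, for illustration):

public async void HandleClickEventResponsively()
{
    // explicitly move the CPU-bound loop to a thread pool thread
    await Task.Run(() =>
    {
        int j = 1;
        while (j != 0)
        {
            if (j == -1) j++;
            j++;
        }
    });
    // execution resumes here, back on the UI thread's synchronization context
}

Now the infinite loop really does run on another thread, and the await merely signs the method up to resume on the UI context if the Task ever completes (which, here, it won't).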

But after some research I realized that I didn't know nearly enough about synchronization contexts, so I spent the morning reading about them.  I finally found someone who has a great description of how all this works under the covers, and if you get the chance I highly recommend reading C# MVP Jerome Laban's awesome series C# 5.0 Async Tips and Tricks.

In particular, one thing I learned is that if you start a new Task, you throw away the UI thread's synchronization context.  If you await when there is no synchronization context, then by default WinRT will give you some random thread from the thread pool, which may be different after each await.  In other words, if you do this:

public async Task RefreshAvailableAssignments()
{
    await Task.Run(async () =>
    {
        Debug.WriteLine(Environment.CurrentManagedThreadId);
        await Task.Yield();
        Debug.WriteLine(Environment.CurrentManagedThreadId);
    });
}

You will (usually) get a different thread after the yield than you did before it.  That can lead to trouble if you aren't careful and aren't aware of it.  It can be especially dangerous if you're deep in the guts of something and you aren't 100% sure of whether you are being called from the UI thread or from some other thread.  It can be particularly bad if someone after you decides to put your code into a Task.Run and you were dependent upon the UI thread's synchronization context without being aware of it.  Nasty, huh?
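
If you suspect you're in that situation, one cheap defensive check is to look at SynchronizationContext.Current before doing anything that assumes the UI thread.  A minimal sketch, with a method name of my own invention:

public async Task DoSomethingThatAssumesUiThread()
{
    // null means we're on a bare thread pool thread (e.g. inside Task.Run),
    // so code after an await may resume on a different thread entirely
    if (SynchronizationContext.Current == null)
    {
        Debug.WriteLine("Warning: no synchronization context captured");
    }

    await Task.Yield();

    Debug.WriteLine(Environment.CurrentManagedThreadId);
}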

It makes me like more and more the idea introduced in Jason Gorman's post Can Restrictive Coding Standards Make Us More Productive?, where he describes ways of discouraging team members from starting new threads (or Tasks on my project, since WinRT doesn't give us Threads) unless there is a really good reason for doing so.

It goes back to a most excellent statement my co-worker made:
Async/await is very powerful, but we all know what comes with great power.

So that was fun.  I look forward to having lots more constructive arguments, er, conversations like this one in the future.  :)

Wednesday, January 7, 2015

The TFS 2013 + Git API Disaster

Don't get me wrong, the addition of Git to TFS is huge, and it actually removes all of my previous complaints about the platform.  Sadly, the APIs for it aren't up to par yet, the documentation is poor, and the technology is so young that the Internet is completely silent on how to programmatically accomplish just about anything with it.



So after about 24 total hours of wasted time searching the Internet, decompiling source, watching network traffic in Fiddler, and triggering builds, I have some working code (and a bit of a rant) that I wanted to share to help fill out the Internet (because clearly it doesn't contain enough ranting; but at least this one has working code).

API #Fail


As my regular readers no doubt know, I occupy my spare time running Siren of Shame, a build monitor, USB siren, and CI gamification engine.  When a continuous integration build is triggered, the software needs to determine which check-in triggered the build and give that user credit for it (or exude a little light-hearted shame on failure).

For every other major CI server in the world this is pretty easy.  TFS 2013 + Git?  Not so much.  If it worked the way it should you could simply do this:

var query = _buildServer.CreateBuildDetailSpec(buildDefinitionUris);
query.MaxBuildsPerDefinition = 1;
query.Status = Microsoft.TeamFoundation.Build.Client.BuildStatus.All;
query.QueryOrder = BuildQueryOrder.FinishTimeDescending;
// this gets changesets (TFVC) as well as commits (Git)
query.InformationTypes = new[] { "AssociatedChangeset", "AssociatedCommit" };

var buildQueryResult = _buildServer.QueryBuilds(query);

var buildDetail = buildQueryResult.Builds[0];

var commits = buildDetail.Information.GetNodesByType("AssociatedCommit");

And it wouldn't even require a second web request to get the triggering commit.

Sadly, the above only works for completed builds.  In-progress builds return nothing for AssociatedCommit.

That's the older, strongly typed API that requires referencing Microsoft.TeamFoundation.Build.Client.dll (which you can find in the GAC).  With TFS 2013 there is now also a TFS Web API.  Sadly, even the equivalent new Web API methods have the same limitation.  For example, if build 5 were in progress, then this:

GET http://myserver:8080/defaultcollection/project/_apis/build/builds/5/details?api-version=1.0&types=AssociatedCommit&types=AssociatedChangeset

Wouldn't return the associated commit until the build completed.

So, for in-progress builds you're stuck doing a second query.
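
The result is glue code shaped something like the following sketch, where ToCheckinInfo and GetLatestCommitViaSecondQuery are placeholder names of mine; finding a sane implementation for that second query is the subject of the rest of this post:

// use the associated commit when it's available, fall back to a second query otherwise
var commitNodes = buildDetail.Information.GetNodesByType("AssociatedCommit");
var checkinInfo = commitNodes.Count > 0
    ? ToCheckinInfo(commitNodes[0])            // completed build: the commit came along for free
    : await GetLatestCommitViaSecondQuery();   // in-progress build: no associated commit yet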

More API #Fail


Ideally at this point you would use the powerful and convenient QueryHistory() method.  Using it looks something like this:

var workspaceServerMappings = _buildDefinition.Workspace.Mappings
    .Where(m => m.MappingType != WorkspaceMappingType.Cloak)
    .Select(m => m.ServerItem);
var workspaceMappingServerUrl = workspaceServerMappings.First();
// p.s. GetService() is a dumb way to get services, why not just make
//     it dynamic, it's just as undiscoverable
var versionControlServer = _tfsTeamProjectCollection.GetService<VersionControlServer>();
// notice the workspace server mapping url is a parameter. This facilitates one web call
var changesets = versionControlServer.QueryHistory(workspaceMappingServerUrl,
    version: VersionSpec.Latest,
    deletionId: 0,
    recursion: RecursionType.Full,
    user: null,
    versionFrom: null,
    versionTo: VersionSpec.Latest,
    maxCount: 1,
    includeChanges: true,
    slotMode: false,
    includeDownloadInfo: true);

Sadly, this only works for changesets, in other words traditional Team Foundation Version Control (TFVC) check-ins.  It doesn't work for Git, despite the fact that what we want to accomplish is so very, very similar (i.e. couldn't they just throw in an overload that asks which branch you're querying against?).

But Wait, There's More


As far as I can tell there is only one remaining option: the new TFS REST API.

There are two ways to use it.  The documentation says to use an HttpClient, but there's also a nice convenience wrapper that you can get by adding a reference to Microsoft.TeamFoundation.SourceControl.WebApi.dll, which you can find in the GAC.  Using this approach you can write something like this:

var vssCredentials = new VssCredentials(new WindowsCredential(_networkCredential));
var client = new GitHttpClient(projectCollectionUri, vssCredentials);

// unnecessary web request #1: get the list of all repositories to get our repository id (guid)
var repositories = await client.GetRepositoriesAsync();

// sadly the workspace server mapping in the build definition barely resembles the repository Name, thus the EndsWith()
var repository = repositories.FirstOrDefault(i => workspaceMappingServerUrl.EndsWith(i.Name));
var repositoryId = repository.Id;

// unnecessary web request #2: the workspace server mapping told us which server path triggered the build, but it #FAIL'ed to tell us which branch, so we have to scan them all!!!
var branches = await client.GetBranchRefsAsync(repositoryId);

List<GitCommitRef> latestCommitForEachBranch = new List<GitCommitRef>();
foreach (var branchRef in branches)
{
    // branchRef.Name = e.g. 'refs/heads/master', but GetBranchStatisticsAsync() needs just 'master'
    var branchName = branchRef.Name.Split('/').Last();
    // Ack! Unnecessary web requests #3 through (number of branches + 2)!!!
    // p.s. repositoryId.ToString()? Can we please be consistent with data types!?
    var gitBranchStats = await client.GetBranchStatisticsAsync(repositoryId.ToString(), branchName);
    latestCommitForEachBranch.Add(gitBranchStats.Commit);
}

var lastCheckinAcrossAllBranches = latestCommitForEachBranch.Aggregate((i, j) => i.Author.Date > j.Author.Date ? i : j);

I've documented everything I hate about this in the comments, but the most important point is this: the workspace mapping API for build definitions (which says which folder(s) trigger the build) fails to include a branch property.  This is true even for the Web APIs.  For instance:

http://tfsserver:8080/tfs/DefaultCollection/_apis/build/definitions/1?api-version=1.0

Fails to tell us anything about the workspace mappings.  This API omission forces you to query all branches, which requires lots of web requests.  Specifically, it requires (number of pushed branches + 2) web requests to find the latest check-in across all branches.  This could be insanely expensive, and it might not even be correct in some circumstances.

Is There No Better Way?


As nice as the strongly typed API approach sounds, it turns out to be missing a number of APIs that you can get to if you use a WebClient to request them manually.  Specifically, if you use the web API directly you can issue a single request against the commits endpoint to get the latest commit across all branches.

Sadly, the authentication via WebClient is a bit tricky and is dependent upon whether you are using a locally hosted TFS or Visual Studio Online.  For this reason you're better off with some helper methods:

/// <summary>
/// This method handles requests to the TFS api + authentication
/// </summary>
public async Task<T> ExecuteGetHttpClientRequest<T>(string relativeUrl, Func<dynamic, T> action)
{
    using (var webClient = GetRestWebClient())
    {
        string fullUrl = Uri + relativeUrl;
        var resultString = await webClient.DownloadStringTaskAsync(fullUrl);
        dynamic deserializedResult = JsonConvert.DeserializeObject(resultString);
        return action(deserializedResult.value);
    }
}

public WebClient GetRestWebClient()
{
    var webClient = new WebClient();
    if (MyTfsServer.IsHostedTfs)
    {
        SetBasicAuthCredentials(webClient);
    }
    else
    {
        SetNetworkCredentials(webClient);
    }
    webClient.Headers.Add(HttpRequestHeader.ContentType, "application/json; charset=utf-8");
    return webClient;
}

/// <summary>
/// Using basic auth via network headers should be unnecessary, but with hosted TFS the NetworkCredential method
/// just doesn't work.  Watch it in Fiddler and it just isn't adding the Authentication header at all.
/// </summary>
private void SetBasicAuthCredentials(WebClient webClient)
{
    var authenticationHeader = GetBasicAuthHeader();
    webClient.Headers.Add(authenticationHeader);
}
public NameValueCollection GetBasicAuthHeader()
{
    const string userName = "username";
    const string password = "password";
    string usernamePassword = Convert.ToBase64String(System.Text.Encoding.ASCII.GetBytes(string.Format("{0}:{1}", userName, password)));
    return new NameValueCollection
    {
        // note the space after "Basic" -- without it the header is malformed
        {"Authorization", "Basic " + usernamePassword}
    };
}
private void SetNetworkCredentials(WebClient webClient)
{
    var networkCredentials = new NetworkCredential("username", "password");
    webClient.UseDefaultCredentials = false;
    webClient.Credentials = networkCredentials;
}
Wow.  That's a lot of boilerplate setup code.  Now to actually use it to retrieve check-in information associated with a build:

// Get all repositories so we can find the id of the one that matches our workspace server mapping
var repositoryId = await _myTfsProject.ProjectCollection.ExecuteGetHttpClientRequest<Guid?>("/_apis/git/repositories", repositories =>
{
    foreach (var workspaceMappingServerUrl in workspaceMappingServerUrls)
    {
        foreach (var repository in repositories)
        {
            string repositoryName = repository.name;
            if (workspaceMappingServerUrl.EndsWith(repositoryName))
            {
                return repository.id;
            }
        }
    }
    return null;
});
// now get commits for the repository id we just retrieved.  This will get the most recent across all branches, which is usually good enough
var getCommitsUrl = "/_apis/git/repositories/" + repositoryId + "/commits?top=1";
var commit = await _myTfsProject.ProjectCollection.ExecuteGetHttpClientRequest(getCommitsUrl, commits =>
{
    var comment = commits[0].comment;
    var author = commits[0].author.name;
    return new CheckinInfo
    {
        Comment = comment,
        Committer = author
    };
});
return commit;

Is this absolutely terrible?  Perhaps not.  But it is a lot of code to do something that used to be quite simple with TFVC and is quite simple with all other build servers (or at least those I have experience with, specifically: Hudson, Jenkins, TeamCity, Bamboo, CruiseControl, and Travis).

Summary

So that's my story and I'm sticking to it.  If any readers find a better approach, please post in the comments, send me a note at @lprichar, issue a pull request against my CheckinInfoGetterService.cs (where you can find the full source for this article), and/or comment on the SO article where I originally started this terrible journey.  Hopefully this will save someone else some time -- if not in the solution, perhaps in the following advice: if you value your time, avoid the TFS Git API.

Thursday, October 2, 2014

Cowboys vs Perfectionists vs … Body Builders

I've worked with many developers over my fifteen years of professional software development.  Sometimes I just can't help but classify them.  But I bet you've noticed this too: there tend to be two main types.

Cowboys


If you've never heard of unit testing or avoid it because it slows you down, you're probably a cowboy.  Cowboys can prototype concepts to prove viability before you've finished describing the idea.  They can hit unrealistic deadlines and still have plenty of time to gold plate.  A cowboy is guaranteed to get you to market before your competition.  But God help you if you put their code in the hands of end users: you'll have more bugs than a bait shop.

Perfectionists


If your goal is 100% code coverage, you only code TDD, or you spend more time refactoring than producing customer-centric code, you might be a perfectionist.  Perfectionists write pristine, maintainable, refactorable, low-defect code.  Applications built by perfectionists will stand the test of time.  The problem is it'll take them twice as long to get to production as it should, and if their project doesn't run out of money first, their competition is liable to beat them to market, invalidating all that beautiful code.

Body Builders


Quick quiz: how do the most successful body builders maximize muscle mass while minimizing body fat?  Steroids of course.  Ok, how do they do it legally?  Maximizing muscle mass takes lots of lifting, lots of protein, and lots of calories.  Minimizing body fat requires cardio work and dieting.  But dieting and consuming calories are mutually exclusive.  So the answer is that body builders alternate between periods of cutting (dieting) and bulking (eating and lifting).  Neither activity alone will allow them to reach their goals.

Cowboy + Perfectionist = Cowfectionist?

So what do body builders have to do with software developers?  A fantastic software developer I once worked with used to say he felt software developers should be paid based on the lines of code they delete.  I always enjoyed his deleting-code theory (clearly he was more of a perfectionist than a cowboy), but it struck me recently that deleting (refactoring) is to writing code as cutting is to bulking.  In other words, if you try to do one exclusively, or (worse) both simultaneously, you're setting yourself up for failure.

Summary

We all have tendencies toward cowboy or perfectionist, but to become the most successful software developers our goal should be to alternate between cutting and bulking, refactoring and producing.  Obtaining a high-muscle, low-fat code base that gets to market on time with high maintainability requires neither a cowboy nor a perfectionist: it requires avoiding extremism, embracing moderation, and recognizing the strengths of those different from ourselves.  If we can find that balance we can truly accomplish great things.

Tuesday, September 9, 2014

My Humbling Week as a Machinist

A terrible ripping noise followed by a clang and a bang echoed through the metal shop.  Everyone turned to look at me.

The bang could have been my last heartbeat as I destroyed a machine that costs over $10,000.  Or it could have been a razor-sharp bit cracking in half, spinning through the air, and deflecting off my skull.

In reality it was the sound of the machine making an emergency stop as I hit the big red button reserved for emergencies.

Developer to Machinist

I got to this point with a goal, a chunk of time, insufficient respect for a new field, and perhaps a touch of hubris.

The goal seemed simple enough: build an aluminum mold for an injection molding machine to help mass produce a part I needed for my hobby.

I was to accomplish this goal at TechShop, a maker group that provides access to tools, information, and an awesome community.  And I was to accomplish it in a week, during a vacation.

The general process I used was:

  1. Design the part in Autodesk Inventor
  2. Prototype on a 3D printer
  3. Design the mold in Autodesk Inventor
  4. Design the machining (CAM) operations in an Inventor add-on called Inventor HSM
  5. Export the CAM operations to G-code and import them into a Tormach CNC milling machine
  6. Run the cutting operations on a couple of pieces of aluminum (a top and bottom)
  7. Run the injection molder
Autodesk Inventor

Autodesk Inventor is an amazing CAD package that normally costs over $7,000, but as a TechShop member not only do you get access to a fully stocked metal shop and just about every piece of machinery imaginable, you also get a copy of Autodesk Inventor.  Fortunately the help documentation and tutorials are excellent, and designing a simple part wasn't too hard.


From there I used a MakerBot Replicator 2 to 3D print a prototype and iterate a couple of times on the design.  The part, in case you're wondering, will be a cap for the LED column on a Siren of Shame, but it's really just an excuse for me to fiddle around and learn machining.




Mold Design

With a TechShop class on mold design I learned the key pieces of information I needed to make the mold, including how to:

  • Create a sprue that the molten plastic will be injected into
  • Multiply a single part multiple times (and maintain the link to your part so you can theoretically change it)
  • Create a runner system to distribute molten plastic to the parts
  • Build gates to inject the molten plastic into the parts
  • Design vents that allow air, but not molten plastic, to escape
  • Add cold slug wells to keep gases and sediment out of your mold parts
  • Add registration pins that keep the two halves of the mold (cavity and core) aligned
  • Design your part to be easily injectable (add draft angles and fillets to every surface)


Inventor HSM

Inventor HSM is an amazing plug-in for Autodesk Inventor that allows creation of 2.5D and 3D CAM operations for CNC milling machines directly from your Inventor models.  Unfortunately it's pretty new, the documentation is a little weak, and there aren't any TechShop classes on it yet.  Worse, if you don't know what you're doing, many of the default cuts won't be correct, to potentially dangerous effect.


Milling

Everything is so nice and easy right up until you have razor-sharp cylindrical blades spinning at up to 5,000 RPM and automatically moving along a path that you may or may not have defined correctly.

I ran my first cuts in foam to validate my approach, gain practice with the Tormach CNC milling machine, and get experience with the wide range of bits I would need (including traditional end mills for surfacing, ball end mills for runners, an end mill with a taper for draft angles, and a couple of micro bits for air channels and such).


Unfortunately, my confidence increased after I ran through my cuts in foam.  Foam, it turns out, is completely different from aluminum.  Here's a video of my first disaster.




The surfacing was fine, but I was taking cuts that were too aggressive.  After a kind "Dream Consultant" at TechShop explained my mistake, I was able to correct it like so (notice the difference in how the machine sounds):


Unfortunately I didn't learn that lesson until after breaking a bit, lightly injuring a collet (bit holder), and embarrassing the heck out of myself in front of a small audience.

Lessons Learned

Throughout my milling time (30+ hours) I made numerous mistakes.  Anyone with woodcutting experience (or machining common sense) will probably find these mistakes amusing.  I hope others may find this list constructive:


  • End mills only cut up through where the flutes end.  You can't run a mill into aluminum above that line without bad results.  And for goodness sake don't run the collet (the thing that holds the bit) into your aluminum or even worse things happen.
  • When bits break they go flying really fast and scare the crap out of you and it's loud and everyone in the room comes over to publicly shame you.
  • Even if you're cutting with the blades of your bit, you should only mill a small height of aluminum at a time (1/2 the bit width max), otherwise the mill makes a bad cut and terrible noises, and you risk breaking the bit.
  • End mills (the ones with an L shape) aren't for drilling.  If you do this then minuscule aluminum shreds will bind to the bit and fling around and create an invisible shield that blocks the flow of coolant, generating smoke and risking overheating and damage to your bit and the aluminum.
  • Design everything in your part with specific bits in mind.  It's way too easy to design something on the computer that will be nearly impossible to machine.  Going back and modifying your part after you've designed your cuts in practice is extremely hard.
  • Getting your X, Y, and Z zero values correct on the machine is kinda important.  If you get Z wrong, even a few hundredths of an inch too high, your cuts will be in air, and you could waste a lot of machine time and not be able to tell because the area is flooded with coolant.


But as the kind staff reminded me, we are here to learn and we all have to start somewhere.

The good news is I never damaged the machine (it's actually quite forgiving), I only broke one bit, and most importantly I didn't (significantly) injure myself or anyone else.

Summary

In the end I successfully milled my pieces, finished them, ran a couple of runs on the injection molder (still some work to do there as you can see below), and ultimately accomplished my goal.



All in all, trying on a completely different hat for a week was both fun and rewarding.  Building something real was a blast.  I learned a ton.  But to tell you the truth, as thrilling as it was to accomplish my goal, I'm just as thrilled to be headed back to a job where a screw-up won't injure anyone or damage a ten-thousand-dollar machine.  I'm happy to be headed back slightly smarter, happier, and more humble.

Tuesday, July 15, 2014

Raspberry Pi Powered Siren of Shame via Node.js

I picked up a Raspberry Pi this week and had fun connecting it to a Siren of Shame.  Naturally the Model B+ with 4 USB ports came out the day after mine arrived.  Regardless, I had fun setting it up.  If you're interested in trying it too here's what to do.


Getting Started


You'll need a Siren of Shame device and a Raspberry Pi that's running and connected to the Internet.  Element14 has a great set of getting-started videos if, like me, you're completely new to the Raspberry Pi.  I used the Raspbian OS, but theoretically it shouldn't matter which OS you use.

libusb


Libusb provides an API for applications to interface with USB devices, including Human Interface Devices (HID) such as the Siren of Shame.  To install libusb, use apt-get, the Debian package manager.

If this is a new Raspberry Pi with a fresh install of Linux, then you will need to update your list of available packages with:

sudo apt-get update

Follow that up with:

sudo apt-get install libusb-dev

You should now be able to run lsusb from the command line to list USB devices.  Plug in a Siren of Shame, run lsusb, and you should see a device with an ID of 16d0:0646 called GrauTec.  It should look like:

lsusb

...
Bus 001 Device 011: ID 16d0:0646 GrauTec

If your device doesn't show up, it could be an issue with the cable.  Andy Lowry, who has an excellent blog post where he lights up his Siren of Shame when freight trains are near, reports that he had to try several cables before finding one that worked.

Node.js


Thanks exclusively to Joe Ferner and his node-sos-device project, we have a solution for connecting Sirens of Shame to Linux using Node.js.  To install Node.js it should be as easy as:

sudo apt-get install nodejs
sudo apt-get install npm

Incidentally, rather than using node-sos-device directly, we will be using Joe's higher-level node-sos-client, which knows how to monitor Jenkins and Bamboo CI servers.

Node-sos-client


If you haven't configured your device to work with git, you could do it the right way with SSH and generate an SSH key, or you could just:

git clone https://github.com/AutomatedArchitecture/node-sos-client.git

and

cd node-sos-client

Next you'll need to download all Node dependencies.  If this is a fresh install you'll need to tell the node package manager (npm) where to retrieve dependencies from:

npm config set registry http://registry.npmjs.org/

Now you can install all dependencies for node-sos-client by running

npm install

Upgrading Node


For some fortunate users (Andy Lowry, for one) installing node via apt-get works fine.  If, however, you get an error about node being out of date, you'll have to uninstall it, download a newer version, and update your path.

First, to uninstall the old version of node:

sudo apt-get remove npm
sudo apt-get remove node

Now download and unpack:

cd ~
wget http://nodejs.org/dist/v0.10.2/node-v0.10.2-linux-arm-pi.tar.gz
tar -xvzf node-v0.10.2-linux-arm-pi.tar.gz

To add it to your path:

nano .bashrc

And add the following two lines at the bottom:

NODE_JS_HOME=/home/pi/node-v0.10.2-linux-arm-pi
PATH=$PATH:$NODE_JS_HOME/bin

If you restart your command prompt and type node --version you should get v0.10.2.

Now retry npm install.

cd node-sos-client
npm install

And you should be good to go.

Running node-sos-client


First make a copy of the default configuration file:

cp config.json.example config.json

We'll configure it correctly later.  Next pick up the dependency on node-sos-device by running:

npm install sos-device

To run the app you should be able to run

sudo node build/sos-client.js

However, if you had to install node with the wget method, then you'll need to run

sudo $NODE_JS_HOME/bin/node build/sos-client.js

If you're lucky you'll see the app print out the device stats as JSON and a configuration error, something like:

deviceInfo: { version: 1,
  hardwareType: 1,
  hardwareVersion: 1,
  externalMemorySize: 0,
  audioMode: 0,
  audioPlayDuration: 0,
  ledMode: 0,
  ledPlayDuration: 0,
  ledPatterns:
   [ { id: 2, name: 'On/Off' },
     { id: 3, name: 'Fade' },
     { id: 4, name: 'Chase' },
     { id: 5, name: 'Fade Chase' } ],
  audioPatterns:
   [ { id: 1, name: 'Sad Trombone' },
     { id: 2, name: 'Ding!' },
     { id: 3, name: 'Plunk' } ] }
Failed to poll: bamboo0 { [Error: getaddrinfo ENOTFOUND] code: 'ENOTFOUND', errno: 'ENOTFOUND', syscall: 'getaddrinfo' }

However, if you have a cable that doesn't work well, or are connecting through a non-powered USB hub you may see:

Error: usb_detach_kernel_driver_np: -113 could not detach kernel driver from interface 0: No route to host

In this case try experimenting with the way you connect the device to the Pi.

Configuring the Connection

Today node-sos-client can connect to two CI servers: Bamboo and Jenkins.  To connect to Jenkins, update the config file to something like:

{
  "builds": [
    {
      "type": "jenkins",
      "config": {
        "url": "http://127.0.0.1/jenkins/api/json/",
        "username": "[username]",
        "password": "[password]"
      }
    }
  ]
}

And you're done.  With any luck running sudo node build/sos-client.js will light the siren and sound the speaker on initial connection, and whenever the build breaks.

Summary


I hope you've enjoyed this and are now on your way to terrorizing your build-breaking colleagues, even when you're not at the office.  Enjoy!