Running JS unit tests in Visual Studio and on TeamCity

Getting set up to run JS unit tests locally (via Visual Studio with Resharper):

  1. Download PhantomJS
  2. Extract the contents of the zip somewhere
  3. Configure ReSharper: Options -> Tools -> Unit Testing -> JavaScript Tests. Set "Run Tests With" to PhantomJS, and set the path to the PhantomJS executable (bin/phantomjs.exe under wherever you put PhantomJS)
  4. You can now right-click and run JS unit tests just as you would for C# ones.

Configuration info

chutzpah.json configures test runs on TeamCity and code coverage reporting. Any libraries you reference (e.g. jQuery) should be excluded from code coverage, as should all the test files in the tests project (the configured wildcard should be enough to do this).
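As a sketch of what that looks like (the framework choice and exclusion patterns below are illustrative, not our actual project layout – see the Chutzpah settings file documentation for the full schema):

```json
{
    "Framework": "jasmine",
    "EnableCodeCoverage": true,
    "CodeCoverageExcludes": [
        "*\\Scripts\\jquery*.js",
        "*\\*.tests.js"
    ]
}
```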

TeamCity configuration

Tests are run via Chutzpah, which is configured as a NuGet package for the solution and should be pulled down automatically.

Command line build step, Executable with parameters:

Command executable: \Source\packages\Chutzpah.4.0.3\tools\chutzpah.console.exe

Command parameters: \Source\Tests\UnitTests\UI\Website.JavaScript.Unit.Tests\ /teamcity /coverage

Test results show up in the normal way under the Tests tab.

Code coverage report: under General Settings, add an artifact path for _Chutzpah.coverage.html; this will then show up on the build page as a tab called “JS Code Coverage”.

8 more bytes (followup)

So it looks like with the release of OpenBSD 5.3 the vr(4) driver has added support for baby jumbo frames.

“Baby jumbo frames supported in vr(4) and sis(4), useful for e.g. MPLS, vlan(4) tag stacking (QinQ) and RFC4638 pppoe(4)”

This is excellent: it means there’s no more need to compile a custom kernel 🙂 Makes me wish I’d submitted a patch now…

8 more bytes!

I was recently fortunate enough to move into a place where I could get VDSL (BT’s FTTC product, resold through AAISP). This is a vast improvement over conventional ADSL in that more of the path between you and the Interwebs is fibre rather than copper.

The service is provided with a modem, which you connect through to your ISP using PPPoE. This supports something called “baby jumbo frames” (RFC4638), which boosts the MTU for your PPP connection from the usual 1492 bytes to 1500 bytes, meaning it can carry full-sized 1500 byte IP packets. It also means that the interface hosting the PPP connection needs to support an MTU of 1508 bytes (since PPPoE encapsulation has an overhead of 8 bytes).

This is a good thing, especially in the IPv6 world where routers don’t fragment packets and you have to rely on ICMPv6 to negotiate the end-to-end MTU (in a world where it’s not unknown for people to disable this mechanism…)

I currently use an Alix 2d3 board running OpenBSD for my router. This is a great little machine with two 10/100 Ethernet interfaces (vr(4)), and it can easily cope with the 40Mb/10Mb FTTC service. I figured I’d try to get RFC4638 working on my connection.

To do this I needed to take care of two things.

1) Get RFC4638 support in pppoe(4)
2) Get the vr(4) interfaces to support 1508 byte MTU

#1 is easy: simply upgrade to OpenBSD 5.2.

I did this by installing OpenBSD 5.2 onto a virtual machine, configuring it correctly and then copying the disk onto the CF card which the router uses as its “hard disk”. The latter involves formatting the CF card (fdisk -i, disklabel -e, then newfs -O2 for each partition) and using dump/restore to copy the filesystems from the virtual machine to the CF card. Finally, I followed some of the instructions from the “Restoring from tape” section of the OpenBSD documentation to install the boot block.

#2 is a little less straightforward. The vr(4) driver does not support jumbo frames; the maximum MTU is 1500 bytes, and we need 8 more bytes out of it. From reading around on the subject, it looked like the NIC can cope with slightly larger packets for VLAN-related purposes (“VLAN long frames support (1518+4 bytes)”).

So since I don’t care about VLAN support, why not try hacking the driver to see if I can simply boost the MTU and hope it works?

Well the changes are fairly easy to make, so I gave it a go and it appears to work perfectly.

The files to change (assuming you’ve got the kernel source in /usr/src/sys/) are the vr(4) driver source and its header:

    sys/dev/pci/if_vr.c
    sys/dev/pci/if_vrreg.h

In the first file, find the vr_attach() function and add:

ifp->if_hardmtu = 1508;

and comment out:

// ifp->if_capabilities |= IFCAP_VLAN_MTU;

(Just to hammer home the point that this is a hack and thus you probably shouldn’t try to use VLANs!)

In the second file, change the following definition (this value may be a little high; leaving it at the previous default got me up to a 1506 byte MTU fine, but it was unstable beyond that, so feel free to experiment with lower numbers):

#define VR_RXLEN  1548

And that’s it! Compile a new kernel with these changes and boot from it. You’ll be able to set the MTU on your vr(4) interfaces to 1508 (I’ve actually tested higher MTUs, but since I have no need for them I figure 1508 is a sensible limit – there’s really little use for anything higher) and thus your pppoe(4) connection to 1500.
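For reference, the resulting interface configuration looks something like this (the interface name and ISP credentials are placeholders – check hostname.if(5) and pppoe(4) for the exact syntax on your release):

```
# /etc/hostname.vr0 -- physical interface carrying the PPPoE session
mtu 1508 up

# /etc/hostname.pppoe0 -- the PPP connection itself, full 1500 byte MTU
inet 0.0.0.0 255.255.255.255 NONE mtu 1500 \
        pppoedev vr0 authproto chap \
        authname 'user@isp.example' authkey 'secret' up
```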

I’ve tested this fairly simply using “ping -Ds 1472 somehost.ontheinternet” (-D sets the don’t fragment bit, and 1472 is the highest payload you can cram into a single ICMP packet with a 1500 byte MTU maximum (1472 payload + 8 bytes ICMP header + 20 bytes IP header = 1500 bytes)). Examining the tcpdump capture (tcpdump -pi pppoe0 -w output.pcap) using Wireshark shows 1508 byte PPP frames containing 1500 byte IPv4 packets.
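The byte counting behind that ping invocation can be spelled out as follows (nothing here is specific to my setup, it’s just the arithmetic):

```shell
# A 1500 byte IP MTU = 20 byte IP header + 8 byte ICMP header + payload
mtu=1500
ip_header=20
icmp_header=8
echo "max ping payload: $((mtu - ip_header - icmp_header))"   # 1472

# PPPoE encapsulation adds 8 bytes, hence the 1508 byte interface MTU
pppoe_overhead=8
echo "required interface MTU: $((mtu + pppoe_overhead))"      # 1508
```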

Further testing and stability analysis will come from using it. So far so good though.

So for anyone using OpenBSD on an Alix board with the VT6105M chip to connect to a BT FTTC service: you can, fairly easily, have a full 1500 byte MTU on your connection.

Copying files via PowerShell remoting channel

There are a few ways to do this, and in PowerShell 5.0 you can even just use Copy-Item with its -ToSession parameter.

However, I came up with the following solution which is fairly reliable, and avoids any issues when transferring files larger than the session’s restriction on size of deserialised objects (by default 10MB).

Note that $localPath and $remotePath are set to what you’d expect; $Session is a PS remoting session created with, e.g., New-PSSession; and $Connection.Name is a friendly name for the target machine.

(ReportInfo and ReportError are just functions which output to either the console or to TeamCity depending on where the script is being run, this is part of our testing system…)

# Use .NET file handling for speed
$fileName = Split-Path -Leaf $localPath
$contentsizeMB = (Get-Item $localPath).Length / 1MB

ReportInfo ("Copying {0} ({1:N2} MB) from {2} to {3} on {4} ..." -f $fileName, $contentsizeMB, $localPath, $remotePath, $Connection.Name)

# Open local file
try {
    [IO.FileStream]$filestream = [IO.File]::OpenRead( $localPath )
    ReportInfo "Opened local file for reading"
} catch {
    ReportError "Could not open local file $localPath because:" $_.Exception.ToString()
    Return $false
}

# Open remote file (variables set in the session persist between Invoke-Command calls)
try {
    Invoke-Command -Session $Session -ErrorAction Stop -ScriptBlock {
        Param($remFile)
        [IO.FileStream]$filestream = [IO.File]::OpenWrite( $remFile )
    } -ArgumentList $remotePath
    ReportInfo "Opened remote file for writing"
} catch {
    ReportError "Could not open remote file $remotePath because:" $_.Exception.ToString()
    Return $false
}

# Copy file in chunks
$chunksize = 1MB
[byte[]]$contentchunk = New-Object byte[] $chunksize
$bytesread = 0
try {
    while (($bytesread = $filestream.Read( $contentchunk, 0, $chunksize )) -ne 0) {
        $percent = $filestream.Position / $filestream.Length
        ReportInfo ("Copying {0}, {1:P2} complete, sending {2} bytes" -f $fileName, $percent, $bytesread)
        Invoke-Command -Session $Session -ErrorAction Stop -ScriptBlock {
            Param($data, $bytes)
            $filestream.Write( $data, 0, $bytes )
        } -ArgumentList $contentchunk,$bytesread
    }
} catch {
    ReportError "Could not copy $fileName to $($Connection.Name) because:" $_.Exception.ToString()
    Return $false
}

# Close remote file
try {
    Invoke-Command -Session $Session -ErrorAction Stop -ScriptBlock { $filestream.Close() }
    ReportInfo "Closed remote file, copy complete"
} catch {
    ReportError "Could not close remote file $remotePath because:" $_.Exception.ToString()
    Return $false
}

# Close local file
try {
    $filestream.Close()
    ReportInfo "Closed local file, copy complete"
} catch {
    ReportError "Could not close local file $localPath because:" $_.Exception.ToString()
    Return $false
}

The chunk size is set to 1MB as it seems a good compromise given the 10MB restriction. Why not just pass through the IO.FileStream object and perform the loop remotely? Well, I’ve had issues in the past with doing that as the remote end tends to dial back to the local end in order to interact with the object rather than using the existing TCP connection. Safer to just chunk the contents over.

Counting things in WMI

So, say you’re trying to get a count of the number of SMS_Package objects in the SCCM WMI interface, perhaps matching some parameter. You can easily do this using WQL and the SELECT COUNT(*) function, e.g.:

    SELECT COUNT(*) FROM SMS_Package WHERE Name='SomePackage'
This can be executed on a WqlConnectionManager object’s QueryProcessor, via the ExecuteQuery method. The return from this is always an IResultObject (a weird kind of object which can be both one and many objects at once – it wraps up other objects and presents a standard interface so you can enumerate them without being aware of their type, somewhat like PSObject).

As detailed in MSDN, results from queries involving WQL aggregates such as COUNT come wrapped in a __Generic class. In practice this means the Count property (which contains the output of the COUNT(*)) is attached to the IResultObject’s first child.

So overall you get this:

    IResultObject packageWithName = Connection.QueryProcessor.ExecuteQuery(
        String.Format("SELECT COUNT(*) FROM SMS_Package WHERE Name='{0}'", PackageName));

    int count = 0;

    foreach (IResultObject collection in packageWithName)
    {
        count = collection["Count"].IntegerValue;
    }

    this.WriteDebug(String.Format("Count of existing packages is: {0}", count));

(This is code from inside a PSCmdlet-derived class, in case you’re wondering what this refers to.)

The utility of this, of course, is to ensure that you don’t add more than one package with the same name, since names should be unique. It’s slightly inelegant to access the child using foreach, but I haven’t worked out a better way to do it (the documentation for IResultObject isn’t particularly great).

Exim4 – specify IP addresses for outgoing SMTP connections

One of the most basic anti-spam mechanisms employed by MTAs is to check that the reverse DNS records for the IP address of an incoming connection match the forward DNS records for the domain the connection is claiming to be from. This is a fairly basic way to check if a connection is coming from a properly configured mail server or from a spam zombie. A basic step to take when setting up your own MTA is to ensure that the reverse DNS records for the IP address it’s running on are published properly.

On machines with multiple IP addresses you may want to set up Exim to listen only on particular ones, e.g. a single IPv4 and a single IPv6 address. This can be useful if your mail server is also a web server with dozens of IP addresses. The only alternative is to publish reverse DNS records for every single IP address with the name of your server (which is no good if you want to run more than one mail server, but that’s a fairly niche thing to do).

You specify which addresses to listen on in your Exim configuration using the “local_interfaces” directive (on Debian, this is set in the “update-exim4.conf.conf” file with the “dc_local_interfaces” directive).
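On Debian that looks something like this (the addresses below are documentation placeholders); run update-exim4.conf and restart Exim afterwards:

```
# /etc/exim4/update-exim4.conf.conf
dc_local_interfaces='192.0.2.25 ; 2001:db8::25'
```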

This only affects the listening addresses; the addresses used for sending outgoing mail are still picked by the system automatically. This has the undesired side effect that your MTA might choose to send mail from an IP address which doesn’t have reverse DNS set up properly, which can lead to bounced mail (or a high spam score).

To fix this it’s necessary to modify the behavior of the SMTP transport. The Debian configuration for Exim comes with one remote SMTP transport by default; an interface line can be added to its template:

### transport/30_exim4-config_remote_smtp
# This transport is used for delivering messages over SMTP connections.
remote_smtp:
  debug_print = "T: remote_smtp for $local_part@$domain"
  driver = smtp
  interface = <; 2001:470:1f09:398::1

(The <; changes the list separator from ":" to ";", which is needed when entering IPv6 addresses since they contain colons.)

Obviously it’s better not to hard code this into the template file, so a custom debconf macro can be set up to allow the details to be entered via the config file if needed.

It’s also worth noting that you can specify different interface directives for different SMTP transports, potentially on a domain-by-domain basis. This could be used in a virtual email hosting situation for multiple domains hosted on different IP addresses. This would then give the impression that each domain had its own SMTP server as set up in DNS, providing for an easier transition if you wanted to move hosting to another box or provider.
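A rough sketch of how that could look (the transport name, address, domain and router precondition here are all hypothetical – adapt them to your own Exim configuration):

```
# A second transport bound to a different source address
remote_smtp_example:
  driver = smtp
  interface = 192.0.2.30

# A router sending mail from one hosted domain via that transport
dnslookup_example:
  driver = dnslookup
  condition = ${if eq{$sender_address_domain}{example.org}}
  transport = remote_smtp_example
```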

SixOrNot linked from Mozilla blog entry

World IPv6 launch day got me a nice bump in the user count for SixOrNot, my IPv6 status indicator addon for Firefox. Looks like Mozilla linked to it from a blog entry they made which I imagine helped a lot with that!

Pushing up toward 2000 users now. It’s an incredible feeling seeing so many people using software I’ve written. Definitely time to work on some new features for it.

Facebook IPv6 live on main domain?


It appears that everyone’s favourite $100bn social network have enabled IPv6 on their main domain, jumping the gun on World IPv6 Launch Day.

This is rather good news, possibly the highest traffic site aside from Google to have done this so far.

World IPv6 launch – 6th June

I like to think of myself as an IPv6 expert; sadly it isn’t hard to be one when so few people even know it exists!

I try to raise awareness through work, and through development of software which highlights IPv6. My Firefox extension Sixornot was one of the first to do this, and has inspired several others. I’m also using some of the unique features of v6 (multicast and scope identifiers) at work, in building an innovative test management system.


I’m glad to see that (at least for me, using AAISP) Google has IPv6-enabled most of the domains Blogger uses. One of the cool features Sixornot has over its competitors is the ability to see whether each component of a remote website is being loaded over IPv6. Quite a lot of sites that claim to be IPv6-ready use CDNs or advertising networks which don’t support the new protocol!

You can find more information on IPv6 via the launch day website – it’s well worth finding out about this crucial next-generation Internet technology. You’ll be one of a select few who know what they’re talking about!

Hyper-V and Azman for delegated VM access (using PowerShell!)

There’s an excellent article about delegating Hyper-V permissions using Azman (Authorization Manager) which has recently proven invaluable for me. We’ve been using VMM for a while, but the only real use case we have is to impose a simple segregation between our “production” development systems and our test systems (to avoid testers accidentally powering off the CI server, for example).

VMM is really overkill for this, and after upgrading to VMM 2012 I found that it no longer even managed to set permissions properly. (All our users hate the VMM self-service portal and want to use the Hyper-V MMC anyway…)

One part of the process detailed in that article which I wanted to improve on was the VBS scripts used to set the Scope for VMs. The permission model relies on assigning VMs to scopes (and then assigning users to those scopes with particular permissions profiles). The latter can be done with the Azman UI (or, I am sure, via scripting of some kind via WMI). The former can only be done via scripting. Since I do most of my Hyper-V management using PowerShell I wanted a simple solution to keep it all in one place.

So I wrote a simple PowerShell module with two functions, Get-VMScope and Set-VMScope. Get-VMScope lists the scope for the given VM (either pass a string with the name or a wildcard pattern, or pipe in an object with either a “VMName” or “ElementName” property – e.g. you can pipe in the VM objects returned by the psHyperV module). Set-VMScope takes a (single) VMName/ElementName and sets the scope via the -Scope parameter.

No documentation (yet), but it’s fairly self-explanatory!

One issue I found while migrating from VMM 2012 to using this was that snapshots contain a scope property which overrides the global one whenever the machine is reverted. This can be overcome by re-snapshotting, manually editing the snapshot XML file, or running a script/task to set the scope whenever machines are reverted automatically. This problem will gradually go away as the machines get rebuilt, of course.