Using Microsoft Fakes to test legacy code
Recently I needed to do some maintenance work on old code that had not been written with unit testing in mind and had not been structured to enable it; for example, there was no inversion of control.
Ideally I would have restructured the code to isolate it from external dependencies, but this would have required a large QA effort to regression-test the restructured code, and there were no existing unit tests to help. Another option was to just add more code without any tests. I was unhappy writing untested code, but there were not enough resources available to restructure it, so I took the opportunity to explore using Microsoft Fakes to isolate and test legacy code.
The structure of the code was like this:
```csharp
public SoapServiceClient DataService
{
    get
    {
        if (_dataService == null)
        {
            _dataService = new SoapServiceClient();
        }
        return _dataService;
    }
}

public DataXml GetDataXml(DataRequest request)
{
    DataXml data = new DataXml();
    try
    {
        data = DataService.GetDataFromRequest(request);
        BusinessLogic(data);
    }
    catch (WebException ex)
    {
        data.SetCallStatus(ex);
        Logger.LogError(ex);
    }
    return data;
}
```
Here DataService is a SOAP service client that makes a network call and returns XML; the call to GetDataFromRequest sends an HTTP request to the service and gets the response back as XML. The objective is to write tests that exercise the behaviour in the BusinessLogic() method and the exception-handling path. This code has been simplified from the real production code, and restructuring it to use dependency injection of the DataService property would not be straightforward.
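To give a sense of why that is, here is a minimal sketch of the kind of restructuring that was ruled out. The IDataService interface is hypothetical, not from the real code:

```csharp
// Hypothetical sketch of the dependency-injection restructuring that was
// ruled out: extract an interface and inject it, so tests can pass a fake.
public interface IDataService
{
    DataXml GetDataFromRequest(DataRequest request);
}

public class ServiceProxy
{
    private readonly IDataService _dataService;

    // Every caller that news up ServiceProxy would need updating to supply
    // the dependency - the regression risk that made this too expensive.
    public ServiceProxy(IDataService dataService)
    {
        _dataService = dataService;
    }
}
```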
We can test this code using Microsoft Fakes without making any changes to it. The tests were written in a separate assembly, which references Microsoft.QualityTools.Testing.Fakes.
It's worth noting that Fakes are only supported in the VS 2012/2013 Ultimate and Premium editions; they are not available in Professional or Express. There is a UserVoice page asking for this to be extended, so you might want to vote the feature up. To generate the fakes, we right-click the assembly we want to fake, the one that contains the code above, and select Add Fakes Assembly. This only needs to be done once while writing the tests; the generated assembly does not need to be regenerated every time we want to run them.
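With the fakes assembly generated we can write the test. For reference, these are the usings the test file below relies on (assuming NUnit and FluentAssertions, the combination used in this post):

```csharp
using System;                                // DateTime, etc.
using System.Net;                            // WebException
using Microsoft.QualityTools.Testing.Fakes;  // ShimsContext
using NUnit.Framework;                       // [Test]
using FluentAssertions;                      // Should()
```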
```csharp
[Test]
public void Will_return_an_error_in_xml_when_an_error_is_thrown_in_the_dataservice()
{
    // screwy microsoft magic
    using (ShimsContext.Create())
    {
        // arrange
        _request = new TestRequest();
        _exceptionToThrow = new WebException("TEST", WebExceptionStatus.Timeout);
        ShimDataService();

        // act
        DataXml xml = _serviceProxy.GetDataXml(_request);

        // assert
        xml.Should().NotBeNull("because the error should be handled");
        xml.CallStatus.Should().Be(_c_error_with_service);
    }
}

private void ShimDataService()
{
    // stop the real proxy from being called - SoapServiceClient is in the
    // global namespace, hence Global.Fakes
    Global.Fakes.ShimSoapServiceClient.AllInstances.GetDataFromRequestDataRequest =
        (SoapServiceClient dataService, DataRequest request) =>
        {
            if (_exceptionToThrow != null)
            {
                throw _exceptionToThrow;
            }
            return _dataToReturn;
        };
}
```
The call to ShimsContext.Create invokes the Microsoft magic that makes the whole fakes mechanism work. All test code that relies on shims must be placed inside the using block for the magic to apply.
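To see that scoping in isolation, here is the canonical ShimDateTime example, which assumes a fakes assembly has also been generated for System.dll; outside the using block the shim no longer applies:

```csharp
using (ShimsContext.Create())
{
    // Every call to DateTime.Now inside this block returns the fixed date.
    System.Fakes.ShimDateTime.NowGet = () => new DateTime(2014, 1, 1);
    DateTime frozen = DateTime.Now;  // 01/01/2014
}
// Out here DateTime.Now is back to the real clock.
```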
The ShimDataService method then intercepts the call to the real DataService, so no HTTP requests are made; instead we either throw an exception or return specimen data. This lets us exercise the expected behaviours without any external dependencies. The shim property we set is named GetDataFromRequestDataRequest, which is the name of the method with the types of its parameters suffixed, so that overloads of the same name can be distinguished.
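The same shim also covers the success path via _dataToReturn, so a business-logic test is a small variation on the one above. This is just a sketch: the test name and final assertion are placeholders, since the real assertions depend on what BusinessLogic() does:

```csharp
[Test]
public void Will_apply_business_logic_when_the_dataservice_returns_data()
{
    using (ShimsContext.Create())
    {
        // arrange - no exception this time, just specimen data
        _request = new TestRequest();
        _exceptionToThrow = null;
        _dataToReturn = new DataXml();  // populate with specimen values as needed
        ShimDataService();

        // act
        DataXml xml = _serviceProxy.GetDataXml(_request);

        // assert - placeholder; assert whatever BusinessLogic is expected to do
        xml.Should().NotBeNull();
    }
}
```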
Building the test assembly can be slow when fakes are generated for every class in the faked assembly. To overcome this I configured the tests to generate fakes only for the classes we are interested in. We do this by editing the configuration file in the Fakes folder of the test assembly. The file is called <DLLNAME>.fakes, where DLLNAME is the name of the assembly we added the fakes for. Mine looked a bit like this:
```xml
<Fakes xmlns="http://schemas.microsoft.com/fakes/2011/" Diagnostic="true">
  <Assembly Name="MyProductionCode"/>
  <!-- only generate the fakes we need -->
  <StubGeneration>
    <Clear/>
  </StubGeneration>
  <ShimGeneration>
    <Clear/>
    <Add FullName="SoapServiceClient"/>
    <Add FullName="ServiceProxy"/>
    <Remove TypeName="IExceptionResponse"/>
  </ShimGeneration>
</Fakes>
```
Within ShimGeneration we use Clear to remove all the classes the fakes generator has reflected from the assembly, then Add back the ones we want to generate fakes for. We also need to Remove any types that cannot be generated. Setting Diagnostic="true" helps spot any generation errors.
In summary, Fakes can be great for testing legacy production code without refactoring. The main advantages are:
- The production code does not need to be changed just so it can be tested (though we should still consider restructuring the code to make it easier to test, that can be done when we choose to)
- Any funky code is isolated in the test assemblies
- We can test business logic piece by piece until we have more confidence in our code coverage. This will make any future refactoring or upgrading of the environment much less painful
- We can write tests.
There are some disadvantages that we need to bear in mind:
- These tests must be run with the Microsoft test runner; we cannot use the NUnit/ReSharper runners, though Fakes are now supported by NCrunch. To be clear, we can still use the NUnit/FluentAssertions testing frameworks, but the tests must be run with the MS runner via the NUnit adapter.
- It's only available in the VS 2012/2013 Ultimate and Premium editions.
- It could, in the wrong hands, encourage people to write tightly coupled code on the basis that it can be shimmed later. This technique should only be used with existing legacy code that would be too expensive, in regression-testing terms, to restructure; it's not a device for writing new legacy code.