Monday 4 November 2013

Abstractions

After reading this interesting article: Abstracting away and Abstracting into, I was reminded of a discussion with a friend on this very topic. I believed that when implementing logging for my application I would be best off creating my own ILog interface, and then writing wrappers to encapsulate either Log4Net or NLog. My motivation was that Log4Net, for example, provides 50+ methods that I didn't need, and I wanted something for my application with just 3 methods:
LogMessage, LogWarning, LogError
I also wanted something easy to mock and unit test. The fact that this is logging is interesting because it is so ubiquitous, meaning whichever way I went could be coupled to nearly every single class.
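A minimal version of the interface I had in mind, plus a sketch of a wrapper over Log4Net, might look like this (the adapter name and exact signatures are my own illustration; Info, Warn and Error are log4net's real methods):

```csharp
using System;

// The three-method abstraction the application actually needs,
// instead of Log4Net's 50+ member surface.
public interface ILog
{
    void LogMessage(string message);
    void LogWarning(string message);
    void LogError(string message, Exception exception);
}

// A thin adapter that forwards to Log4Net (log4net.ILog is the
// library's own interface).
public class Log4NetAdapter : ILog
{
    private readonly log4net.ILog _inner;

    public Log4NetAdapter(log4net.ILog inner)
    {
        _inner = inner;
    }

    public void LogMessage(string message)
    {
        _inner.Info(message);
    }

    public void LogWarning(string message)
    {
        _inner.Warn(message);
    }

    public void LogError(string message, Exception exception)
    {
        _inner.Error(message, exception);
    }
}
```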

One way to address the problem is to keep the usage on the outskirts of the application. A good example of this is IoC containers: it doesn't really matter which library we use, as the client code will not use it directly. This might seem impossible with the logging example, but there are techniques such as logging aspects (AOP) that we won't go into here.

In hindsight I feel that I was wrong about some things but right about others. I didn't need to abstract Log4Net for testing: Log4Net already has an ILog interface, which makes it mockable and testable. And even though it has 50+ methods to mock, do you even need to bother? In a unit testing scenario your config is set up to just print to the console, so no mocking is needed. As for the second problem, swapping out libraries: do you even need to? Even if you have a project that you expect to live for 5-10+ years, Log4Net will probably outlive it. Besides, in this particular example the library is proven and well established.

Also, by creating my own interface I really thought three methods were all I would need, but in future I would end up needing more, and in time I would just duplicate the effort of the library I was abstracting. This creates the same inevitability: the more of the library I leak through my abstraction, the more I implicitly tie myself to that library anyway, and ultimately I will introduce a piece of functionality that only the one library supports, making switching impossible.

Of course, I have seen many cases where companies and projects heavily embraced and invested in library X and then, due to fading support or missing features, were forced to move to Y, requiring large portions of the code base to be rewritten. In the cases I am thinking of, even if we had created an abstraction over either X or Y, the graphics libraries differed so fundamentally in what they supported that a common abstraction wasn't possible; it would more than likely just water down our interfaces, leaving very little point in using either of them.

What matters in such cases is that we use certain development patterns. For instance, if we were using a graphics library behind the MVC or MVP pattern, then in the unlikely event of a switch we could replace our views and keep all business logic intact. The same goes for techniques like the AOP approach mentioned above.

So as long as the library provides ways to mock and unit test, and unless you are using a very new, experimental library, you are probably better off abstracting into it.

This is my honest advice for best business value. Of course, if I was writing something for myself I would certainly add layers and abstractions, because it's more fun, but that's another story...

Tuesday 21 May 2013

AutoFac Dynamic Factories

AutoFac, like many IoC containers, makes it easy to declare your dependencies and have them magically injected for you. The problem comes when you try to control the lifetime of certain objects, especially shorter-lived ones: once you declare a dependency, you no longer have any control over its lifetime. Sometimes you also specifically need two different instances of an object. AutoFac, like many other containers, offers lifetime options when registering objects:
builder.RegisterType<TestClass1>().InstancePerDependency();
builder.RegisterType<TestClass2>().InstancePerLifetimeScope();
The first indicates that each dependency or call to the Resolve() method will give you a new instance. The second indicates that each dependency or call to Resolve() within the same lifetime scope will share a single instance: the instance is shared across the same call graph, lifetime scope, or call to container.BeginLifetimeScope(), and a new one is created for each new scope. This is the registration to use when sharing instances is preferred.

So even if you change all your objects to the InstancePerDependency lifetime (which would be hugely limiting anyway), you still need a way to create these on-the-fly instances. The AutoFac solution is to resolve an object of type Owned<T>.
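Resolving Owned<T> directly looks something like this (a sketch assuming a built IContainer named container; TestClass1 stands in for any registered component):

```csharp
// Owned<T> is AutoFac's wrapper for an instance with a controlled lifetime;
// disposing it releases the instance and its whole dependency graph.
using (var owned = container.Resolve<Owned<TestClass1>>())
{
    var instance = owned.Value;
    // use instance; it is released when the using block ends
}
```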

However this means holding a reference to the IContainer in your objects, which is an IoC 101 don't. So instead, create a simple interface for your framework that you can use to inject abstract factories, something like:

public interface IFactory<T>
{
    T Create();
}

This is very simple and easy to mock for testing. You can also easily create a generic wrapper over lambda expressions to build instances on the fly, even when not using AutoFac or when using a different container. As for the AutoFac version — call it AutoFacFactory if you like — it will function something like this:

public class Factory<T> : IFactory<T>
{
    private readonly IContainer _container;

    public Factory(IContainer container)
    {
        _container = container;
    }

    public T Create()
    {
        // Resolving Owned<T> gives a fresh instance in its own lifetime scope.
        // Note that the Owned<T> wrapper is discarded here, so Dispose is never
        // called on it; this is only suitable for components that don't need
        // deterministic disposal.
        var owned = _container.Resolve<Owned<T>>();
        return owned.Value;
    }
}
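The container-agnostic lambda wrapper mentioned above could be sketched like this (DelegateFactory is an illustrative name of my own, not part of AutoFac):

```csharp
using System;

// A container-agnostic factory that wraps any creation lambda; handy in
// unit tests or when not using AutoFac at all.
public class DelegateFactory<T> : IFactory<T>
{
    private readonly Func<T> _create;

    public DelegateFactory(Func<T> create)
    {
        _create = create;
    }

    public T Create()
    {
        return _create();
    }
}

// Usage:
// IFactory<TestClass1> factory = new DelegateFactory<TestClass1>(() => new TestClass1());
```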
To get the AutoFac factory working there are a few simple configuration steps:

First register the open generic type:

var builder = new ContainerBuilder();
builder.RegisterGeneric(typeof(Factory<>)).As(typeof(IFactory<>)).InstancePerDependency();

Next, you have to register the container instance itself back into the container, as this does not happen automatically. This is easy to do, but you need a second builder to update the container:

var postContainerBuilder = new ContainerBuilder();
postContainerBuilder.Register(c => container);
postContainerBuilder.Update(container);

Now you can declare dependencies on IFactory<T> for any registered type T and create unique instances each time:

var builder = new ContainerBuilder();
builder.RegisterType<TestClass1>().InstancePerDependency();
builder.RegisterType<TestClass2>().InstancePerLifetimeScope();
builder.RegisterGeneric(typeof(Factory<>)).As(typeof(IFactory<>)).InstancePerDependency();

var container = builder.Build();

var postContainerBuilder = new ContainerBuilder();
postContainerBuilder.Register(c => container);
postContainerBuilder.Update(container);

var factory1 = container.Resolve<IFactory<TestClass1>>();
var factory2 = container.Resolve<IFactory<TestClass2>>();

var t11 = factory1.Create();
var t12 = factory1.Create();

Assert.AreNotEqual(t11, t12);

var t21 = factory2.Create();
var t22 = factory2.Create();

Assert.AreNotEqual(t21, t22);

Tuesday 22 January 2013

PowerShell Pitfalls (Non-Fatal Exceptions, Test-Path, Drives, Jobs)

So if you are trying to use PowerShell for some significant real work, you will run into many pitfalls, just like me. So much so that I could dedicate a few blog posts to them, but while trying to mount a TrueCrypt drive and check its existence afterwards, I ran into several annoying problems. In C# I would just use Directory.Exists() and be done with it, but the PowerShell way is Test-Path. You can still use the C# version, [System.IO.Directory]::Exists(), but wanting to be more PowerShelly I stuck with Test-Path and forgot about Directory.Exists(). It's the same thing anyway, right? No way! It seems Test-Path performs some other checks — I'm not sure exactly what, perhaps permissions — so it tests for both existence and accessibility. All I really wanted to test was whether the drive exists. One other quick way is to use Get-ChildItem "C:\" wrapped in a try/catch, checking for an exception of type System.Management.Automation.DriveNotFoundException.

So much like:


function DriveExists($path)
{
    try
    {
        Get-ChildItem $path | Out-Null
    }
    catch
    {
        if ($_.Exception -is [System.Management.Automation.DriveNotFoundException])
        {
            Write-Host "FAIL!"
            Write-Host $Error[0].Exception
            return $false
        }
    }
    return $true
}

But this doesn't work either, because the catch block is never executed. WTF? It turns out that in PowerShell only terminating (fatal) errors invoke the catch block. To get around this, you check $? — the success status of the last command — and then inspect the error:
function DriveExists($path)
{
    Get-ChildItem $path -ErrorAction SilentlyContinue | Out-Null
    if (-not $?) # http://www.neolisk.com/techblog/powershell-specialcharactersandtokens
    {
        if ($Error[0].Exception -is [System.Management.Automation.DriveNotFoundException])
        {
            return $false
        }
    }
    return $true
}
Now that I have this handy function, I found that for some reason a drive mounted while the script is executing cannot be seen. Again, WTF? It's as if the drive list is cached in the session and can't be refreshed. Then I thought: hey, let me just test the existence using Directory.Exists(), and what do you know, it works. To get Test-Path to work I had to run it out of process, in a new session, using a job, so something like this:
function DriveExistUsingJob($path)
{
    # Run Test-Path out of process: the job gets a fresh session that can
    # see drives mounted after this script started.
    $scriptBlock = { Write-Output (Test-Path $args[0]) }
    $job = Start-Job -ScriptBlock $scriptBlock -ArgumentList @($path)
    Wait-Job $job | Out-Null
    $jobResult = Receive-Job -Job $job
    Remove-Job $job
    return [bool]$jobResult
}
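For completeness, the Directory.Exists() alternative mentioned above is a one-liner from PowerShell (the function name here is just illustrative):

```powershell
# Call the .NET method directly: a plain existence check with none of
# Test-Path's extra accessibility probing or stale session drive state.
function DriveExistsViaDotNet($path)
{
    return [System.IO.Directory]::Exists($path)
}
```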
So why did I use PowerShell's Test-Path function in the first place? I don't know. I think I'll stick to the C# functions where I can; at least I know what those do. Remember: all I really wanted to do was say Directory.Exists(), but no... It really makes me feel like Windows Powers Hell, or at least mine.