Design for Testability

 

How can we recognize “bad design”? One could argue that a design qualifies as “bad” when it can’t be changed easily: when one small change forces a lot of other changes, or when the program breaks in many places as soon as a single change is introduced. Because the parts are highly dependent on each other, no effort will be invested in separating the modules that could be reused, so a badly designed application also tends not to be reused. The root of these symptoms is the interdependencies between the building blocks of the application. The art of good design is to break these dependencies.

 

An interesting fact is that interdependent code is also difficult to test. That’s why our unit tests are the most effective way to evaluate our design. Well-designed applications made of loosely coupled parts are easy to unit test. The opposite is also true: code that is easy to unit test is generally well designed. Unit testing is not only about detecting existing defects; it is an act of design. Unit tests help us define and evaluate our design. A piece of code that is difficult to test is a smell of bad design. When our code is difficult to test we should not try to write a test for it anyway; we should first change it so it can be tested easily.

“If the answer is not obvious, or it looks like the test would be ugly or hard to write, then take that as a warning signal. Your design probably needs to be modified; change things around until the code is easy to test, and your design will end up being far better for the effort.” [Hunt, Thomas. Pragmatic Unit Testing in Java with JUnit]

In this series I describe what makes our code hard to test: I present several typical design anti-patterns that make code hard to test and explain how they can be fixed. I also provide some patterns that can be applied in our SUT to facilitate testing and enforce loose coupling.

 

Anti-patterns

 

1) The new keyword is used to construct anything you could replace with a test-double.

The most common cause of hard-to-test code is violating the single responsibility principle. Look at the example below: the SUT is responsible for constructing its own collaborators. When a class has to instantiate and initialize its collaborators, the result tends to be an inflexible and prematurely coupled design. Such classes shut off the ability to inject test collaborators when testing. Do not create collaborators in your constructor or methods; pass them in. (Don’t look for things! Ask for things!)

public class Invoice
{
    private int _balance;

    public Invoice(int clientID)
    {
        // The SUT constructs its own collaborator: it can never be replaced by a test double.
        DataLayer _db = new DataLayer();

        MeteringValues[] dailyValues = _db.GetMeteringValues(clientID);
        int offPeakPrice = _db.GetOffPeakPrice();
        int peakPrice = _db.GetPeakPrice();
        int peakConsumption = CalculatePeakConsumption(dailyValues);
        int offPeakConsumption = CalculateOffPeakConsumption(dailyValues);
        int advances = _db.GetAdvances(clientID);

        _balance = (peakConsumption * peakPrice +
                    offPeakConsumption * offPeakPrice) -
                    advances;
    }

    public int Balance
    {
        get { return _balance; }
    }

    private int CalculateOffPeakConsumption(MeteringValues[] values)
    {
        // calculation details omitted in the original post
    }

    private int CalculatePeakConsumption(MeteringValues[] values)
    {
        // calculation details omitted in the original post
    }
}

 

In the example above, we can never replace the _db instance with a test double. It’s true that the Invoice is easy to instantiate, but this comes at the cost of flexibility. Because the DataLayer represents something expensive to access, the class is also not very testable. To be able to inject a stubbed DataLayer into the Invoice, we add a DataLayer parameter to the constructor:

public class Invoice
{
    private int _balance;

    public Invoice(int clientID, DataLayer db)
    {
        MeteringValues[] dailyValues = db.GetMeteringValues(clientID);
        int offPeakPrice = db.GetOffPeakPrice();
        int peakPrice = db.GetPeakPrice();
        int peakConsumption = CalculatePeakConsumption(dailyValues);
        int offPeakConsumption = CalculateOffPeakConsumption(dailyValues);
        int advances = db.GetAdvances(clientID);

        _balance = CalculateBalance(
                        peakConsumption,
                        peakPrice,
                        offPeakConsumption,
                        offPeakPrice,
                        advances
                   );
    }

    protected int CalculateBalance(int peakConsumption, int peakPrice, int offPeakConsumption, int offPeakPrice, int advances)
    {
        return (peakConsumption * peakPrice +
                offPeakConsumption * offPeakPrice) -
                advances;
    }

    protected int CalculateOffPeakConsumption(MeteringValues[] values)
    {
        // calculation details omitted in the original post
    }

    protected int CalculatePeakConsumption(MeteringValues[] values)
    {
        // calculation details omitted in the original post
    }

    public int Balance
    {
        get { return _balance; }
    }
}

 

We now have a SUT that is much more testable because we are able to inject test doubles into the Invoice. Such a test double can simply be a subtype of DataLayer that returns hard-coded default values. A more advanced technique is to use a mocking framework to generate a stub/mock from the DataLayer class:

 

 

public void Construct_WithSampleValues_BalanceEqualsSampleBalance()
{
    //Arrange
    var dalStub = MockRepository.GenerateStub<DataLayer>();
    dalStub.Stub(m => m.GetMeteringValues(SampleClientID)).Return(SampleMeteringValues);
    dalStub.Stub(m => m.GetOffPeakPrice()).Return(SampleOffPeakPrice);
    dalStub.Stub(m => m.GetPeakPrice()).Return(SamplePeakPrice);
    dalStub.Stub(m => m.GetAdvances(SampleClientID)).Return(SampleAdvances);

    //Act
    var subject = new Invoice(SampleClientID, dalStub);

    //Assert
    Assert.AreEqual(SampleBalance, subject.Balance);
}
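
If you prefer not to use a mocking framework, a hand-written stub does the job just as well. The following is only a sketch: it assumes the DataLayer members are declared virtual so they can be overridden (the same requirement applies when letting Rhino Mocks stub a concrete class), and the returned values are arbitrary examples:

// Hand-rolled test double: a DataLayer subtype that returns hard-coded values.
// Assumes the DataLayer methods are declared virtual.
public class StubDataLayer : DataLayer
{
    public override MeteringValues[] GetMeteringValues(int clientID) { return new MeteringValues[0]; }
    public override int GetOffPeakPrice() { return 10; }
    public override int GetPeakPrice() { return 20; }
    public override int GetAdvances(int clientID) { return 0; }
}

// Usage in a test:
// var subject = new Invoice(1, new StubDataLayer());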

 

WCF4 error message: server did not provide a meaningful reply

When setting up a WCF service with .NET 4, I encountered the following error when passing large object graphs:
“The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error.”

Obviously this error was due to the very restrictive default quotas defined by WCF, so I thought I just had to increase the values in the config file to solve my problem. I immediately went to the app.config file, but I was rather surprised to discover that the configuration file was empty.

When inspecting the new features of WCF4 I discovered that Microsoft has put effort into making the overall WCF experience just as easy as ASMX (that is at least what they claim). Therefore WCF4 comes with a new “default configuration” model. In my opinion this default configuration scheme only obfuscates the inherent complexity of WCF, and the result is just more confusion.

Of course your config file is now empty, but this does not simplify its use, because the standard binding and behavior quotas are still set to minimal values. As soon as you try to do some real work with WCF you will get the error message described above. This error message usually means that you have to increase some of the default configuration values.

Here is an article describing the new configuration model of WCF4: http://msdn.microsoft.com/en-us/library/ee354381.aspx
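
If you build the client in code rather than from configuration, the same quotas can also be raised programmatically. The sketch below is only an illustration: it assumes a BasicHttpBinding client created through a ChannelFactory for the IGroupService contract used in the configuration below, and the values mirror the permissive ones from that config.

// Illustrative sketch: raising the restrictive WCF defaults in code instead of app.config.
var binding = new System.ServiceModel.BasicHttpBinding();
binding.MaxReceivedMessageSize = 655360000;
binding.MaxBufferSize = 655360000;
binding.ReaderQuotas.MaxArrayLength = 1000000;
binding.ReceiveTimeout = System.TimeSpan.FromMinutes(30);

var factory = new System.ServiceModel.ChannelFactory<IGroupService>(
    binding,
    new System.ServiceModel.EndpointAddress("http://localhost:1763/GroupService.svc"));

// maxItemsInObjectGraph is a serializer behavior, so it is set per operation.
foreach (System.ServiceModel.Description.OperationDescription operation in factory.Endpoint.Contract.Operations)
{
    var serializerBehavior = operation.Behaviors
        .Find<System.ServiceModel.Description.DataContractSerializerOperationBehavior>();
    if (serializerBehavior != null)
        serializerBehavior.MaxItemsInObjectGraph = 214748364;
}

IGroupService client = factory.CreateChannel();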

This is the configuration (using very permissive values) I use when developing with WCF (to be adapted when you go to production):

Client:

  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="BasicHttpBinding_IGroupService" closeTimeout="00:01:00"
            openTimeout="00:01:00" receiveTimeout="00:30:00" sendTimeout="00:01:00"
            allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
            maxBufferSize="655360000"
                 maxReceivedMessageSize="655360000"
            maxBufferPoolSize="524288"
            messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
            useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
              maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="None">
            <transport clientCredentialType="None" proxyCredentialType="None"
                realm="" />
            <message clientCredentialType="UserName" algorithmSuite="Default" />
          </security>
        </binding>
      </basicHttpBinding>
    </bindings>
    <behaviors>
      <endpointBehaviors>
        <behavior name="largeObjectGraphBehavior">
          <dataContractSerializer maxItemsInObjectGraph="214748364" />
        </behavior>
      </endpointBehaviors>
    </behaviors>
    <client>
      <endpoint address="http://localhost:1763/GroupService.svc" binding="basicHttpBinding"
          bindingConfiguration="BasicHttpBinding_IGroupService" contract="GroupService.IGroupService"
          name="BasicHttpBinding_IGroupService" behaviorConfiguration="largeObjectGraphBehavior" />
    </client>
  </system.serviceModel>

Server:

<system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding maxBufferSize="655360000" maxReceivedMessageSize="655360000" >
          <readerQuotas maxArrayLength="1000000" />
        </binding>
      </basicHttpBinding>
<!-- notice there’s no name attribute -->
    </bindings>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
          <serviceMetadata httpGetEnabled="true" />
          <!-- To receive exception details in faults for debugging purposes, set the value below to true.  Set to false before deployment to avoid disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="true" />
          <dataContractSerializer
                 maxItemsInObjectGraph="1000000" />

        </behavior>
      </serviceBehaviors>
    </behaviors>
    <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
  </system.serviceModel>

Why should we test?

I plan to write some articles about how to write good unit tests, but before presenting my guidelines it’s important to first understand why we want to unit test and what drives these guidelines. Because my objective is to encourage developers to practice unit testing, I chose to address the main objections I encounter today in the field.

 

Unit testing is not productive!

If we take a closer look at how productivity is optimized in a classical manufacturing process, we can see why well-written, well-maintained unit tests have exactly the opposite effect: they increase productivity.

The productivity of a factory is measured by the speed at which products flow out of the production line and by the effectiveness of that line. Contrary to what one may think, the speed at which products flow out of the factory is not the average speed of each part of the production line; it depends mostly on the number of things in process in the line. So if you want to increase the overall production capacity of a factory, you have to synchronize every part of the production line and make sure that each part works at a constant pace that is sustainable for itself and its neighbors. The worst thing that can happen in a production line is that a defect is introduced in one part of the line and passed on to the next. The defect will not only stop the part of the line where it is detected; the defective part also has to be sent back to the part of the line where the defect was made. This desynchronizes the whole production line and diminishes the overall productivity of the factory.

When we produce software the same is true: if a bug is detected by the testing team, or worse in production, it generates a lot of waste. The bug needs to be described precisely; the developers need to switch from their ongoing task to the bug resolution; a lot of time is lost understanding the problem; the fix needs to be tested; finally a patch needs to be deployed to production, potentially causing a service interruption. There is also a considerable risk of repeating this whole process if the defect was not corrected adequately or if a new defect is introduced by the fix. An important side effect of unit tests is that they reduce the risk of a project. Each unit test is an assurance that the system works. Having a bug in the code means carrying a risk. With a set of unit tests, engineers can dramatically reduce the number of bugs and the risk that comes with untested code.

Unit tests also decrease the maintenance cost because they provide living documentation. This is called "test as documentation": unit tests provide a sort of living documentation of the system. Developers who want to learn what functionality a unit provides, and how to use it, can look at its unit tests to gain a basic understanding. Unit tests embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate and inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A unit test case documents these critical characteristics.
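
To illustrate the "test as documentation" idea with a small, invented example (the PriceCalculator class below is hypothetical, not part of the code shown earlier): the test name alone tells a reader what the unit does in a given scenario, and the body shows how the unit is meant to be used.

// Hypothetical example: the test documents both the expected behavior
// and the intended usage of the (invented) PriceCalculator unit.
[Test]
public void CalculateTotal_WithNoConsumption_ReturnsZero()
{
    var calculator = new PriceCalculator(peakPrice: 20, offPeakPrice: 10);

    int total = calculator.CalculateTotal(peakConsumption: 0, offPeakConsumption: 0);

    Assert.AreEqual(0, total);
}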

So it’s true that unit tests roughly double the initial cost of the implementation phase, because writing them tends to take about as much time as writing the production code. But this cost is more than regained because the other steps of the production process are shortened. The number of defects detected by the QA team drops drastically, and a lot of time is saved because the QA team can work faster. Even the overall throughput of the development team increases in the end, because time is no longer lost correcting a stream of defects found by QA. The project manager becomes far better at estimating the project status. In the end the trust of the business increases, because they get features built at a constant pace and because the overall delivered quality improves.

 

Unit testing does not catch all bugs!

Unit testing and other forms of automated testing serve the same purpose as the automated testing devices in manufacturing. Unit tests enable us to rapidly detect a defect when code is changed or added. These automated tests are not made to detect malfunctions in production but to prevent defects from entering our assembly line. The real value of unit testing and TDD is not that they can detect defects but that they prevent defects from happening! Unit testing will not only improve the quality perceived by the business, because fewer defects slip through; it will also improve the internal quality attributes of the code itself, because the developer will tend to refactor a lot more and will design the code to be more loosely coupled (see Design for testability).

Nevertheless, unit testing alone is not sufficient; testing should happen at all levels, but unit tests decrease the amount of other kinds of testing that is needed. Because unit testing helps eliminate uncertainty in the units themselves, it enables a bottom-up testing approach: by testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier.

 

Testing is for the testers!


When software is developed using a test-driven approach, the unit tests may take the place of formal design. Each unit test can be seen as a design element specifying classes, methods, and observable behavior. By writing your tests you are performing an act of design, and all professional developers should aim for good design.

 

Unit testing is a waste of time because the tests tend to break and we constantly have to fix them!

Unit tests allow the programmer to refactor code at a later date and make sure the module still works correctly. Unit tests enable refactoring because they provide a safety net. The procedure is to write test cases for all methods, so that whenever a change causes a fault it can be quickly identified and fixed. Readily available unit tests make it easy for the programmer to check whether a piece of code is still working properly. They enable us to constantly improve our SUT by adhering to the DRY principle. This principle does not only apply to our SUT but also to our test code. By constantly keeping our tests DRY, eliminating duplication, we improve the maintainability of our tests and make sure they stay efficient. When you spend too much time fixing tests, you should review your test design. The following articles provide some guidelines that will help you increase the maintainability of your tests.
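
As a small illustration of keeping test code DRY, reusing the Rhino Mocks Invoice test from the first article as a starting point: the repeated Arrange code is moved into a single creation method, so when the Invoice constructor changes only one place has to be fixed.

// Illustrative sketch: the duplicated stub setup lives in one helper method.
private Invoice CreateInvoiceWithStubbedDataLayer()
{
    var dalStub = MockRepository.GenerateStub<DataLayer>();
    dalStub.Stub(m => m.GetMeteringValues(SampleClientID)).Return(SampleMeteringValues);
    dalStub.Stub(m => m.GetOffPeakPrice()).Return(SampleOffPeakPrice);
    dalStub.Stub(m => m.GetPeakPrice()).Return(SamplePeakPrice);
    dalStub.Stub(m => m.GetAdvances(SampleClientID)).Return(SampleAdvances);

    return new Invoice(SampleClientID, dalStub);
}

[Test]
public void Construct_WithSampleValues_BalanceEqualsSampleBalance()
{
    var subject = CreateInvoiceWithStubbedDataLayer();

    Assert.AreEqual(SampleBalance, subject.Balance);
}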

 

This code is too difficult to test! 

Because unit tests force us to exercise our code in another context than the one in which it will run in production, they force us to design our code so that it is more loosely coupled. Loose coupling tends to improve reusability and robustness, and reusability and robustness are certainly desirable goals. So unit testing is a way to assert that our code is robust and reusable. If your code is hard to test, it is usually because there is something wrong with your design; you should not try to test a bad design, you should fix it!

 


 

 

 

 

How to distinguish an integration test from a unit test?

 

 

Although unit and integration tests serve different purposes, we have a tendency to confuse the two types of testing. In fact, most of the tests we write tend to be integration tests. In my opinion the main reason we confuse them is that we use the same test automation framework (e.g. NUnit) to write unit tests and integration tests. Nevertheless, we should always separate our unit tests from our integration tests, because integration tests tend to be more fragile, slower and require more maintenance than unit tests.

As described on Wikipedia, integration tests verify the integration between modules, while unit tests target atomic (indivisible) units/modules. In my opinion the notion of module is not enough to separate unit tests from integration tests, because a module is a subjective concept; it can be applied to many things: a class, a layer, a component... Therefore I prefer to make the distinction based on whether or not the test depends on some infrastructure. When our test depends on some infrastructure, we can no longer pretend that it exercises a single module. So as soon as a test depends on some sort of infrastructure like a database, a file, a web service, a COM component… it depends on at least two units (our code and the infrastructure) and it should be qualified as an integration test.
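
One pragmatic way to keep both kinds of tests separated, even when they live in the same NUnit project, is to tag the infrastructure-dependent ones with a category so they can be run or excluded as a group. The sketch below reuses the LoginModel class from the ASP.NET MVC example further down this page; the credentials are invented.

// Illustrative sketch: this test exercises the real database through LoginModel,
// so it is an integration test and is tagged accordingly.
[Test]
[Category("Integration")]
public void Authenticate_WithKnownProfile_ReturnsTrue()
{
    var model = new LoginModel();

    Assert.IsTrue(model.Authenticate("someone@example.com", "secret"));
}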

 

 

Keyboard support for the Menu control in Silverlight 4

 

Today I tried to validate an architecture by creating a POC. The architecture is N-tier, using Silverlight for the client. The customer has one particular requirement that, I thought, was reasonable: all actions, including menu navigation, have to be available through the keyboard.

When I tried Silverlight 4 I was surprised not to find any menu control, so I downloaded several open source and commercial menu controls. I was very disappointed: after having searched for a couple of hours I didn’t manage to find any control providing decent keyboard support. Most controls provide some basic support, but not one of them allows the first item to gain focus through the keyboard. You are able to use the keyboard (arrow keys), but you first need to select the control with the mouse! Not one control provided support for keyboard shortcuts.

This is my shortlist of open source Silverlight controls:

Codeproject free Menu : http://www.jebishop.com/2009/11/18/implementing-a-contextmenu-in-silverlight-4-beta/

Codeplex free Menu : http://www.codeproject.com/KB/silverlight/SL2DropDownMenu.aspx

 

Cannot start Microsoft Office Outlook. Cannot open the Outlook window

It’s now the third time I’ve experienced the same problem: for some strange reason Outlook refuses to start and I get the following error message: “Cannot start Microsoft Office Outlook. Cannot open the Outlook window”. Because I don’t want to google for it anymore, I decided to put the resolution on my blog so that I can find it later.

The solution is simple; run:  Outlook.exe /resetnavpane

 

 

 

 

HTTP caching divergence between IE & Firefox

I came across an interesting bug caused by the way IE and Firefox diverge in how they implement HTTP caching. Pictures displayed on a web page were supposed to be refreshed every 10 seconds by a JavaScript timer. The problem was that the images didn't refresh in Firefox.

I used Fiddler and Firebug to analyze the HTTP traffic and rapidly came to the conclusion that this bug was related to the HTTP caching mechanism. To solve it I had to understand how HTTP caching works. The HTTP 1.1 spec describes how the caching mechanism of a web server should be implemented; this is a simplified version:
A) The client sends an HTTP request to a server.
B) Based on several factors (see below), the server decides whether it will serve the specific resource with a response code 200 or return a 304. A 304 response means that the client should use its cache to serve the resource.

A web server returns a 304 when all the following conditions are true:
1. The resource on the server is configured to be cached (for most web servers this means that caching is not disabled for the specific resource)
2. The last modified date of the resource is earlier than the request date
3. The request does not contain any header that disables the cache (see the RFC)
4. The resource is fresh enough (see the RFC)

In my situation the web server returned a 200 when using IE and a 304 when using Firefox. Conditions 1, 2 and 4 were true for both browsers, but condition 3 was false only when using IE. The page displaying the images contained a meta tag: <META HTTP-EQUIV="Pragma" CONTENT="no-cache">. This tag instructs IE to add the header pragma=no-cache when it requests the image, and this HTTP header instructs the web server that it has to return a 200 (see condition 3). The problem is that Firefox doesn't honor this tag.

I solved the bug by tackling the root cause of the problem: I disabled the cache on the web server for this particular resource.
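
In ASP.NET, if the image is served by your own handler rather than as a static file, disabling the cache for that response could look like the sketch below (illustrative only; the handler name and image path are invented):

// Hypothetical ASP.NET handler serving the image with caching disabled,
// so every request gets a 200 with fresh content instead of a 304.
public class RefreshingImageHandler : System.Web.IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(System.Web.HttpContext context)
    {
        context.Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache);
        context.Response.Cache.SetNoStore();
        context.Response.ContentType = "image/jpeg";
        context.Response.WriteFile(context.Server.MapPath("~/images/myimage.jpg"));
    }
}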


If this is not feasible in your situation, other possibilities are:
- Disable the cache via the request by adding the proper meta tags; for Firefox a valid tag is: <meta content="-1" http-equiv="max-age" >.
- Make sure that the last modified date is updated correctly.
- Append a random value as a query string after the jpg extension, e.g.: myimage.jpg?201003321023

 


ASP.NET MVC with Webforms

source code can be found here 

It’s now generally admitted in the community that unit testing and TDD (Test Driven Development) are valuable techniques when it comes to increasing the overall quality of our code. Nevertheless, unit testing can be costly, especially when you have applications with a lot of logic implemented in the UI. Therefore, if we want to make our application testable, we need to separate the UI from the rest of the application.

Martin Fowler describes on his site some patterns that separate the UI logic and reduce it to a bare minimum. They are all variants of the classical MVC (Model View Controller) pattern. MVC splits the application in three parts: the view handles the display, the controller responds to user gestures, and the model contains the domain logic. MVC is the foundation of very popular web frameworks like Ruby on Rails.

To build web sites applying the MVC pattern with .NET, developers can choose among several MVC frameworks like MonoRail or the new ASP.NET MVC. In any case, MVC frameworks like ASP.NET MVC are based on a completely different paradigm than the ASP.NET Webforms framework. This means that you have to re-learn how to program web apps from scratch. Another setback is that there is no way to refactor your old ASP.NET applications so that they fit into the MVC framework. I want to make myself clear: I believe that frameworks like MonoRail or the coming System.Web.Mvc are the future way of programming web apps in .NET, but it demands a considerable amount of effort to learn new frameworks. It’s difficult for someone like me, who has invested many years in mastering the classical ASP.NET code-behind model, to re-learn everything from scratch. In the meantime, this should not be an excuse not to make my code more testable.

In this post I will show through a simple example how to use the Model View Controller pattern on top of the code-behind model. We will create a login form with the MVC pattern.

Setup your solution

Create a new solution “Zoo” with 3 projects:

  • ZooWebsite -> ASP.NET web application
  • ZooLibrary -> Class library
  • ZooTest -> Class library
  • Create a reference from ZooWebsite to ZooLibrary

    (ZooWebsite, add reference, project tab, select ZooLibrary)

  • On ZooLibrary add a reference to System.Web


 

The View

To make our code testable it’s very important to be able to decouple the UI from the ASP.NET code-behind. Therefore we will create an interface that our ASP.NET page should implement. This view interface represents the contract the UI has to conform to. When we test our controller we will not do it with the actual web page but through a mock object that implements the view interface.

Add an interface ILoginView to the ZooLibrary project:

namespace ZooApplication.Library
{
    public interface ILoginView
    {
        string ErrorMessage { get; set; }
        string EmailAddress { get; set; }
        string Password { get; set; }
        void RedirectFromLoginPage();
        System.Web.UI.WebControls.Button BtnLogin { get; set; }
    }
}
  • Edit the Default.aspx page and enter: Welcome, you are authenticated!
  • Add a Login.aspx page to the ZooWebsite project.
  • Edit the source of Login.aspx: add two textboxes, a button, and validators:
<%@ Page Language="C#" AutoEventWireup="true" Codebehind="Login.aspx.cs" Inherits="ZooApplication.Website.Login" %>
 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Login page</title>
</head>
<body>
    <form id="form1" runat="server">
        <div>
                Login form<br />
                <asp:Label ID="LblErrorMsg" runat="server" Text="Invalid login" Visible="false" ></asp:Label><br />
                Email Address:
                <asp:TextBox ID="TxbEmailAddress" runat="server"></asp:TextBox>
                <asp:RequiredFieldValidator ID="RfvEmailAddress" runat="server" ErrorMessage="Enter your email address!"
                    ControlToValidate="TxbEmailAddress">
                </asp:RequiredFieldValidator>
                <asp:RegularExpressionValidator ID="RevEmailAddress" runat="server" ControlToValidate="TxbEmailAddress"
                    ErrorMessage="Invalid email address!" ValidationExpression="\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*">
                </asp:RegularExpressionValidator></div>
            <div>
                Password:
                <asp:TextBox ID="TxbPassword" runat="server"></asp:TextBox>
                <asp:RequiredFieldValidator ID="RfvPassword" runat="server" ErrorMessage="Enter your password!"
                    ControlToValidate="TxbPassword">
                </asp:RequiredFieldValidator>
            </div>
            <div>
                <asp:Button ID="PageBtnLogin" runat="server" Text="Login" />
            </div>
    </form>
</body>
</html>

Because the code-behind is not testable and is not part of the SUT (Subject Under Test), we want to reduce the code-behind logic to a bare minimum. The view’s responsibility is limited to outputting the data coming from our model in a human readable way and to exposing user input to the controller.

Therefore we implement our interface through an aspx page that only contains a set of properties binding the data coming from the controller to our web controls.

Generally we will try to implement all the presentation logic in the controller. The only exception here is the RedirectFromLoginPage() method.

using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using ZooApplication.Library;

namespace ZooApplication.Website
{
    public partial class Login : System.Web.UI.Page, ZooApplication.Library.ILoginView
    {
        public string EmailAddress
        {
            get { return TxbEmailAddress.Text; }
            set { TxbEmailAddress.Text = value; }
        }

        public string ErrorMessage
        {
            get { return LblErrorMsg.Text; }
            set { LblErrorMsg.Text = value; }
        }

        public string Password
        {
            get { return TxbPassword.Text; }
            set { TxbPassword.Text = value; }
        }

        public Button BtnLogin
        {
            get { return PageBtnLogin; }
            set { PageBtnLogin = value; }
        }

        public void RedirectFromLoginPage()
        {
            FormsAuthentication.RedirectFromLoginPage(this.EmailAddress, false);
        }
    }
}

 

The model

Our model is responsible for validating the user’s login and password against the DB.

We create a DB named ZooDB:

  • Add an APP_Data folder to your ZooWebsite project
  • APP_Data, new item, Database

Execute this script to create the Profiles table in the ZooDB database:

CREATE TABLE [dbo].[Profiles](
      [ProfileID] [int] IDENTITY(1,1) NOT NULL,
      [EmailAddress] [nvarchar](255) NOT NULL,
      [Password] [nvarchar](255) NOT NULL,
 CONSTRAINT [PK_Profiles] PRIMARY KEY CLUSTERED
(
      [ProfileID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

Now configure your web.config file to add the connection string and the authentication section:

<?xml version="1.0"?>
<configuration>
  <connectionStrings>
    <add name="ZooDB" connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=.\App_Data\ZooDB.mdf;Integrated Security=True"/>
  </connectionStrings>
    <system.web>
      <authentication mode="Forms">
        <forms name="AuthCookie" path="/" loginUrl="login.aspx" protection="All" timeout="10">
        </forms>
      </authentication>
      <authorization>
        <deny users="?"/>
      </authorization>
    </system.web>
</configuration>


Test the application; it should compile and we should be redirected to the login page.


Our model implements an Authenticate method. Again we make use of an interface to decouple the model from the controller.

On the ZooLibrary project create an interface ILoginModel:

namespace ZooApplication.Library
{
    public interface ILoginModel
    {
        bool Authenticate(string emailAddress, string password);
    }
}

 

The Authenticate method of the LoginModel class checks the validity of the supplied email address and password. You can implement the model with your preferred data access code. Personally I use SubSonic because it’s really simple to use and it’s based on the Active Record pattern, the same pattern used in Rails.

But for the moment let’s use standard ADO.NET code:

using System;
using System.Collections.Generic;
using System.Text;
using System.Data.SqlClient;
using System.Configuration;

namespace ZooApplication.Library
{
    public class LoginModel : ILoginModel
    {
        public bool Authenticate(string emailAddress, string password)
        {
            SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["ZooDB"].ConnectionString);
            SqlCommand cmd = new SqlCommand(
                "SELECT count(EmailAddress) FROM [Profiles] " +
                "WHERE EmailAddress=@EmailAddress AND Password=@Password",
                conn);
            cmd.Parameters.AddWithValue("@EmailAddress", emailAddress);
            cmd.Parameters.AddWithValue("@Password", password);
            try
            {
                conn.Open();
                return (int)cmd.ExecuteScalar() == 1;
            }
            finally
            {
                conn.Close();
            }
        }
    }
}

The controller

The controller’s job is to figure out how the view should display the model. Therefore the controller has an association with both the model and the view.

  • Add a new class to ZooLibrary and name it LoginController.

We start by defining a constructor that takes an ILoginView and an ILoginModel as parameters.
The Initialize method will be called by the page to instruct the controller to take control over the view and the model.
In the Initialize method we prepare the view to be rendered and subscribe to the events triggered by the view.

using System;
using System.Collections.Generic;
using System.Text;

namespace ZooApplication.Library
{
    public class LoginController
    {
        private ILoginView _view;
        private ILoginModel _model;

        public LoginController(ILoginView view, ILoginModel model)
        {
            this._view = view;
            this._model = model;
        }

        public void Initialize()
        {
            this._view.ErrorMessage = "";
            this._view.BtnLogin.Click += new EventHandler(BtnLogin_Click);
        }

        public void BtnLogin_Click(object sender, EventArgs e)
        {
            if (this._model.Authenticate(this._view.EmailAddress, this._view.Password))
                this._view.RedirectFromLoginPage();
            else
                this._view.ErrorMessage = "Invalid email address or password!";
        }
    }
}


 

Integrating the MVC into the page

When we program against an aspx page, it’s always the page that receives the initial control from the ASP.NET framework.
So it’s the page that needs to instantiate the model, the view and the controller.

public partial class Login : System.Web.UI.Page, ZooApplication.Library.ILoginView
{
    private LoginController controller;

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);

        ILoginModel model = new LoginModel();
        controller = new LoginController(this, model);
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        controller.Initialize();
    }
}
 

Testing

We are now able, with the help of a mocking framework, to test the logic in the model and the controller.

The code here uses NMock, but you can use your preferred mocking framework to implement your tests.
The test for our LoginController looks like this:

public void LoginController_LoginTest()
{
    // 'mocks' is a Mockery instance defined on the test fixture.
    ILoginView view = mocks.NewMock<ILoginView>();
    ILoginModel model = mocks.NewMock<ILoginModel>();
    LoginController target = new LoginController(view, model);

    Expect.AtLeastOnce.On(view).GetProperty("BtnLogin").Will(Return.Value(new System.Web.UI.WebControls.Button()));
    Expect.AtLeastOnce.On(view).GetProperty("EmailAddress").Will(Return.Value("unitEmail@test.be"));
    Expect.AtLeastOnce.On(view).GetProperty("Password").Will(Return.Value("password"));
    Expect.Once.On(model).Method("Authenticate").With(view.EmailAddress, view.Password).Will(Return.Value(true));
    Expect.Once.On(view).SetProperty("ErrorMessage").To("");
    Expect.AtLeastOnce.On(view).Method("RedirectFromLoginPage");

    target.Initialize();
    target.BtnLogin_Click(view.BtnLogin, null);

    mocks.VerifyAllExpectationsHaveBeenMet();
}

I hope this introduction to the MVC pattern has been useful to you.

 
