Monthly Archives: September 2008

Web service interoperability with a contract-first approach

We just solved an interoperability problem for one of our customers who was trying to interoperate between Java and .NET. He was using JAX-WS to generate his Java proxy code and WCF (svcutil) for the .NET proxy. The customer had worked with a contract-first approach and had tested his contract through JAX-WS. When he tried to generate the proxy classes for his .NET app using svcutil, the generated proxy class was missing all the collections. He also got a runtime error when trying to serialize some messages.

We rapidly found the cause of the missing collections. The customer had used maxOccurs="unbounded" in his WSDL, which is not supported by the DataContractSerializer used by svcutil. We replaced the unbounded value with 9999999 and suddenly the collections appeared in his proxy class.
But the runtime error was still there.
After some research we found an interesting article on MSDN describing the subset of WSDL supported by svcutil. Based on the article we discovered that the customer had used a <choice> element in a complexType. This caused svcutil to fall back to the old XmlSerializer in place of the DataContractSerializer of WCF. The XmlSerializer generated some strange constructs in the proxy class, causing runtime errors. So we simply removed the <choice> element and everything worked fine after that.
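To make the two fixes concrete, here is a sketch of the kind of schema fragment involved. The element and type names (OrderList, order, tns:Order) are made up for illustration, not taken from the customer's actual contract:

```xml
<!-- Before (hypothetical names): the <choice> element makes svcutil
     fall back to XmlSerializer, and maxOccurs="unbounded" tripped up
     the DataContractSerializer import in this case. -->
<xs:complexType name="OrderList">
  <xs:choice>
    <xs:element name="order" type="tns:Order" maxOccurs="unbounded"/>
    <xs:element name="error" type="xs:string"/>
  </xs:choice>
</xs:complexType>

<!-- After: <choice> replaced by a <sequence> of optional elements,
     and a large explicit maxOccurs instead of "unbounded". -->
<xs:complexType name="OrderList">
  <xs:sequence>
    <xs:element name="order" type="tns:Order"
                minOccurs="0" maxOccurs="9999999"/>
    <xs:element name="error" type="xs:string" minOccurs="0"/>
  </xs:sequence>
</xs:complexType>
```

The trade-off, of course, is that the reworked schema no longer expresses the mutual exclusion that <choice> gave you; that constraint has to be enforced in application code instead.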

GVD

Unit testing, what else?

 

In the preceding post I stressed why automated testing is so important for the distributed team, but I didn't really specify what automated testing is. Automated testing is not just about unit tests, and not all tests created with a unit testing framework are really unit tests! There are many flavors of, and terms for, automated testing. The following list is by no means exhaustive, but it covers the most common types of automated tests:
 
          Unit testing
Unit tests target a single class or package. They are written by the developer to test one single unit of code and must be isolated from all other components.
          Integration testing
Integration testing tests the integration of the different units of code. In an integration test the different program units are combined and tested as groups in multiple ways. Integration tests can be done with a unit testing framework or with black-box testing tools.
          User acceptance tests
These can be black-box tests or integration tests, but used in the context of the acceptance of the application. They can be performed by the provider before delivery to the customer or by the customer before transferring ownership.
          Black box testing
Usually performed by QA, but black-box tests can also be used by developers who have to change legacy systems that don't have unit or pre-existing integration tests. These tests take an external perspective of the system under test (SUT) to derive test cases. They can be functional or non-functional, though usually functional. The test designer selects valid and invalid inputs and determines the correct output. There is no knowledge of the test object's internal structure.
          Performance/load/stress testing
Although these terms define different techniques, they share the same purpose: to measure the performance of a system, not to find bugs.
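To make the first category concrete, here is a minimal sketch of a unit test using Python's built-in unittest framework. The Cart class is a made-up example, not something from this post:

```python
import unittest

# Hypothetical unit under test: a tiny shopping cart.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class CartUnitTest(unittest.TestCase):
    """Unit test: targets the Cart class alone, isolated from
    every other component (no database, no network, no services)."""

    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add("book", 10.0)
        cart.add("pen", 2.5)
        self.assertEqual(cart.total(), 12.5)

# Run the test case programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CartUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

What makes this a unit test rather than an integration test is precisely the isolation: the moment Cart talks to a real price service or database inside the test, you have crossed into integration testing, even though the same framework is doing the work.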
 
As described in my previous post, testing must occur at every stage of development and by every part of the production unit. Nevertheless, the types of tests used at a specific stage differ. Unit tests will be written during the implementation phase by developers. To avoid long and painful integration phases it is also recommended that developers write integration tests. They should check as soon as possible whether their components integrate well with other already available components. Black-box and load tests should be done after the implementation stage by QA teams. This happens at the end of an iteration when using an iterative process like XP, Scrum or RUP, or before deployment when using a linear process like waterfall.
 

GVD

Automated testing and the software factory

There is a theory in software engineering asserting that a relationship exists between the structure of a development organization and the architecture of the products that this organization produces. Recently a Harvard Business School study showed that distributed teams tend to produce more modular software.
In my opinion, distributed teams produce more modular software not only because the communication dynamics force them to align on interfaces, but also because they realise that their software has to be tested thoroughly.
To enhance the testability of their software they need to make it modular, and modular software is better software!
Nevertheless, testing has an impact not only on the product; it also has a huge impact on the organization of the team.
In this first post I explain why a distributed software factory should practice automated testing and how it affects the team.

 

Producing software is similar to producing consumer goods: both need an efficient assembly line. The assembly line contains several parts: there is a customer team, a product owner, one or more development teams and a QA team. Like in a real factory, these parts of the assembly line can be globally distributed. Every part of the assembly line is settled where its efficiency is optimal, but all these pieces work together to produce working software in one product development stream. They all work with queues, so that the teams can work asynchronously, each maintaining a working speed that does not depend directly on the others.

 

The productivity of a factory is measured by the speed at which products flow from the production line and by the effectiveness of the production line. Contrary to what one may think, the speed at which products flow out of the factory is not the average speed of each part of the production line; it depends mostly on the throughput of the slowest part of the line. So if you want to increase the overall production capacity of a factory, you have to synchronize every part of the production line, making sure that each part works at a constant pace and is synchronized with its neighbours. The worst thing that can happen in an assembly line is that a defect caused in one part of the line is passed on to the next part. The defect will not only cause the part where it is detected to stop; the defective piece also has to be sent back to the part responsible for it so it can be fixed. This desynchronizes the production line and diminishes the overall productivity of the factory.

 

 

 

In our modern industries, when a defect happens in a manufacturing process, engineers will try to find the root cause of the problem and change the production process so that the defect can't happen again. The way engineers do this is by incorporating automated testing devices into their production process. Software automated tests serve the same purpose as the automated testing devices in manufacturing. The automated tests are not made merely to detect malfunctions but to prevent defects from occurring. The real value of our automated tests is not that they can detect defects but that they keep defects from happening!

 

 

Type-G Toyoda Automatic Loom

 

The Type-G Toyoda Automatic Loom, the world’s first automatic loom with a non-stop shuttle-change motion, was invented by Sakichi Toyoda in 1924. This loom automatically stopped when it detected a problem such as thread breakage.

 

 

 

Another core concept of modern manufacturing is Just-in-Time. "Just-in-Time" means making only "what is needed, when it is needed, and in the amount needed." Supplying "what is needed, when it is needed, and in the amount needed" according to the production plan eliminates waste, inconsistencies, and unreasonable requirements, resulting in improved productivity. The worst enemy of Just-in-Time is stock. The overall Just-in-Time process is meant to eliminate every unnecessary stock. Stock means money invested, and it takes up a lot of space, resulting in more costs. Stock also hides inefficiency in the assembly line.

 

Unfinished features are what best represent stock in a software factory. By unfinished features we mean code that is not delivered to the customer. The main reasons why code is not delivered to the customer are that we don't know if it works or, worse, that we know for sure it doesn't work because it's buggy. Developers' automated tests help us reduce the amount of unfinished features. They shorten the QA cycle and the release cycle and help us answer the question: is this piece of code done?

 

 

Improve the throughput

It’s true that automated tests increase the initial cost of the development part. In my experience, writing automated tests tends to cost about the same amount of time as writing production code. But this cost is more than regained because the other steps of the production process shorten. The number of defects detected by the QA team falls drastically, and a lot of time is won because the QA team can work faster. Even the overall throughput of the development teams increases in the end, because time is no longer lost correcting the many defects detected by the QA team. The project manager becomes far better at estimating the project status. In the end the trust of the customer increases, because they get features built at a constant pace and because the overall delivered quality increases.

 

  

GVD