Fear of needles, known in medical literature as needle phobia, is the extreme fear of medical procedures involving injections or hypodermic needles.
Wikipedia
Dependency injection properly decouples interdependent components and improves the design of an implementation, making maintenance and testability easier. This post is intended to explain dependency injection through a practical case on testing, with an implementation using a specific Java framework (Guice); other frameworks can do the same.
What is dependency injection?
If my 5-year-old son asked me what dependency injection is, I would explain it like this: if I need to go somewhere in the city, I open my Uber application and provide some basic information, my position and my final destination. Uber provides a car that comes to my place and drives me to my destination. So basically the complexity of finding the car is hidden from me; as a client I don't care about the technical details, I just need the service.
At this point some definitions :
Client : Me
Injector : Uber
Service : the car that will transport me
Interfaces : The “contracts” or technical conditions between Uber and the Cars
Dependency injection defines a border between the client and the service. This is important because it provides the flexibility to plug in other services; the only condition is that the new service(s) must comply with the contracts, or interfaces.
Case #1 : Uber
Car : The service
/**
 * Service : the car that will transport me
 */
public abstract class Car {
    /**
     * Contract
     * @param myCurrentSituation current position of the client
     * @param finalDestination final destination of the client
     */
    public abstract String transport(String myCurrentSituation,
                                     String finalDestination);
}
public class FordFiesta extends Car {
    /**
     * Contract
     * @param myCurrentSituation current position of the client
     * @param finalDestination final destination of the client
     */
    @Override
    public String transport(String myCurrentSituation,
                            String finalDestination) {
        return "<FordFiesta> takes the client from "
            + myCurrentSituation + " to " + finalDestination;
    }
}
public class Tesla extends Car {
    /**
     * Contract
     * @param myCurrentSituation current position of the client
     * @param finalDestination final destination of the client
     */
    @Override
    public String transport(String myCurrentSituation,
                            String finalDestination) {
        return "<Tesla> takes the client from "
            + myCurrentSituation + " to " + finalDestination;
    }
}
Service + Interfaces : Binding
import com.google.inject.AbstractModule;
/**
 * Uber has several Ford Fiestas available close
 * to the initial position of the client
*/
public class UberPoolOfFordFiestaCars extends AbstractModule {
@Override
protected void configure() {
bind(Car.class).to(FordFiesta.class);
}
}
import com.google.inject.AbstractModule;
/**
 * Uber has several Tesla cars available close
 * to the initial position of the client
*/
public class UberPoolOfTeslaCars extends AbstractModule {
@Override
protected void configure() {
bind(Car.class).to(Tesla.class);
}
}
The Client using the injector
import com.google.inject.Guice;
import com.google.inject.Injector;
import org.junit.Assert;
import org.junit.Test;
public class CarTest {
    @Test
    public void transport_On_Ford_Fiesta() {
        Injector injector
            = Guice.createInjector(new UberPoolOfFordFiestaCars());
        // The client asks for the abstraction, not the concrete car
        Car car = injector.getInstance(Car.class);
        String output = car.transport("my home in Mechelen",
            "Ancienne Belgique in Brussels");
        Assert.assertEquals("<FordFiesta> takes the client from "
            + "my home in Mechelen to Ancienne Belgique in Brussels", output);
    }

    @Test
    public void transport_On_Tesla() {
        Injector injector
            = Guice.createInjector(new UberPoolOfTeslaCars());
        // The client asks for the abstraction, not the concrete car
        Car car = injector.getInstance(Car.class);
        String output = car.transport("my home in Mechelen",
            "Aeroport Zaventem in Brussels");
        Assert.assertEquals("<Tesla> takes the client from "
            + "my home in Mechelen to Aeroport Zaventem in Brussels", output);
    }
}
Case #2 : Real case on Testing
A client wanted to automate an application that calculates different loan-credit scenarios. In some cases the taxes and the business logic change as a function of time. Our automated tests were using only the system date-time, so the challenge was to change the current time to simulate scenarios running at different moments. Dependency injection was useful to isolate the complexity of choosing between different time providers.
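As a sketch of that idea, here is a minimal example using plain constructor injection and java.time.Clock rather than Guice (the class name and the tax rule are hypothetical, invented for illustration): the calculator receives its time provider from outside, so a test can inject a fixed clock to simulate any date.

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

// Hypothetical service: the tax rate depends on the current date.
class LoanCalculator {
    private final Clock clock; // injected time provider

    LoanCalculator(Clock clock) {
        this.clock = clock;
    }

    double taxRate() {
        // Pretend, for the example, that the tax rate changed in 2020.
        return LocalDate.now(clock).getYear() >= 2020 ? 0.21 : 0.19;
    }
}

public class TimeInjectionDemo {
    public static void main(String[] args) {
        // Production code would inject Clock.systemDefaultZone();
        // a test injects a fixed clock to simulate another moment.
        Clock in2019 = Clock.fixed(Instant.parse("2019-06-01T00:00:00Z"),
                                   ZoneId.of("UTC"));
        Clock in2021 = Clock.fixed(Instant.parse("2021-06-01T00:00:00Z"),
                                   ZoneId.of("UTC"));
        System.out.println(new LoanCalculator(in2019).taxRate()); // 0.19
        System.out.println(new LoanCalculator(in2021).taxRate()); // 0.21
    }
}
```

With Guice, the same effect is obtained by binding Clock in a module, exactly like the Car binding above.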
"A civilization begins with myth and ends in doubt." Emil Michel Cioran, La Chute dans le temps
The question comes again and again, and it looks like it will never stop. What is testing? What is a tester? What is the ultimate goal of the tester? What is the goal of testing? This post is about the myths of testing.
What is a tester?
A tester is the person who knows and understands the application.
The tester understands the business expectations and has an overview of the technical implementation. A tester is "the" person who foresees the business and technical impact on the application; the tester will point out impacts that nobody on the team had imagined. The business stakeholders will ask him and listen to him, and so will the developers, who need to be assured of which component(s) will be impacted by a new feature. I often think the tester is "The Keymaker" in The Matrix: he is not the architect or a developer of the system, but he knows all the back doors and all the keys to the system. You can ask him for any shortcut in the system, and he will give you the key and the exact door.
The Keymaker – The Matrix Reloaded (2003)
The Keymaker : There is a building. Inside this building there is a level where no elevator can go, and no stair can reach. This level is filled with doors. These doors lead to many places. Hidden places. But one door is special. One door leads to the Source.
The Keymaker : Only the One can open the door. And only during that window can that door be opened.
Niobe : How do you know all this?
The Keymaker : I know because I must know. It is my purpose. It is the reason I am here. The same reason we are all here.
Trinity : Where are you going?
The Keymaker : Another way. Always another way.
The Keymaker : If one fails, all fail.
What is Testing?
Testing is the effort to become a tester.
If at this line you think the previous sentence is wrong because I didn't mention "quality" or "bugs", let me complete the thought. Understanding all the complexity of evolving system requirements, the resiliency and inertia of the application, and all the teams, systems, variables, configurations and actors involved in the ecosystem makes the task difficult. But not impossible. Accomplishing a full understanding of the system requires a methodology, a conceptual framework that helps the tester to... become a tester. The result of all this activity is an improvement in the team's reactivity in the search for quality: finding errors and avoiding them.
What is a test?
It is a statement that is a thesis for the user, and a hypothesis for the tester.
The test is a dynamic, transitive and deterministic point of discussion between all the actors, and it enriches the communication between the members of the team. It has value as a document about a small part of the application. All the tests of an application together are intended to describe the complete functionality of the system. In theory, if you give your test set to any person, they should be able to understand the expected behavior of the system.
Try a shot! If a new member arrives on your team, give him/her the test set of your application. Ask for feedback about his/her view of the application after reading the tests. Use this information to fine-tune your test set.
Myth #1 : Testing will provide quality
Quality is not provided by the testing activity. Testing can help, and it is true that it is an important factor in achieving quality, but quality is inherent to the application and all the actors involved. The challenge for the tester is to proactively help the team find and avoid bugs.
Myth #2 : Testing will fix the system
Systems with severe quality problems frequently have them because the cause of the problem is structural. And structural problems require structural solutions. Providing a testing framework (methodology and tools) is part of the solution, and its applicability depends on the test maturity of the team.
Myth #3 : My developer team has unit/integration tests, this is my testing framework
Unit tests are important for verifying unit functionality in evolving code, and they help you redesign with assurance about the impact on the different methods and objects of the system. Integration tests, in the same way, validate the interaction between different components. But this is not a testing framework.
A testing framework is a set of: a methodology; a description of processes and interactions; conceptual definitions of what "quality", "incident", "bug", "priority", etc. mean for an application; qualitative and quantitative indicators... and 1% tooling.
On the other hand, there are many different kinds of tests to perform: functional testing, performance testing, load testing, monitoring, etc.
Myth #4 : We don't need a testing framework
I must admit that, after working 15 years in testing, I love working on projects with this mindset. It is a beautiful challenge to convince people and organizations, break paradigm paralysis, and show the benefits of testing as a catalyst in the search for quality.
Myth #5 : We do DevOps : Our testers are our developers
DevOps is a healthy evolution of Agile. In the Agile I practiced 15 years ago, functional/performance/load testing was left out of the iteration, and so were other key players. The Agile approach limits itself to unit/integration testing in the context of continuous integration. DevOps is a proposal to enlarge the circle of the team, not only to the testers but also to other players: infrastructure, database engineers, architects, etc.
There is a misunderstanding in some conceptions of DevOps: the supposition that the developer will take the place of the testers and everybody else.
Summary
The industrial revolution has profoundly transformed our society since the beginning of the 20th century; the influence of Taylorist thinking on line assembly has shaped our way of solving technological problems, with relative success. This article is a reflection on the impact of the Taylorist methodology on the software industry up to the present day: the evolution of the assembly line from the beginning of the last century to DevOps.
Methodology
There is a tendency to confuse methodology with particular tools and processes. One definition of methodology states that it is the set of processes of logic and inference used to carry out a specific task.
Unlike the use of tools and specific knowledge, methodology has a transversal character. It is closer to a state of mind or a philosophy, and deserves that status more than any other. The spirit of a methodology must be open and in permanent questioning; it is also a body of knowledge that enables the improvement and optimization of processes, and, like any philosophy, it is contrary to every dogmatic principle.
Our unequivocal reference for a methodology, in this case a work methodology, is Taylor's study of work. Our present society as we know it is deeply impregnated with Taylorist processes: specialization of labor, line production, time studies, etc. Since the beginning of the 20th century, human enterprise, in almost all its specializations, has been guided by this methodology, and its influence persists to this day.
Taylor's scientific definition, "systematic division of tasks, rational organization of work in its sequences and processes, and the timing of operations, plus a motivation system of performance bonuses, suppressing all improvisation in industrial activity", is not applied only on the assembly lines of every industry; society applies this concept in the different branches of human activity: hospitals, the hierarchical distribution of workers, etc.
Build failed : Taylor
In the IT world, Taylor is present. If we analyze the methodological processes of software development, the waterfall method is the representation of the assembly of a car on the production line.
In the software industry, this methodology of development and design has demonstrated, over more than 30 years, its failure to satisfy demanding quality and productivity requirements. Independently of the intangible attributes of software, the Taylorist conception is inadequate for software development.
On the assembly line conceived by Taylor, all the interactions needed to assemble the different parts follow an exact, finite schema and pattern, which guarantees homogeneity in the quality of the final product: for the same model of final product, there is no possible variation in the set of assembled parts. On the software assembly line, there is no defined schema, since the parts being assembled are never the same (variability); to produce the final product (the build), the assembly line in software development is unique, ephemeral and irreproducible.
Every software component has an intangible complexity; the business logic is so particular that it changes from client to client within the same sector of activity. There is, however, one constant that is subjective and permanent: quality.
A superficial analysis might reduce the difference between the typical assembly line and the software one to the intrinsically intangible nature of software. Unlike an industrial part, which has characteristics of weight, dimensions, etc., whose measurement is the basis of quality for a typical product, the quality of a software component is measured against the software itself, that is, against the intangible values of its users as transcribed in a functional analysis document.
"Out of the Crisis" by W. Edwards Deming was written at the height of Taylorism. Deming proposes something different from Taylor when he speaks of transforming management (his principles for the transformation of Western management), and the parallel with Agile is undeniable when he speaks of breaking down the barriers between departments and between work teams, and of institutionalizing a culture of work and self-learning.
Agile
In the last 20 years, emerging methodologies have tried to improve productivity in industrial software development. For the Agile methodology, the epicenter of the whole production process is the human being. And the proposal is audacious, because the human being is the most flexible and versatile actor on that production line, but also the most unstable. The Agile methodology proposes reducing to the necessary minimum all the processes that surround the true objective: quality. The primordial values are communication, responsibility, personal initiative and, evidently, day-to-day improvisation.
Agile is not a perfect methodology. While it proposes reducing the "noise" and the administrative processes that the development, testing and infrastructure teams face, Agile gives no explicit explanation of how to put it into practice. And that is normal: it is a methodology; it is the process engineers who are called to optimize the human being with the tools and the procedure. That is the challenge of Agile, that is its difficulty. Its values are:
Individual proactivity and interactions over processes and tools
Working software (the BUILD) over extensive documentation
Collaboration with the client over negotiation
Improvisation over following a plan
What stands out in these values is the importance of the human interaction that must produce the executable software (the BUILD); the sole objective of this dynamic is to optimize all the processes in order to find the most optimal process line.
DevOps
The 2000s were an important stage in the software industry's recognition that the only way to develop software correctly, in terms of quality and productivity, is to move away from the Taylorist method. The adoption of Agile was a step in that direction, but remember our definition of methodology: the absolute rejection of dogmatic values. Agile shows that it is not a perfect methodology; its conception oriented around programming code is its weakest point. The absence of the concept of functional testing, the lack of involvement of other services (testing, infrastructure, performance, monitoring), and an organic quality system are simply overlooked.
The pressure to deliver working, high-quality software forced technologists to solve this challenge; the answer was formally presented at a conference in Belgium in 2008: DevOps, an abbreviation of Development and Operations. This methodology is much more organic and systemic than Agile. It is not just a natural evolution of Agile; it is a whole philosophy around the Lean concept. While the equilibrium point of this methodology rests on the common values of Agile, it is an important evolution in the conception and support of software development.
DevOps is the effort to integrate into a single Agile cycle or iteration the processes and operations that follow the development process: testing, operations, infrastructure and support.
Requirements for applying DevOps
It is a logical question, and the answer is complex because of the degree of commitment required, as well as the technological skills of the different teams: development, testing, infrastructure, build & deployment specialists. This is what is known as "ALM maturity"; a team should sit between the levels of "Functional Competence" and "Functional Excellence".
In addition, a team that aspires to apply DevOps must have:
Proven Agile experience in the teams: being an Agile team is not a goal in itself, it is a constant effort to achieve team agility. It demands a lot of effort, and concretely the team must have maturity in:
Automated unit and integration tests
Automated build and deployment
Optional: automated regression and acceptance tests
A solid infrastructure and operations team with automation experience
A dedicated testing team
A horizontal hierarchical structure that allows the elimination of silos and communication gaps
There is no recipe that guarantees the success of DevOps. The hardest part is not the technological aspect but the conceptualization and continuous questioning of the methodological aspect. Once again, it is the methodology that sets the line to follow.
Summary
Taylorism is a work methodology that is not optimal for software development: the work cycles are long and rigid, there is no concept of iteration on the production line, and the client, standing at the end of the line, does not actively participate in the product development process.
Agile is a software development methodology that proposes short development cycles and incorporates the client into the validation of quality. Agile is a methodology centered on the developer (or programmer) and the client. One of the weak points of this approach is the fact that testing and support operations are not included in the cycle or iteration (sprint).
DevOps is the attempt to bring development, testing and operations into the same cycle; automation is very important to reduce variation in the process. Today DevOps is a good methodological option for the development, testing and deployment of software applications.
Light creates understanding, understanding creates love, love creates patience, and patience creates unity
Malcolm X
Efficacy from analysis to execution is required to conceptualize, design and successfully execute an automated test. In this article I propose a guideline to meet the challenge of creating an automated test.
Dissection
Life cycle of the test
The list of business or technical requirements describes the expectations of the system, with a certain degree of precision (or not). The tester, the other stakeholders, and the processes involved in the application life cycle will clarify this list of requirements.
Requirements will certainly evolve over time; the symbiosis between tests and requirements defines the validity and pertinence of the tests and yields more degraded test cases. A test must always be aligned with a unique business or technical goal (the atomic principle of a test), and keep in mind the added value of your test for the client or for the design of your software. Translate it into a simple sentence using the same jargon as your client. This concept is independent of any technical implementation, so forget (at this point) the nature, manual or automated, of the test's execution.
First : Try to execute it manually
Execute and re-execute the test manually to be sure you understand the procedure to execute it. At this point some issues should be revealed :
Which preconditions are required to execute the test?
Is it an atomic test? Each test must be independent of the others
Is it repeatable?
Is it "fast"?
Write down the inputs/outputs required
Is the system under test stable?
Are postconditions required?
Note : What if it is an API test? Try to use a tool to reproduce the calls, for instance Postman or a similar tool
Test Anatomy
Technically, the structure of a test can differ depending on the technology used, but the representation must have the same topology. I write it down here in pseudo-code.
Structure of an automated test
In the same logic, a test belongs to a larger set of tests : the test cycle. The pattern can also be applied to each set of tests.
Structure of a Test Set or Test Cycle
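The original post shows this structure as a diagram; as a minimal sketch, the same anatomy can be written in runnable Java (all the method names here are illustrative, not from any framework): preconditions, then the action, then verification, then postconditions, with cleanup guaranteed even on failure.

```java
// Sketch of the anatomy of a single automated test:
// preconditions -> action -> verification -> postconditions.
public class TestAnatomyDemo {

    static void preconditions() {
        // Prepare state: test data, session, mocks...
        System.out.println("preconditions ready");
    }

    static String action() {
        // Execute the single business/technical goal under test.
        return "expected result";
    }

    static void verification(String output) {
        // Compare the observed output with the expectation.
        if (!"expected result".equals(output)) {
            throw new AssertionError("unexpected output: " + output);
        }
        System.out.println("verified");
    }

    static void postconditions() {
        // Clean up so the next test starts from a known state.
        System.out.println("cleaned up");
    }

    public static void main(String[] args) {
        preconditions();
        try {
            verification(action());
        } finally {
            postconditions(); // always restore state, even on failure
        }
    }
}
```

The same pattern applies one level up to a test cycle: cycle-wide preconditions, the ordered execution of the tests, and cycle-wide cleanup.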
Tips
Identify the factors that could break your automated test and that you cannot control : a change of layout in the interface, a malfunctioning device, the internet being down, network issues, the DB offline, the API backend off, etc. There is no silver bullet for these issues. Isolate them and propose technical solutions that can be described in the preconditions.
Precondition on the test cycle : Validate that all the conditions are ready to start the execution of the test cycle, for example server setup, configuration, etc.
Precondition on the test cycle : Execute a sanity check of the graphic elements that the app must have.
Precondition on the test cycle : Execute the integration and unit tests of your own test framework.
Precondition on the test and/or test cycle : Mock or create virtual services that the test can control.
Center the test on the business goal/technical goal. Reduce the intermediate steps or interfaces that are not part of the test. If you are testing a "shopping cart" and the business goal is to verify that changing the quantity of items updates the price to pay, don't waste your time on the "previous" steps (open a session, browse a category, select different items) before finally testing your shopping cart. That kind of test is designed to fail. Try another strategy :
Precondition on the test : Inject the chosen items into the DB/backend.
Another option : Mock your DB or backend.
Precondition on the test : Create a login session programmatically to avoid the login process.
Precondition on the test : Create a link to "redirect" the test directly to the shopping cart.
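A sketch of that strategy (every name here is hypothetical; a real suite would seed its own backend or a mock): the precondition injects the chosen items directly, so the test goes straight to its business goal.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical in-memory cart standing in for the real backend.
class ShoppingCart {
    private final Map<String, Integer> items = new LinkedHashMap<>();
    private final Map<String, Double> prices = new LinkedHashMap<>();

    // Precondition hook: seed an item without going through the UI.
    void seed(String item, double unitPrice, int quantity) {
        prices.put(item, unitPrice);
        items.put(item, quantity);
    }

    void changeQuantity(String item, int quantity) {
        items.put(item, quantity);
    }

    double total() {
        return items.entrySet().stream()
                .mapToDouble(e -> prices.get(e.getKey()) * e.getValue())
                .sum();
    }
}

public class CartPreconditionDemo {
    public static void main(String[] args) {
        // Precondition: inject the chosen items directly,
        // skipping the login/browse/select steps.
        ShoppingCart cart = new ShoppingCart();
        cart.seed("Le Mythe de Sisyphe", 10.0, 1);

        // The actual business goal: changing the quantity updates the price.
        cart.changeQuantity("Le Mythe de Sisyphe", 3);
        System.out.println(cart.total()); // prints 30.0
    }
}
```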
As I wrote in a previous article, tests must be "clustered" with tests on the same functional axis.
"Doubts are what we have that is most intimate." Albert Camus.
Building software following the principles of TDD (Test-Driven Development) helps assure quality along the whole development process, complemented by integration tests, continuous integration, good practices and methodology. This post is intended to explain this subject.
Software development is not only developing a set of functionality but also delivering software that matches quality requirements. But what is quality? Here is one definition: "In business, engineering, and manufacturing, quality has a pragmatic interpretation as the non-inferiority or superiority of something; it's also defined as being suitable for its intended purpose (fitness for purpose) while satisfying customer expectations." (Source : Wikipedia). Let's keep the concept of "satisfying customer expectations" to explain later the concept of TDD (Test-Driven Development). A "customer" expects from the "software" a set of functionality to accomplish a specific task; if the functionality is suitable for the intended purpose, then, following this premise, that is what we could call quality.
I have a problem with this definition of quality: first, because it is a static definition, and secondly, because a "customer's" expectations never end. Let me explain. The interaction between the customer and the software application evolves over time as a function of the client's requirements. Customer requirements evolve and change all the time; in this dynamic interaction, quality is the reactivity of the software application in delivering, in a suitable way, the functionality on demand.
Test-driven development is a methodology for delivering software that links "quality" and "change"; that is the added value of this practice. The implementation of this methodology is of an awesome simplicity: it uses "an assessment intended to measure the respondents' knowledge or other abilities", in other words, a test.
TDD asks you to write the test before the code implementation (Clean Code, p. 122).
Hands-on!
Practical case: implement software that executes these 4 operations on 2 integers:
Addition
Subtraction
Multiplication
Division (Round to the lower Integer)
First Step : Your test must fail
Let’s write our first test for the Addition.
import org.junit.Assert;
import org.junit.Test;
public class CalculatorTest {
@Test
public void addition() {
Calculator calculator = new Calculator();
Integer result = calculator.addition(7, 3);
Assert.assertTrue(result == 10);
}
}
An object Calculator exposes a method addition
public class Calculator {
    public Integer addition(int i, int j) {
        // No implementation yet: the first run of the test must fail
        throw new UnsupportedOperationException();
    }
}
Run the test addition. Must fail
First step : Your test must fail
Second Step : Implementation, your test must pass
At this point you must complete the method "addition"; in this case it is quite obvious, i.e.:
public class Calculator {
public Integer addition(int i, int j) {
return i + j;
}
}
Third Step : Repeat step 1 & step 2 for the other operations
Your code at this point must look like this.
import org.junit.Assert;
import org.junit.Test;

public class CalculatorTest {

    @Test
    public void addition() {
        Calculator calculator = new Calculator();
        Integer result = calculator.addition(7, 3);
        Assert.assertTrue(result == 10);
    }

    @Test
    public void substraction() {
        Calculator calculator = new Calculator();
        Integer result = calculator.substraction(7, 3);
        Assert.assertTrue(result == 4);
    }

    @Test
    public void multiplication() {
        Calculator calculator = new Calculator();
        Integer result = calculator.multiplication(7, 3);
        Assert.assertTrue(result == 21);
    }

    @Test
    public void division() {
        Calculator calculator = new Calculator();
        Integer result = calculator.division(7, 3);
        Assert.assertTrue(result == 2);
    }
}
All your tests are passing
View in the unit tests
Analysis of existing code
At this point the application provides the functionality required by the client. The tests are OK. Let's look closely at the code written and at the architecture so far.
There is only one object, "Calculator", that encapsulates all the operations. This "simple" design will be the source of problems and headaches on a very short horizon. Why? In the first place, an object in the system must have only one responsibility, and must be a single point for one vector of functionality in the whole system. This concept is known as the "single responsibility principle" (SRP). You can find this concept explained at length on different sites. I am more interested in another concept that enables SRP: empowerment, or delegation of responsibility. In other words, identify the vectors and isolate them in single objects. For our case:
Monolithic design vs. design with single responsibility
At the beginning of any software project, the functionality borders are not clearly defined; creating small/light objects with a single functional value adds flexibility to the architecture. This approach assures the resilience of the final design.
Let’s apply TDD to disaggregate the responsibility from the object “Calculator”
public class Addition {
    public Integer operation(int i, int j) {
        return i + j;
    }
}

public class Division {
    public Integer operation(int i, int j) {
        return i / j;
    }
}

public class Multiplication {
    public Integer operation(int i, int j) {
        return i * j;
    }
}

public class Substraction {
    public Integer operation(int i, int j) {
        return i - j;
    }
}
In the same way, our unit tests are modified to support this refactoring. The object Calculator can be deleted.
Remember to run the test cases after the refactoring
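A sketch of what the refactored checks might look like after the split (written here with plain assertions and a main method so the example is self-contained; in the project they would stay JUnit @Test methods):

```java
// The four single-responsibility classes from above, repeated here
// so the sketch compiles on its own.
class Addition { public Integer operation(int i, int j) { return i + j; } }
class Substraction { public Integer operation(int i, int j) { return i - j; } }
class Multiplication { public Integer operation(int i, int j) { return i * j; } }
class Division { public Integer operation(int i, int j) { return i / j; } }

public class RefactoredTests {
    public static void main(String[] args) {
        // One check per object: each targets a single responsibility.
        check(new Addition().operation(7, 3) == 10, "addition");
        check(new Substraction().operation(7, 3) == 4, "substraction");
        check(new Multiplication().operation(7, 3) == 21, "multiplication");
        check(new Division().operation(7, 3) == 2, "division");
        System.out.println("all tests passed");
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError(name + " failed");
    }
}
```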
Abstractions
Abstractions are flexible structures that help reshape the design of the software in a clean way, not only in the disaggregation of responsibilities. Abstractions are perfect for isolating functionality and enabling unit or integration tests (mocks), etc.
In Java or C#, these abstractions can be <<Interfaces>> or <<Abstract>> classes.
All the objects implement an "operation". The signature is the same for all of them. We reduce the "weight" of each object if we build a higher abstraction around the existing objects.
Extract abstraction of concept Operation
Here is the implementation
public interface Operation {
    Integer operation(int a, int b);
}

public class Addition implements Operation {
    public Integer operation(int i, int j) {
        return i + j;
    }
}

public class Division implements Operation {
    public Integer operation(int i, int j) {
        return i / j;
    }
}

public class Multiplication implements Operation {
    public Integer operation(int i, int j) {
        return i * j;
    }
}

public class Substraction implements Operation {
    public Integer operation(int i, int j) {
        return i - j;
    }
}
Once again, verify that the implementation is correct: run the unit test cases again.
Conclusions
TDD allows us to modify the design of our software. Modifying and redefining software is a normal process in the life cycle of the product: this process is called refactoring. So TDD is a permanent process; it only stops when the software is deprecated.
Single functional vectors in a system (SRP) help to reshape and maintain the code easily, even when the functional borders are not clearly defined by the client. Having small, light structures with low logic capacity makes it possible to change fast and with low impact on the whole system.
Abstractions are structures that have a lot of advantages :
they allow you to implement more loosely coupled designs,
when your design evolves, you can modify your contracts/interfaces and control the consistency of your design in the objects that implement your abstraction
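As an illustration of that loose coupling (a sketch, not from the original post): code written against the Operation interface never needs to know which concrete operation it drives, and a test can substitute a stub with a controlled result.

```java
interface Operation {
    Integer operation(int a, int b);
}

class Addition implements Operation {
    public Integer operation(int i, int j) { return i + j; }
}

public class AbstractionDemo {
    // This method depends only on the contract, not on a concrete class.
    static Integer apply(Operation op, int a, int b) {
        return op.operation(a, b);
    }

    public static void main(String[] args) {
        // A production object...
        System.out.println(apply(new Addition(), 7, 3)); // prints 10
        // ...or a stub injected by a test, with a controlled result.
        Operation stub = (a, b) -> 42;
        System.out.println(apply(stub, 7, 3)); // prints 42
    }
}
```

Since Operation has a single abstract method, a lambda is enough to stub it, which keeps test doubles cheap to write.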
Scenario: Open a book card
Given The user is "albert" and the password "@bsurd"
Given Cleanup any existing item in the basket of the customer
Then Open the app
And Login
And Search for a book "Le Mythe de Sisyphe"
And Open the card
Cucumber is a BDD (Behavior-Driven Development) framework. BDD allows osmosis between the business/functional tester and the automation code. This border defines the responsibility and the communication between the teams around a piece of software.
Behind the lines, you can use any programming language to implement the automation. Let's implement something fast. I strongly advise Java for implementing Cucumber.
0. For this case, IntelliJ IDEA will be used as the IDE, with io.cucumber for Java. Here is my build.gradle:
plugins {
id 'java'
}
group 'groupid'
version '1.0-SNAPSHOT'
sourceCompatibility = 1.8
repositories {
mavenCentral()
}
dependencies {
testCompile 'io.cucumber:cucumber-java8:3.0.2'
testCompile 'io.cucumber:cucumber-junit:3.0.2'
testCompile group: 'junit', name: 'junit', version: '4.12'
}
1. Install the Cucumber for Java plugin in your development environment
2. Add the feature file Book.feature. Pay attention that the file is added inside the test resources folder
3. Add the test inside the feature file
4. Implement the code behind. It is necessary to write the code that allows the automation. Put the cursor on the line that you want to automate, i.e. on the line Given The user is "albert" and the password "@bsurd". Press ALT+ENTER and you will see this dialog window.
5. The code will be generated, to be completed later.
import cucumber.api.PendingException;
import cucumber.api.java.en.Given;
public class MyStepdefs {
@Given("^The user is \"([^\"]*)\" and the password \"([^\"]*)\"$")
public void theUserIsAndThePassword(String arg0, String arg1) throws Throwable {
// Write code here that turns the phrase above into concrete actions
throw new PendingException();
}
}
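The @Given annotation carries a regular expression: at runtime, Cucumber matches each Gherkin line against it and passes the capture groups to the method as arguments (arg0, arg1). The binding can be illustrated with plain java.util.regex:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StepRegexDemo {
    public static void main(String[] args) {
        // The same regex Cucumber generated for the Given step above.
        Pattern step = Pattern.compile(
                "^The user is \"([^\"]*)\" and the password \"([^\"]*)\"$");
        Matcher m = step.matcher("The user is \"albert\" and the password \"@bsurd\"");
        if (m.matches()) {
            // Each capture group becomes a method argument.
            System.out.println(m.group(1)); // albert
            System.out.println(m.group(2)); // @bsurd
        }
    }
}
```

This is why the generated method signature has two String parameters: one per `([^"]*)` group in the expression.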
6. Do the same with the rest of the lines. You will end up with this:
import cucumber.api.java.en.And;
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
public class MyStepdefs {
@Given("^The user is \"([^\"]*)\" and the password \"([^\"]*)\"$")
public void theUserIsAndThePassword(String arg0, String arg1) throws Throwable {
}
@Given("^Cleanup any existing item in the basket of the customer$")
public void cleanupAnyExistingItemInTheBasketOfTheCustomer() throws Throwable {
}
@Then("^Open the app$")
public void openTheApp() throws Throwable {
}
@And("^Login$")
public void login() throws Throwable {
}
@And("^Search for a book \"([^\"]*)\"$")
public void searchForABook(String arg0) throws Throwable {
}
@And("^Open the card$")
public void openTheCard() throws Throwable {
}
}
7. Configure the scenario. This is necessary when the test will be executed in another environment, outside IntelliJ IDEA.
Create the feature
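For execution outside the IDE (for example from Gradle or CI), a common way to wire Cucumber into JUnit with the io.cucumber 3.x dependencies above is a runner class. This is a sketch; the class name and the features path are illustrative and should point at wherever Book.feature lives in your project:

```java
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

// Hypothetical runner class; adjust "features" to the folder containing Book.feature.
@RunWith(Cucumber.class)
@CucumberOptions(features = "src/test/resources", plugin = {"pretty"})
public class RunBookFeatureTest {
}
```

With this in the test sources, `gradle test` will pick up the runner and execute every scenario found under the features path.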
8. Execute the test; it does not matter that nothing is implemented behind it yet
9. The test will pass
10. Obviously you will ask what to automate. We will see this in the next post.
A REST API exposes methods (POST/GET/PUT, etc.) over the HTTP protocol. This makes our life simpler: a REST API enables interoperability in a very simple way.
Before any other step, it is strongly advised to understand the basic principles of HTTP (I'm planning to write something about this later).
Situation
A REST endpoint exposes the list of users of a lambda application; you need to extract this information and find the user "Charles". The endpoint is
If you open the link you will see this JSON structure.
Problem
Get the metadata of user “Charles”
Solution
We will solve this problem using TDD (Test-Driven Development). If you want to write clean and scalable code, use TDD. Besides, to understand or learn anything, use TDD.
0. For this exercise we will use retrofit and apache.httpcomponents to handle the GET call
import java.io.IOException;
import org.junit.Assert;
import org.junit.Test;

@Test
public void findCustomerByName() throws IOException {
    ProxyCustomer customerProvider = new ProxyCustomer();
    Datum customer = customerProvider.findCustomerByName("Charles");
    Assert.assertNotNull(customer);
}
Assume that the interface with the endpoint is the responsibility of ProxyCustomer; this object implements a method findCustomerByName that receives a parameter (the name, in this case). The test validates that at the end of the call an object Datum different from null is returned.
3. Let's create the object ProxyCustomer with a simple method findCustomerByName.
public class ProxyCustomer {
public Datum findCustomerByName(String name)
{
//For the moment we return null
return null;
}
}
and a second object called Datum.
public class Datum{
}
4. Run the test findCustomerByName(). This test must fail.
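The next step (the "green" phase of TDD) is to make the test pass by fetching the JSON and filtering the list of users. Independently of the HTTP layer, the matching logic that findCustomerByName will need can be sketched like this. Note that the firstName field on Datum is an assumption about the JSON structure, labeled as such in the code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class FindCustomerSketch {
    // Hypothetical shape of Datum once the JSON fields are mapped (assumption).
    static class Datum {
        final String firstName;
        Datum(String firstName) { this.firstName = firstName; }
    }

    // Find the first user whose name matches, or return null.
    static Datum findByName(List<Datum> users, String name) {
        Optional<Datum> match = users.stream()
                .filter(u -> name.equals(u.firstName))
                .findFirst();
        return match.orElse(null);
    }

    public static void main(String[] args) {
        List<Datum> users = Arrays.asList(new Datum("Eve"), new Datum("Charles"));
        System.out.println(findByName(users, "Charles") != null); // true
        System.out.println(findByName(users, "Albert") != null);  // false
    }
}
```

Once the GET call is wired in (with retrofit, for example), the body returned by the endpoint replaces the hard-coded list and the original unit test should turn green.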
The essence of the beautiful is unity in variety. W. Somerset Maugham (English writer, 1874–1965)
One common issue underlying the execution of tests (manual and/or automated) is the degree of dispersion of each test.
What is test dispersion?
Test dispersion happens when the tester becomes myopic, in other words too short-sighted to see "the whole picture". Executing and analyzing each test case while knowing its value in relation to the whole application is important to measure the impact on the tested system. Test dispersion increases the time of execution, analysis and maintenance, which leads to an inefficient use of the test resources.
In a medium or large test project with an important number of test cases, it is really easy for the tester to lose the coherence of the total system. It's the same situation for automated frameworks: the test automation engineer needs to dive deep into a specific test case, deal with the information required to execute it and write the code to get it automated and executed, all while knowing the "weight" of this test on the whole system.
Try a discussion with your colleagues around a test cycle with low cohesion: the conversation revolves around the failing test, the context is absent, and it depends on the audience whether more questions are asked about the real value of that functionality. Estimating the impact of this failure requires more interaction between the two testers. This leads to inefficient communication.
Figure 1. Each rectangle represents a test; the color indicates a real business function. This is a representation of test cases not sorted by any functional parameter, i.e. sorted by test case id, alphabetic order, etc.
Causes
Test dispersion is an (un)expected result of the best practices of software testing: tests must be independent, fast, repeatable, always in a deterministic context, etc. Applying these principles to each test case in isolation is an inefficient way to manage the test resources (people, methodology, test framework, etc.). Other causes are:
Weak test design
Writing/executing test cases grouped by a technical component, e.g. database, API, web services, Android phone
Writing/executing test cases sorted by a numeric id or in alphabetic order
Why does using technical components lead to test dispersion?
Technical components are frequently used as references to create bundles of tests. This is a legacy of the developer mindset. Even if the software architecture is clean, this logical layout is not always aligned with a real functional module. Sometimes these technical modules are too transversal, or too small, to contain a real business value. For example:
Databases
API
Web Services
Android Phone
iPhone, etc.
How to increase the test cohesion in my test cycle?
Introduce business cardinality: create business logic bundles of tests inside the test cycle, OR split the test cycle using these business logic bundles.
Eliminate technical cardinality: stop using technical components to create test cycles or sort your tests.
Stop executing tests using arbitrary parameters: test id, alphabetic order, etc.
Figure 2. Test bundle using a functional parameter
The idea here is to create bundles of tests using a functional parameter. Each bundle must have a real functional name and value; in this case, for example, the bundles of tests are related to "customer" or "account".
What to do if a test case is a mix of different functional values? For these kinds of tests it is better to create a completely different bundle id: for example "portal" or "account".
What to do if a test is completely unrelated to any functional bundle? In these cases it's better to ask whether the test case is correctly formulated. If it is, there are two possible options:
Leave the test case like that, or
Create a bundle of test cases with the bundle id "miscellaneous".
Either way this is not an ideal case; it's preferable to rethink the coherency of these isolated test cases.
Advantages
It enriches the language and communication during the planning, execution and analysis of a test cycle: it is more coherent to talk and discuss around functional blocks, and it is easier to analyze the impact of a failing group of tests inside a functional module. In discussions around a test cycle with high cohesion, the testers talk about modules and functionalities; the failing test is only an indicator. The conversation is more productive and measuring the impact involves more parameters.
It creates logical references inside the test cycle and relationships between the bundles.
In reporting, you can isolate the results by bundle.
Hands-on: How to create these bundles in my tests?
A simple way to create these functional blocks is to use some kind of label or tag to identify the tests. There are too many test management tools on the market to talk about each one; in Jira this can be done with the field "Components", or using labels or epics.
If you are using a test automation framework, align the business bundles used in the test management tool with the configuration of your test platform. Try to use the same naming convention in all contexts.
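In a Cucumber setup like the one shown earlier, a bundle can simply be a tag on the scenario; the tag name @customer below is illustrative:

```gherkin
@customer
Scenario: Open a book card
  Given The user is "albert" and the password "@bsurd"
```

The test cycle can then be filtered by bundle at execution time, for example by passing the option --tags @customer to the Cucumber runner, so a single run exercises exactly one functional block.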