
Learn from Java Champion Nacho Cougil: Exploring TDD and Other Advanced Concepts in Modern Software Development

Nacho Cougil is a software engineer from Barcelona and a fan of eXtreme Programming (XP) practices who has been working with Java and other web technologies since before Y2K. He has held different roles in the IT world (‘guy-for-everything’, developer, team lead, project manager, and CTO) at companies in many sectors: energy, e-commerce, startups (some of them unicorns), technology consultancy, and application performance management. Nacho now works as a Principal Software Engineer at Dynatrace, developing software for monitoring applications; he also teaches in the DevOps & Cloud Computing Master’s programme at Barcelona University (UB) and is a Java Champion.

Nacho is always concerned with questions like “How does it work?” and “How can we make it better?”, and loves to share his knowledge with others while learning from them at the same time. Probably because of this, he founded the Barcelona Java Users Group ( https://meetup.com/barcelonajug ) and co-founded the Barcelona Developers Conference (DevBcn – https://www.devbcn.com), formerly the Java and JVM Barcelona Conference (JBCNConf).

Dreamix: You mentioned that you are more focused on teaching teams development practices and approaches (like TDD, CI/CD, feature flags, etc.). In a hypothetical scenario, if you were to build a team yourself, how would that team operate? What technologies would you use?

Nacho Cougil: If it were up to me to create a team, I would look for it to have certain characteristics: to be autonomous and to have the capacity to make decisions and carry them out, both from an operational point of view (e.g. deployment to production) and from a technological point of view (e.g. use of a new framework). From the same angle, I would bet on the team designing solutions based on XP principles (simple design, refactoring, continuous integration, etc.) and putting all the focus on delivering working code, i.e. delivering value from minute 0. I would add that building a team is not as easy as it seems: people are complex, and in the end we have to fit together. Many times this is left aside, yet leaving the ego aside, accepting other opinions and collaborating with others is as important (or more so?) as the tools or the technology stack we use.

As for which technologies I would use, I would probably answer first with the typical standard answer: “it depends” 😉. Obviously, multiple factors come into play (e.g. whether an existing solution provides something similar and can be adapted vs. building from scratch, the knowledge/capabilities of the team, who would maintain it afterwards, the objective of the project, medium/long-term support, open source or proprietary, licensing costs, etc.) that in one way or another would help narrow down which technologies I could or could not use and under which conditions. If we were talking about building a “standard web project”, I wouldn’t go for anything out of the ordinary: I would use open-source technologies on both the client side (JavaScript) and the server side (Java). To be more specific, I would probably use frameworks like Vue or React on the frontend and Quarkus or Spring Boot on the backend.

Dreamix: You mentioned that you favour simple design and maintainable solutions, and who doesn’t 😀 Can you describe at a high level an architecture you would follow if you had to implement an application with a modern frontend, a backend and a relational database that should support medium user loads (5-10k monthly active users)?

Nacho Cougil: I am not a big fan of designing “big” architectural solutions without incremental iterations, that is, iterations you can run to verify that each of the delivered functionalities can be “triggered” and does what you expect it to do.

That said, on the basis of the above premises, I do not think it is necessary to design an architecture much beyond:

– building a stateless API (that can scale based on the number of active requests/users),

– a cache system (+ reverse proxy) that avoids hitting the backend for static web content, and

– (perhaps) an in-memory cache system to access/serve the most common and fastest content from the backend (should this prove necessary).

From this point of view, we should of course ask ourselves what kind of use cases we will have to implement, to identify the type of APIs we need to build (gRPC or REST?) or whether an event-driven model would be even more suitable for the business model/type of business we intend to develop, and how we will support it with a relational database model.
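For illustration, here is a minimal editorial sketch (not from the interview) of what such a stateless, cache-friendly endpoint could look like, assuming Spring Boot; the class, endpoint and cache below are hypothetical:

```java
// Hypothetical stateless Spring Boot endpoint: no HTTP session or per-user
// server-side state, so any instance behind a load balancer can serve any request.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import org.springframework.http.CacheControl;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/products")
class ProductController {

    // Illustrative in-memory cache ("should this prove necessary"); in a real
    // system this could be Caffeine, Redis, etc.
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

    @GetMapping
    public ResponseEntity<List<String>> topProducts() {
        List<String> products = cache.computeIfAbsent("top", key -> loadFromDatabase());

        // The Cache-Control header lets the reverse proxy / CDN answer repeated
        // requests without hitting the backend at all.
        return ResponseEntity.ok()
                .cacheControl(CacheControl.maxAge(5, TimeUnit.MINUTES).cachePublic())
                .body(products);
    }

    private List<String> loadFromDatabase() {
        return List.of("product-1", "product-2"); // placeholder for a real repository call
    }
}
```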

Dreamix: What would be your choice of tool to implement CI/CD? Are you sticking to Jenkins, or maybe GitHub Actions workflows or GitLab CI/CD? Also, do you have recommendations on the steps and on what a good CI/CD pipeline should look like for a modern Java/Spring application?

Nacho Cougil: I have no particular preference for working with any CI/CD system, as long as it allows me to maintain a smooth and continuous integration and delivery flow. That is, for me it is just a tool that has to allow me to “define what the team needs to deploy changes”, in short, no more and no less. Obviously, all the tools you have mentioned serve this purpose, so from there, the one that offers the least friction and the most agility to the team (following company policies, etc.) would be the one I would use.

I’ve used all of them, and perhaps the one I’ve liked most is GitLab, because of its ease and flexibility, because its documentation is very well maintained (especially compared to that of Jenkins, for example), and because most things work the first time and without problems.

For me, a good CI/CD system has to be the support for deploying changes to any environment the team needs with minimum effort (verifying that everything is correct, etc.). So how the pipeline is defined, whether you have to add one step more or less, whether you have to define the access keys to a git/secrets repository here or there, or whether you have to touch a .yaml file or go to a UI for it, I don’t see as important (as long as the team knows/learns how to do it).

However, perhaps what I would emphasise above all is that such a system should be robust and scalable (that was a given, wasn’t it? no weird limitations with runners, etc.) and that the team should be aware that there are always scenarios nobody thinks about, such as how to roll back changes to anything deployed or how to manage changes on old versions, which -I humbly think- should be almost transparent to the CI/CD system (in the end, a change to code X or Y should follow almost the same deployment flow unless there are insurmountable dependencies that could not be eliminated).

Dreamix: On the side of simplicity, what are your thoughts on monoliths vs microservices? Would you consider starting a new application as a microservices one from day 1?

Nacho Cougil: I don’t think microservices are “the solution” to everything, but they clearly make sense in some areas. For example, I think they fit very large applications and companies where the problems of evolution, management and communication are so great that microservices become necessary; but, in my humble opinion, starting to develop a microservices-oriented application may not always be the most appropriate thing to do. Among other things, the development, deployment, testing (oh gosh!) and evolution of a microservices-oriented architecture is a challenge compared to the speed and agility that development teams can have when working with a monolith.

It seems that a monolith sounds “old” and even “bad” from a development point of view, but in my experience it is not bad at all. In fact, I have worked on monolith projects in which, once correct boundaries had been defined, with clear limits and rules that avoid incorrect couplings, circular dependencies, etc., the results were good. It does indeed require extra effort at the Developer Experience level, but doesn’t working with microservices require it too? How many microservices-oriented projects are just as efficient at developing, testing and deploying changes in such an agile way as a project with a monolith architecture?
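For illustration (an editorial sketch, not an example from the interview), boundary rules like the ones described above can be enforced in a monolith’s build with a library such as ArchUnit; the com.example.shop packages below are hypothetical:

```java
// Architecture tests that fail the build when couplings or cycles creep in.
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;
import static com.tngtech.archunit.library.dependencies.SlicesRuleDefinition.slices;

import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import org.junit.jupiter.api.Test;

class ModuleBoundariesTest {

    private final JavaClasses classes =
            new ClassFileImporter().importPackages("com.example.shop");

    @Test
    void billingDoesNotDependOnShippingInternals() {
        // A "clear limit" between modules: billing may not reach into shipping internals.
        noClasses().that().resideInAPackage("..billing..")
                .should().dependOnClassesThat().resideInAPackage("..shipping.internal..")
                .check(classes);
    }

    @Test
    void modulesAreFreeOfCircularDependencies() {
        // Each top-level package is a slice; cycles between slices fail the test.
        slices().matching("com.example.shop.(*)..").should().beFreeOfCycles()
                .check(classes);
    }
}
```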

Dreamix: We know that there are general guidelines for doing TDD, but what exactly is your practical approach? Do you have advice for people who want to learn it? What are the biggest challenges you see when introducing teams to a TDD approach?

Nacho Cougil: In my personal opinion, the best way to start is in a safe space, either in personal projects or in Communities of Practice (CoP), where it is not a problem if you get “stuck” or blocked at some point. Practising and applying TDD correctly requires a set of prior knowledge and skills, including knowledge of the language, software engineering and design principles, and good use of other tools (e.g. the IDE, git, etc.). That’s why I always recommend starting with personal projects, katas, Meetups or community events where you can play with others, practising and learning while minimising the risks of working in business environments, where there are compromises to be made, difficult problems to solve, or issues you may not be able to resolve. Better to start small. Once you have picked up the mechanics, seen how to apply simple design principles, understood and applied the different schools (and know when to follow one or the other), seen what mistakes you tend to make and how to fix them, done it at least in pair/mob-programming with someone else, and done it for a sufficient time, then and only then would I consider trying it in your team at your company, as long as the team is interested in trying it and practising it in a natural way.
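As a taste of those mechanics (an editorial sketch, not Nacho’s code), here is a first red-green cycle from the classic FizzBuzz kata, using JUnit 5:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class FizzBuzzTest {

    @Test
    void returnsTheNumberItselfWhenNotDivisibleByThree() {
        // Written first: this is red until the FizzBuzz class exists.
        assertEquals("1", new FizzBuzz().say(1));
    }

    @Test
    void returnsFizzForMultiplesOfThree() {
        assertEquals("Fizz", new FizzBuzz().say(3));
    }
}

// The simplest code that makes the tests pass; refactoring is the third step.
class FizzBuzz {
    String say(int number) {
        if (number % 3 == 0) {
            return "Fizz";
        }
        return String.valueOf(number);
    }
}
```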

To learn, I usually recommend starting by practising, as I said: doing katas, going to community events and practising with others, and at the same time reading at least one of the following books:

– Test-Driven Development: By Example (Kent Beck)

– Growing Object-Oriented Software, Guided by Tests (Steve Freeman & Nat Pryce), or

– Test-Driven Development: A Practical Guide (David Astels)

In addition, there is a lot of online content on YouTube, such as Dave Farley’s channel, Jason Gorman’s or James Shore’s, which offer specific courses to get you started with TDD in Java or JavaScript.

Finally, as for the main challenges I have encountered when trying to introduce TDD in teams, phew… I have encountered everything 😅, but perhaps the most common is the lack of interest or motivation to use it, to see the need to learn its mechanics, or even to see the advantages of using it. Lack of motivation or “rejection by default” does a lot of harm when trying to get a team to see TDD as valuable, or even to put it on the table as a possible practice to use in their daily work. There are many prejudices about the use of TDD (“I already tried it and it’s not for me“, “in this context it won’t work“, “with this codebase it will be impossible, this is not like a kata“), and fighting against that is the first of the barriers… even before you have really started.

Dreamix: What is your approach to writing integration tests? Are there rules that you follow when writing such tests?

Nacho Cougil: Well, my rule of thumb is to keep them to the minimum number possible and essential, in such a way that they allow us to ensure that, let’s put it this way, “the pieces we use with the framework we are working with” are well connected. For me, every integration test has to serve a purpose, and we should avoid using them indiscriminately: if, for example, we already have a test that covers a given verification, another one is not necessary. A classic example is cross-layer testing, where the integration between the DB, the services/use cases and the controller/REST/web layer is tested repeatedly. The question I always ask myself is: is it necessary to always test the same thing in each of these end-to-end layers? Maybe not, and if so, maybe it is possible to transform integration tests into unit tests between components, which can be executed, adapted and improved more quickly. I am not saying integration tests are unnecessary, that is not true; what I do advocate is minimising their use as much as possible, so that they fulfil their function but not repeatedly. For example, a set of integration tests can prove that the persistence layer works correctly, and it is then not necessary to launch more integration tests from the business layer that connect to the persistence layer again and re-trigger the same logic over and over.
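As an illustration of that transformation (an editorial sketch; OrderRepository, OrderService and Order are hypothetical names), the business rule below is exercised by a fast unit test with a test double, leaving the database to a single persistence-layer integration test elsewhere:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

class OrderServiceTest {

    @Test
    void totalsOnlyPaidOrders() {
        // The repository contract is already covered once by a persistence
        // integration test; a mock is enough to exercise the business rule.
        OrderRepository repository = mock(OrderRepository.class);
        when(repository.findByCustomer("alice"))
                .thenReturn(List.of(new Order(100, true), new Order(50, false)));

        OrderService service = new OrderService(repository);

        assertEquals(100, service.totalPaid("alice")); // fast, no DB needed
    }
}

interface OrderRepository {
    List<Order> findByCustomer(String customer);
}

record Order(int amount, boolean paid) {}

class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    int totalPaid(String customer) {
        return repository.findByCustomer(customer).stream()
                .filter(Order::paid)
                .mapToInt(Order::amount)
                .sum();
    }
}
```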

Dreamix: What do you think about the idea of doing TDD with integration tests first to avoid additional refactoring of unit tests?

Context: I have heard from people struggling with TDD that writing unit tests first and then code often leads to double refactoring: when they extract or change the code, they need to do the same to the unit tests. So the idea is to write integration tests that check the end result of system behavior without caring about the specific implementation, reducing the need to refactor non-integration tests.

Nacho Cougil: I see no harm in building an integration test to help you “steer” your development. In fact, I always try to start from an acceptance test that allows us to trigger the functionality we want to develop and, from then on, design, step by step, the solution we need to satisfy that need, driven by our tests. This is what is called the double loop of TDD: starting with an acceptance test that stays red until we have developed the functionality, and that helps us “move” towards the internals of our solution, designing other tests and components, building the code that solves the functionality, and so on.

Now, I don’t quite understand this approach of writing an integration test to save me a refactor, when refactoring, as such, should be an ongoing activity, not only for our production code but also for our tests. If it’s a matter of “saving changes that I already foresee I’m going to have to make in my unit and integration tests“, then the two tests are highly coupled (which, we know, is a bad idea). Something at the design level doesn’t feel right if you happen to be testing the same thing from 2 different points of view.

The question then becomes what can be done to avoid this repetition, and whether the code can be exercised end to end before anything else. One of the advantages of TDD is that it is a tremendously effective practice for delivering value in an agile and continuous way. For example, you can start with a test (the most external to your system), so that iteratively you can evolve and deliver value by going “inside” your solution. You can follow a step-by-step approach (illustrated in the sketch after this list) such that:

– 1st, a hardcoded result is returned,

– 2nd, the value is obtained from another hardcoded piece,

– 3rd, a flexible result is connected to a data repository,

– 4th, it is externalised and connected to a database, etc.

and all this… designed by tests, which in turn can be committed and whose code can be pushed to production without problems, because, of course, the code is covered by tests! Isn’t TDD wonderful 🙂?
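Here is a minimal editorial sketch of the first two of those steps, driven by one outer test (GreetingApi and the other names are hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class GreetingApiTest {

    @Test
    void greetsARegisteredUser() {
        GreetingApi api = new GreetingApi(new InMemoryUserNames());
        assertEquals("Hello, Ada!", api.greet("user-1"));
    }
}

// Step 1 was simply `return "Hello, Ada!";` (hardcoded, test green).
// Step 2, below, obtains the value from another piece, still in memory.
class GreetingApi {
    private final UserNames userNames;

    GreetingApi(UserNames userNames) {
        this.userNames = userNames;
    }

    String greet(String userId) {
        return "Hello, " + userNames.nameOf(userId) + "!";
    }
}

interface UserNames {
    String nameOf(String userId);
}

// Steps 3 and 4 would replace this with a repository and then a real database,
// each step still covered by the same outer test.
class InMemoryUserNames implements UserNames {
    public String nameOf(String userId) {
        return "Ada";
    }
}
```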

PS: For some time now I have been giving a talk that I think fits quite well with this whole interview, and in particular with this question. I think it is worth taking a look at it to see exactly what I was talking about: how TDD can be applied from an “external” point of view, designing an acceptance test that helps us design our solution from the outside to the inside of our system/component. Here is one of the times I have shared it: TDD: seriously, try it! 😃 (JCon) – https://youtu.be/MTMBpHWkqMg

Dreamix: Do you have any tips and tricks to follow if the team decides to use Testcontainers?

Nacho Cougil: I am a big fan of Testcontainers! So if the team decides to use it, it’s a no-brainer for me: go for it! It’s a really fantastic tool, and years ago we couldn’t have imagined everything it would bring us. Being able to spin up an ephemeral container against which you can run integration tests and verify that “your pieces” are well connected is, I would say, almost a luxury 😅 (oh, those days of EAR deployments!). In this sense, just one small warning: limit its use to the tests that are really necessary, and use only the services that are necessary. That is, try to avoid the typical chain of services that need other services for the test to work (e.g. service X -> service Y -> DB, etc.) and move towards tests without so many dependencies and with clear boundaries limiting their scope, as with contract testing.
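For reference, a minimal sketch of that “single service, clear boundary” idea, assuming JUnit 5 and the Testcontainers postgresql module (the test class itself is hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class PersistenceIT {

    // One ephemeral database for this test class; no service X -> service Y chain.
    @Container
    private static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:16-alpine");

    @Test
    void connectsToTheEphemeralDatabase() throws Exception {
        try (Connection connection = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
            assertTrue(connection.isValid(2)); // "the pieces" are well connected
        }
    }
}
```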

Dreamix: You mentioned that you also have experience in analyzing applications. So let me ask one practical question that many of us have faced before: if I find my application running out of memory (OOM), how should I go about finding the leak? What technologies and techniques do you recommend?

Nacho Cougil: This one is easy! I work at Dynatrace, a company dedicated to providing a global monitoring service (infrastructure, APIs, DBs, network, hosts, Kubernetes, vulnerabilities/security). As such, we use our own platform to analyse our developments, and through it we can get down to the level of the line of code that caused a problem or error; we can even profile our APIs and services and see which areas of our code need improving. Not only that, but internally we run specific performance tests, on a regular and programmatic basis, in order to improve usage, optimise resource consumption and minimise costs at the same time.

As for other tools, my recommendation is to use standard open-source (or paid) tools: nowadays most IDEs either come with embedded profilers or integrate very easily with one, so it is very easy to use any of the best-known ones: Async Profiler, Java Flight Recorder or VisualVM.
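To make the hunt concrete (an editorial sketch, not from the interview): the classic shape of a leak those profilers surface is an unbounded, statically reachable collection. In a heap dump (for example, one taken with jcmd <pid> GC.heap_dump dump.hprof) it shows up as a single dominator retaining most of the heap:

```java
import java.util.ArrayList;
import java.util.List;

class SessionAuditLog {

    // Static root -> never garbage collected; grows on every request.
    private static final List<String> EVENTS = new ArrayList<>();

    static void record(String event) {
        EVENTS.add(event); // nothing ever evicts old entries: the leak
    }
}
```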

Finally, as far as techniques are concerned, I recommend first establishing a test plan to agree on what you want to achieve and to identify which results will be acceptable and which will not. Then it is advisable to build the setup needed to trigger those conditions under the chosen scenarios, and finally to analyse the results achieved.

Dreamix: If you were tasked with creating a load test to consistently reproduce slow behavior of a system, how would you do it? What if the application was running with high memory usage?

Nacho Cougil: First of all, I would probably use a tool like Dynatrace (or any other APM-type tool) that would allow me to study the exact behaviour of each API, agent, host, etc., and thus find out whether the problem lies, for example, in the number of requests made between clients and servers, in a saturated network, in the number of requests that end up reaching the database, or in an excessive volume of objects stored in memory compared to the available/reserved heap of each JVM.

From there, I would write whatever script or code is needed to trigger such an operation. Any open-source load-testing tool could be used for this purpose (e.g. Gatling, K6, Locust, JMeter, etc.).
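As one hedged example of such a script (an editorial sketch; the endpoint, load figures and class name are placeholders), here is what it could look like with Gatling’s Java DSL, ramping the load up and then holding it long enough for heap pressure and GC pauses to become visible in the APM or profiler:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class SlowEndpointSimulation extends Simulation {

    HttpProtocolBuilder httpProtocol = http.baseUrl("http://localhost:8080");

    // Hammer the endpoint suspected of holding too many objects in memory.
    ScenarioBuilder scn = scenario("Reproduce slow behaviour")
            .exec(http("search").get("/api/search?q=everything"));

    public SlowEndpointSimulation() {
        setUp(scn.injectOpen(
                rampUsersPerSec(1).to(50).during(120), // warm-up ramp (seconds)
                constantUsersPerSec(50).during(600)    // sustained load (seconds)
        )).protocols(httpProtocol);
    }
}
```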

I should add that, once the parts with excessive memory consumption have been identified, I would recommend analysing and studying those components in more detail so that the less optimised parts can be improved.

Dreamix: What is your vision on Infrastructure as Code? Do you think Software or DevOps Engineers should maintain it? Also, if you have experience with IaC, can you share some tips, tricks or tools that can help in that direction?

Nacho Cougil: My view of the whole DevOps movement is that every developer should be DevOps. Every engineer should understand and know what is underneath what is being developed: how X or Y is deployed, how the code they have developed ends up running, what infrastructure it is built on and supported by, the architecture it sits on, and so on. Is it complicated? Yes. Is it necessary? I think so, because it is not acceptable for a team to be completely unconcerned about what happens “underneath” the component we own, until the problem becomes someone else’s. Maybe it is not necessary for the whole team to know how to touch the Terraform scripts that support the whole infrastructure/architecture of a system or platform, but they should know that they exist, how they are used and what they are for, because maybe in the future they will have to consider changes in this area, right?

Yes, I have had experience with tools like Terraform for Infrastructure as Code, and I have to say that the part I still think needs improving is testing. Apart from the fact that many teams still have no culture of testing the code they develop, in this area there is still a great need to introduce tests. I always ask myself: aren’t we interested in checking that our infrastructure is well built? Is it so expensive to check that what is built is done as it should be? What’s more, are we sure we are not leaving something out, or reserving and building resources that nobody is using or needs? Shouldn’t we be able to trace this like any other functionality? Open-source tools for testing deployments/changes to our infrastructure exist, so why not use them?

Dreamix: Do you believe cloud is the future of IT infrastructure? What is your view on the matter?

Nacho Cougil: Yes, I think so, and that is how I see it today. Even so, I still see large customers whose inertia is still too great with respect to their legacy systems, which on the one hand keep their business moving forward, but on the other prevent them from evolving at the faster, more agile pace a cloud infrastructure allows. Now, while the cloud certainly brings a number of advantages, it also brings challenges at other levels (e.g. country restrictions, laws/intellectual property rights, privacy/data protection, trade secrets, etc.) of which we are not always aware at first, and whose management and treatment can also be a major challenge. Moreover, other factors come into play, such as cost, or the lock-in that ends up tying a company to a single supplier… and this is not a negligible factor either, especially considering the position some companies want to have with respect to their business and market in particular. As we say in our country: all that glitters is not gold 😅.

Dreamix: How do you update yourself about the latest trends in Java?

Nacho Cougil: Good question! Well, among other things, I follow several Java Champions who usually share news on social networks (Twitter/X, Mastodon, etc.), and I take a look at a couple of websites like Foojay ( https://foojay.io ) or DZone ( https://dzone.com/java ). From there, I always have a multitude of articles to read that I use to review the latest approach to a particular framework/library/technique, or to see how someone else has tackled this or that challenge 😉.

Dreamix: Can you recommend a favourite book about programming? What about a favourite book in general?

Nacho Cougil: Well, I can’t say which is my favourite book on programming. Perhaps “Working Effectively with Legacy Code” by Michael Feathers: although it is by no means new, it includes a series of key ideas and concepts that I think every developer should keep in mind when dealing with “difficult” or legacy code. I think it is fundamental for any engineer to master practices that “unlock” code that is a priori complex or difficult to change (or to test). I also like Kent Beck’s classic, “Extreme Programming Explained: Embrace Change”, which lays out the principles and practices that I believe represent truly agile and effective development in any team.

As for books in general, perhaps one of my favourites is Le Petit Prince by Antoine de Saint-Exupéry. I think it is a book from which you can always learn something: as an adult, you can re-learn to see things as if you were a child again, with all your innocence, or learn how to cultivate relationships with others and overcome painful changes when they happen. It is a simple book with a special touch, one you can always come back to.

Is there anything else you would like to ask Nacho Cougil? What is your opinion on the questions asked? Who would you like to see featured next? Let’s give back to the Java community together!

Dreamix Team

Innovators by heart. Developers by passion. We’re Dreamix Team - a group of trailblazing techies trying to make the world a better place through technology. We provide custom software development, keep you updated on market and industry trends, and have a great time doing it.

