Build to Last!

There is a saying in my team whenever someone develops a data pipeline: did you put a bow on it? We are all juggling multiple things, and it feels good to check items off your to-do list. But there is a difference between getting something done and doing it really well. The latter ensures that whatever you developed is built to last. It also frees you from being the sole person who understands the intricacies of the product. So we showcase examples of instances where someone "put a bow on it," that is, they took the time to deploy something that was built to last.

The first thing to realize is that creating a structure, a system, is very important. Teams fail not because of individuals but because of the systems in place that allow certain things to happen. When I first joined my current team, we were a group of three analysts, and we did everything from data sourcing and normalization to transformations, visualizations, and automation. We were recognized for our good work and speed of delivery. So why change something that has yielded positive results for our team? Unfortunately, as the team's scope grew and we added more members, things started getting out of control. As a team lead, I found it extremely hard to understand why things were done in a certain manner. We were spending more time constantly putting out fires for jobs that were poorly designed. This was the result of having no consistency in the way things were being developed.

We met as a leadership team and settled on a three-pronged strategy to build a high-performing team that produces artifacts that are built to last.

  • Defining best practices

How do you know something is good? This can be subjective, and it is especially unclear to team members with relatively little experience. They might do things a certain way simply because they believe it is what matters most for the task at hand. To avoid such confusion, it is critical to define best practices. In our team we have established best practices for writing SQL code, developing Alteryx jobs, and building any other kind of ETL. When someone new joins the team, they can review these guidelines to get up to speed on what the expectations are. Is your process designed to work only for a happy path? How has it been designed to handle errors? Have you left the necessary comments in the code so that someone else can follow your thinking and make changes if needed? These are all considerations you should be making when building a data pipeline; a small sketch of what this looks like in practice follows below.
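To make this concrete, here is a minimal sketch of a file-based load step that handles more than the happy path. The file name (orders.csv), the column names, and the load_orders function are hypothetical examples, not our actual pipelines (most of which live in SQL and Alteryx); the point is simply that the error paths are handled, logged, and commented rather than left to chance.

```python
import csv
import logging
import sys
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("orders_pipeline")

# Columns the downstream transformations depend on (hypothetical example schema).
REQUIRED_COLUMNS = {"order_id", "order_date", "amount"}


def load_orders(path: Path) -> list[dict]:
    """Read the source extract, failing loudly instead of silently producing bad data."""
    if not path.exists():
        # Error path: a missing extract should stop the job, not yield an empty report.
        raise FileNotFoundError(f"Source extract not found: {path}")

    with path.open(newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            # Schema drift upstream is a common non-happy-path failure.
            raise ValueError(f"Extract is missing expected columns: {sorted(missing)}")

        rows, rejected = [], 0
        for line_no, row in enumerate(reader, start=2):  # start=2: the header is line 1
            try:
                row["amount"] = float(row["amount"])
                rows.append(row)
            except (ValueError, TypeError):
                # Quarantine bad records instead of crashing mid-load,
                # but keep a count so the problem stays visible.
                rejected += 1
                log.warning("Row %d rejected: non-numeric amount %r", line_no, row["amount"])

    if rejected:
        log.warning("%d of %d rows rejected", rejected, rejected + len(rows))
    return rows


if __name__ == "__main__":
    try:
        source = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("orders.csv")
        orders = load_orders(source)
        log.info("Loaded %d valid orders", len(orders))
    except Exception:
        # Fail with a clear log entry and a non-zero exit code so the scheduler notices.
        log.exception("Pipeline failed")
        sys.exit(1)
```

A reviewer reading this can tell at a glance what happens when the file is missing, when the schema drifts, or when a record is malformed, which is exactly what the best-practice questions above are probing for.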

  • Code Reviews

We are constantly rolling out new development efforts, so to confirm that the standards we defined are actually being met, it is critical to have a code review process, whether peer based or manager based. The purpose is to have an additional set of eyes review your code, check that the standards have been met, and ask questions. The last part is key: sometimes we are so tunnel visioned that we have not considered something that is obvious to someone else. Having someone who is not as close to the project review your code can reveal some of your blind spots.

  • Recognition

A way to build a strong culture of developing high-quality products is to recognize high-quality work. In our team, we ask team members to showcase examples of work where they refactored old code to make it more robust and easier to understand, and we make sure those efforts to reduce our technical debt get recognized. Also, during performance reviews, we look not only at the volume of work someone has completed but also at its quality. This means looking at the code and seeing whether the person went out of their way to design things that are built to last. We make a concerted effort to recognize this kind of great work publicly to build that culture within the team.
