  • Dask: A parallel data-processing Python library for large datasets

    While conducting data analytics, we often use Pandas to perform operations on the data and extract valuable insights. Initially, when working on data manipulation, I approached it as a data structure problem and did not make use of any built-in Pandas functions. Later, as I delved deeper into Pandas and explored its functions, I discovered that they were significantly faster than manually iterating over the DataFrame (or using the Pandas apply function, which essentially iterates over an axis and applies a function) and operating on individual rows. Curious about why these built-in functions were faster, I did some research and found that Pandas uses NumPy under the hood, which contributes to its speed. We can convert our DataFrame columns to NumPy arrays and perform mathematical operations on these vectors if we want our code to be fast. However, writing such vectorized calculations becomes significantly harder when many operations are involved, and sometimes plain Python functions are easier and quicker to implement.

    In a specific use case involving a large DataFrame, I had to iterate over it and perform operations, which significantly slowed down my code. Recognizing the need for optimization, I began exploring ways to make the iteration (or the apply function) faster. While numerous alternatives were available, one of them was notably simple and easy to use and understand: a library called Dask. Dask parallelizes the process by breaking the DataFrame into multiple partitions and performing operations on them concurrently.
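
    To make this concrete, here is a minimal sketch of the idea (the file name rides.csv and the fare/distance_km columns are hypothetical): the same row-wise logic runs per partition instead of over the whole pandas DataFrame at once.

    import pandas as pd
    import dask.dataframe as dd

    # Hypothetical dataset; the file and column names are made up for illustration.
    df = pd.read_csv("rides.csv")

    # Split the pandas DataFrame into partitions that Dask can process in parallel.
    ddf = dd.from_pandas(df, npartitions=8)

    # The row-wise logic that is slow with pandas .apply now runs partition by
    # partition; meta tells Dask the name and dtype of the resulting column.
    ddf["fare_per_km"] = ddf.apply(
        lambda row: row["fare"] / row["distance_km"] if row["distance_km"] else 0.0,
        axis=1,
        meta=("fare_per_km", "float64"),
    )

    # Dask is lazy: nothing runs until .compute() is called.
    result = ddf.compute()
    print(result.head())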

    read more
  • Empowering Scalability: Harnessing the Power of CQRS for High-Performance Systems

    In the world of software architecture and design patterns, the Command Query Responsibility Segregation (CQRS) pattern has gained popularity for its ability to improve system scalability, maintainability, and performance.

    In this blog, we will explore what CQRS is, why it is essential, how to implement it, and the advantages it offers to developers and organizations.

    read more
  • Elevating Your Codebase: The Imperative Role of Alerting and Monitoring

    In the world of software development, where every second counts and user expectations soar, the importance of robust alerting and monitoring within your application or service cannot be overstated. In this blog post, we’ll delve into the critical role these practices play in ensuring the reliability and performance of your applications or services. Furthermore, we’ll explore strategies to improve and standardize alerting and monitoring standards across development teams.

    The Crucial Role of Alerting and Monitoring:

    Alerting and Monitoring serve as the vigilant guardians of your application or service. These practices provide real-time insights into your application’s health, performance, and security. Without them, you’re navigating in the dark, leaving your systems vulnerable to outages and inefficiencies.

    Proactive Issue Mitigation:

    Effective alerting and monitoring systems allow you to catch issues before they escalate. Setting up alerts based on predefined thresholds enables proactive intervention. For example, if your server’s CPU usage exceeds 90%, an alert can trigger a notification, prompting immediate action. This level of proactive monitoring can significantly reduce downtime and service disruptions.
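
    As a hand-rolled sketch of that idea (the threshold and webhook URL below are hypothetical, and production setups usually rely on a monitoring stack such as Prometheus or Datadog rather than a custom script):

    import psutil
    import requests

    # Illustrative threshold and webhook endpoint.
    CPU_ALERT_THRESHOLD = 90.0
    WEBHOOK_URL = "https://hooks.example.com/alerts"  # hypothetical endpoint

    def check_cpu_and_alert() -> None:
        # Sample overall CPU utilisation over one second.
        cpu_percent = psutil.cpu_percent(interval=1)
        if cpu_percent > CPU_ALERT_THRESHOLD:
            # Notify the on-call channel so someone can act before users notice.
            requests.post(
                WEBHOOK_URL,
                json={
                    "severity": "critical",
                    "message": f"CPU usage at {cpu_percent:.1f}% "
                               f"(threshold {CPU_ALERT_THRESHOLD}%)",
                },
                timeout=5,
            )

    if __name__ == "__main__":
        check_cpu_and_alert()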

    Vital Steps to follow when configuring Alerting and Monitoring Systems:

    When setting up an alerting and monitoring mechanism in an application or service, it’s essential to follow the steps below to ensure that you choose the right metrics and monitor your system effectively.

    • Data Collection and Storage:
      • Data collection involves gathering data from various sources within your codebase or infrastructure. This data can include system performance metrics, application-specific metrics, logs, and more.
      • Use appropriate data collectors and agents to retrieve and send this data to a central repository or monitoring system.
      • Once data is collected, it needs to be stored and processed effectively. Data storage solutions like databases or time-series databases are used to store historical data.
      • Carefully select the metrics ensuring that they are relevant to your application’s performance and business goals.
    • Data Visualisation:
      • Data visualization is about presenting the collected data in a human-readable format. It helps you understand the system’s behavior, trends, and anomalies.
      • Use visualization tools like Grafana, Kibana, Splunk, Dynatrace, New Relic, and more to create charts, graphs, and reports that display metrics over time.
      • Visualisation mainly aids in identifying patterns and bottlenecks, allowing you to make data-driven decisions.
    • Alerting and Notification:
      • Alerting is a critical step that involves setting up rules and thresholds to trigger notifications when specific conditions or anomalies are detected.
      • Notification mechanisms such as email, SMS, or integrations with tools like Slack, Opsgenie, PagerDuty, or incident management platforms are used to inform relevant parties when alerts are triggered.
    • Monitoring:
      • Monitoring is the continuous observation of your system’s performance and the responsiveness of your alerting system.
      • Regularly review and refine the metrics, thresholds, and alerting rules to ensure they remain relevant and effective.
      • Use monitoring to proactively detect and respond to issues, reduce downtime and improve system reliability.

    When selecting appropriate metrics for alerting and monitoring, key considerations include metrics relevant to application goals, latency metrics, error-rate metrics, scaling metrics that monitor resource utilization (CPU, memory, network, etc.), and custom metrics that address application-specific requirements.

    Standardizing Alerting Practices:

    To improve alerting standards across your application or service, consider the following steps:

    • Define clear Objectives: Begin by establishing clear objectives for each alert. Define what constitutes a critical issue and what is merely informational. This clarity helps avoid alert fatigue.
    • Thresholds and Conditions: Always define precise thresholds and conditions for triggering alerts. Make these thresholds data-driven, relying on historical performance data or observed patterns in metrics over time to set realistic and actionable alerts.
    • Escalation Policies: Implement escalation policies to ensure that alerts are routed to the right teams or individuals based on severity levels and time sensitivity. Escalation policies help prevent alerts from getting lost in the noise.

    Importance of Automation and Self-Healing in Streamlining Monitoring practices:

    In the realm of monitoring, automation involves the deployment of tools and scripts that can carry out routine monitoring tasks, data collection, analysis, and responses to certain events or alerts. Automated processes can help improve efficiency, reduce manual errors, and ensure consistent and timely monitoring across various components of the IT environment.

    Self-Healing, on the other hand, involves creating systems or processes that can automatically detect and respond to certain issues without requiring human intervention. Self-healing mechanisms aim to identify common problems and implement predefined solutions to restore or improve system functionality.

    Key reasons highlighting the importance of Automation and Self-Healing:

    • Efficiency and Speed: Automation allows for the rapid execution of monitoring tasks, while self-healing systems can automatically resolve common issues, which minimizes downtime and increases overall system efficiency.
    • Scalability: The sheer volume of data can be overwhelming for manual monitoring. Automation allows for scalability, ensuring that monitoring practices can adapt to the size and complexity of the infrastructure.
    • Cost Savings: By automating routine monitoring tasks and enabling self-healing mechanisms, organizations can reduce the need for a large, dedicated workforce to manage and respond to alerts, resulting in improved operational efficiency, reduced downtime, and the prevention of financial losses from unaddressed issues.
    • Focus on Innovation: By automating repetitive tasks, teams can focus on more strategic and innovative projects.

    Security and Compliance:

    Integrating security and compliance checks into your alerting and monitoring processes is paramount. Regularly monitor for security breaches, unusual activities, and compliance violations; this safeguards your application’s integrity and user data. The prerequisites for setting up security and compliance are defining clear policies, continuous compliance monitoring, incident response plan integration, and regular training and drills. For example, these play a major role in scenarios like unauthorized access attempts, data exfiltration attempts, and application security breaches.

    Continual Improvement:

    The landscape of software development is ever-evolving. Continual improvement in these processes is essential for maintaining the health and security of software systems. It involves adapting to changing environments, optimizing resource usage, and aligning with evolving business objectives. By regularly enhancing incident response processes, identifying root causes, and integrating new technologies, organizations can ensure early detection of issues and enhance the overall user experience. It also plays a critical role in addressing compliance requirements, fostering efficient collaboration between teams, and enabling proactive risk management. This approach establishes a dynamic and responsive monitoring system that evolves alongside the codebase, promoting resilience, reliability, and long-term success. One concrete example is conducting post-incident reviews to learn from past issues and refine your practices.

    Conclusion:

    In the dynamic world of software development, alerting and monitoring are your silent sentinels, guarding your application or service against unforeseen threats and performance bottlenecks. By standardizing these practices you elevate the reliability and resilience of your applications, ensuring they meet the high expectations of modern users.

    Remember, the road to improvement starts with a commitment to vigilance. Invest in robust alerting and monitoring practices, set clear objectives, and adapt as your application evolves. Your users will thank you for the reliability and performance you deliver, and your development teams will operate with greater confidence in the codebase they oversee.

    In the end, it’s not just about code – it’s about the experience you create and the trust you build with your users. Alerting and Monitoring are your allies in delivering exceptional software experiences.

    read more
  • Jetpack Compose: A New Era of Android UI Design

    Introduction

    Creating user-friendly interfaces in Android app development has always been a tough task. But now, Google’s Jetpack Compose is here to change the game. It’s a new toolkit that makes designing Android UIs a whole lot easier. This explanation gives you the inside scoop and practical skills to make stunning user interfaces using code examples.

    In the past, we relied on XML-based layouts for Android apps, which could get pretty complicated. But Jetpack Compose is different. It uses a declarative approach, meaning you tell it how you want your UI to look, and it takes care of the rest. This makes UI development simpler, more efficient, and better suited for the dynamic nature of modern Android apps. So, with Jetpack Compose, you can create beautiful and responsive user interfaces that truly connect with your users.

    read more
  • Mobile App Security Testing: Planning and Initiating Testing

    In today’s digital age, we rely on mobile apps for various purposes like travel, banking, shopping, socializing, entertainment, learning, and many more. However, with the increased dependency on mobile apps, there is an equally growing concern about the security of these applications. A single breach can compromise user data, cause financial loss, and damage the brand’s reputation. To mitigate these risks, it is essential to plan and execute thorough security testing for mobile applications. Based on my experience, outlined below are the best practices to follow while planning and strategizing security testing of mobile applications.

    read more
  • Streamlining Service Calls in Salesforce Lightning Web Components (LWC)

    Introduction:

    In the dynamic world of Salesforce development, delivering responsive and data-rich user interfaces is no longer just a necessity but a competitive advantage. Salesforce, a leader in the customer relationship management (CRM) industry, has continually pushed the envelope in developing tools that facilitate efficient customer service. One such tool that has gained significant traction is Lightning Web Components (LWC). Salesforce Lightning Web Components empower developers to interact seamlessly with external services, fetching and displaying data with ease. In this comprehensive blog post, we will delve deep into the world of service calls within Salesforce LWC, exploring how it can transform your customer service operations.

    [Image: StreamliningSFDataLoad]

    ## 1. The Importance of Service Calls in Salesforce

    Service calls are the lifeline of customer service in Salesforce. In Salesforce applications they involve fetching data from, or sending data to, external sources such as APIs or databases. Whether it’s retrieving customer information, updating records, or integrating with third-party systems, efficient service calls are crucial for a seamless user experience. They can range from simple inquiries to complex problem-solving tasks, and managing them efficiently is pivotal for a smooth customer experience and streamlined business operations.

    ## 2. Challenges in Traditional Service Call Management

    Before we dive into how Salesforce LWC can transform service calls, let’s take a moment to understand the challenges in traditional service call management:

    • Disjointed Systems:

    Many organizations rely on disparate systems for handling service calls, leading to inefficiencies, data silos, and inconsistencies in customer interactions.

    • Manual Data Entry:

    Traditional systems often require manual data entry, leading to errors, delays, and reduced productivity.

    • Limited Visibility:

    Without a unified view of customer interactions and service histories, it becomes challenging to provide personalized and efficient support.

    • Lack of Automation:

    Automation is key to providing timely responses and routing service requests to the right agents or teams. Traditional systems often lack robust automation capabilities.

    ## 3. Enter Salesforce Lightning Web Components (LWC)

    Salesforce LWC offers a revolutionary approach to solving these challenges and transforming service call management. Let’s explore how LWC can revolutionize service calls in Salesforce:

    • Unified Interface for Service Calls:

    LWC enables the creation of custom components that provide a unified interface for service calls. These components can be embedded in various Salesforce pages, allowing agents to access service call information seamlessly.

    • Real-time Data Updates:

    With LWC’s reactive data binding, any changes made to service call data are immediately reflected in the user interface. This real-time synchronization ensures that agents always work with the most up-to-date information.

    • Mobile-Optimized Service Calls:

    In today’s mobile-driven world, Salesforce LWC ensures that service call information is accessible and responsive on a wide range of devices. Field agents and support staff can access and update service call details while on the go, improving productivity.

    • Integration with External Systems:

    Many service calls require integration with external systems or APIs for tasks such as location-based services, inventory management, or order tracking. LWC simplifies these integrations, making it easier to provide comprehensive support to customers.

    • Customization and Automation:

    Salesforce LWC allows for the creation of custom automation rules and workflows. For instance, you can automate the assignment of service calls based on predefined criteria, prioritize urgent issues, or even trigger follow-up actions after a service call is closed.

    ## 4. Implementing Service Calls with Salesforce LWC

    Now, let’s explore how to implement service calls effectively using Salesforce LWC:

    • Create Custom LWC Components:

    Start by designing and building custom LWC components that represent different aspects of service calls, such as customer details, service history, and issue resolution forms.

    • Utilize Lightning Data Service (LDS):

    LDS simplifies data retrieval, caching, and synchronization. It handles data access permissions and ensures that agents always work with the most accurate and up-to-date information.

    [Image: StreamliningSFDataLoad]

    • Integrate with Apex Controllers:

    For more complex business logic or extensive data manipulation, integrate your LWC components with Apex controllers. Apex controllers allow you to execute server-side operations, validate data, and implement complex business rules.

    [Image: StreamliningSFDataLoad]

    • Implement Asynchronous Operations:

    In scenarios where service calls involve time-consuming tasks, consider implementing asynchronous operations. This prevents the user interface from freezing and provides a responsive user experience.

    ## PROS

    1. LDS supports sharing rules and field-level security.
    2. Records loaded by LDS are cached and shared across all components.
    3. It minimizes XMLHttpRequests.
    4. It provides a notification process after record changes, comparable to a publish-subscribe model.
    5. It supports offline use.

    ## CONS

    1. LDS operates on a single record at a time; it does not support working with multiple records at once.
    2. LDS is only available in Lightning Experience. It is not supported in Lightning components called from a Visualforce page or Lightning Out, unless that Visualforce page is embedded in Lightning Experience or the Salesforce mobile app.

    ## 5. Conclusion: Elevating Customer Service with Salesforce LWC

    In a world where customer service can make or break a business, Salesforce LWC emerges as a game-changer. It empowers organizations to streamline service calls, enhance the customer experience, and boost productivity. By creating custom components, leveraging real-time data updates, optimizing for mobile devices, integrating external systems, and automating workflows, Salesforce LWC transforms service call management into a dynamic, efficient, and customer-centric process.

    As businesses continue to evolve, those who harness the potential of Salesforce LWC for service calls will find themselves at the forefront of providing exceptional customer service and staying ahead of the competition. The time to embark on this transformative journey is now, and Salesforce LWC is your trusted guide.

    read more
  • Streamlining Salesforce Data Upload with SFDX in Git and Jenkins Workflow

    Introduction:

    In Salesforce development, deploying components from one org to another is a common practice. However, the standard deployment process often overlooks the need for data uploads or data synchronization between orgs. This blog post presents a robust solution that seamlessly integrates data loading into your deployment pipeline using Git, Jenkins, and the Salesforce CLI (SFDX) command. By automating data upserts, you can reduce manual effort, enhance deployment efficiency, and ensure consistent data across orgs.

    Prerequisites :

    Before implementing the data upload solution, make sure you have the following prerequisites in place:

    ## 1. Understanding Salesforce Data Bulk Upsert and SFDX:

            Salesforce Data Bulk Upsert is a mechanism for inserting or updating records in Salesforce based on a unique identifier field. It is particularly useful when working with large datasets and allows you to efficiently perform operations on thousands or even millions of records. SFDX (Salesforce CLI) is a command-line interface tool that provides a unified experience for managing Salesforce applications and metadata.     
    

    ## 2. Setting Up Git and Jenkins for Salesforce Data Bulk Upsert:

        Before diving into the data bulk upsert process, ensure that you have set up Git and Jenkins for your Salesforce project. This includes creating a GitHub repository to host your project and configuring Jenkins to automate various deployment tasks.
    

    ## 3. Install Salesforce CLI on the Jenkins Server:

            Install Salesforce CLI on the machine where Jenkins is running. The installation instructions vary depending on the operating system of the server. You can find the official installation guide on the Salesforce CLI documentation page.
    

    ## 4. Preparing the Data for Bulk Upsert:

        To perform a data bulk upsert in Salesforce, you need to prepare the data in a suitable format. SFDX supports CSV (Comma-Separated Values) files for data import. Create a CSV file containing the records you want to insert or update, ensuring that it includes a unique identifier field that matches an existing field in your Salesforce object.
    

    ## 5. Configuring the SFDX Data Bulk Upsert Command:

            The SFDX CLI provides a data command that enables you to perform bulk data operations. To configure the data bulk upsert command, follow these steps:
    
            1. Open a command prompt or terminal and navigate to your Salesforce project directory.
            2. Authenticate with your Salesforce org using the SFDX CLI.
            3. Use the following command to perform the data bulk upsert:

                sfdx force:data:bulk:upsert -s <ObjectAPIName> -f <CSVFilePath> -i <ExternalIdFieldAPIName>
    
        Replace <ObjectAPIName> with the API name of the Salesforce object,
    
                <CSVFilePath> with the path to your CSV file,
    
                <ExternalIdFieldAPIName> with the API name of the unique identifier field.
    

    Integrating SFDX Data Bulk Upsert with Git and Jenkins:

    Now that you have configured the SFDX data bulk upsert command, it’s time to integrate it into your Git and Jenkins workflow:

    Let us consider the workflow shown below,

    [Image: StreamliningSFDataLoad workflow diagram]

      where the stage branch is authenticated with the Salesforce stage org,

            the master branch with Salesforce production, and

            a feature branch is created and merged into stage and then into master with every data upload / deployment task.
    

    ## 1. Update the Jenkinsfile:

       On your Master and Stage branches, navigate to your Jenkinsfile and include a build step to execute the SFDX data bulk upsert command.
    
        stage('Data Push to stage') {
           when { expression { return env.BRANCH_NAME == ('stage') }}    
            steps {
                script {
                 sh '''
                    set +x
                    echo "${replace_SFDX_server_stage_key}" > ./server.key
         '''
                    sh "sfdx force:auth:jwt:grant --clientid ${consumer_key} --username ${replace_user_name} --jwtkeyfile './server.key' --instanceurl ${replace_sfdc_host_sandbox} --setdefaultusername"            
                    def packageFilePath = 'zen_booking/zen_package_booking_stage.csv'
      
                    //GET THE FILES CHANGED AS PART OF MERGE REQUEST
    
                    def commitHash = sh(returnStdout: true, script: 'git rev-parse HEAD').trim()
                    def diffCmd = "git diff --name-only ${commitHash} ${commitHash}~1"
                    def changedFiles = sh(returnStdout: true, script: diffCmd).trim().split('\n')
    
                    echo "Changes ${changedFiles} in PR"                  
                    //PERFORM DATA LOAD ONLY IF THE FILE HAS BEEN ALTERED
                    if (changedFiles.contains(packageFilePath)) {
                        echo "*****************Data Load Started!****************"
                        def result = sh script: 'sfdx force:data:bulk:upsert -s Custom_Object__c -f ./foldername/file_name.csv -i External_Id__c --serial', returnStdout: true
                            echo "$result"
                        // Extract the Job ID and Batch ID from the CLI output (=~ avoids importing java.util.regex.Pattern)
                        def jobIdMatcher = (result =~ /-i\s+(\S+)/)
                        def batchIdMatcher = (result =~ /-b\s+(\S+)/)
                        
                        def jobId = jobIdMatcher.find() ? jobIdMatcher.group(1) : null
                        def batchId = batchIdMatcher.find() ? batchIdMatcher.group(1) : null
    
                            sleep(time: 30, unit: 'SECONDS')
    
                       // Set the Salesforce CLI command with the dynamic Job ID and Batch ID
                          def sfdxCommand = "sfdx force:data:bulk:status -i $jobId -b $batchId"
                      // Execute the command 
                      def response = sh(returnStdout: true, script: sfdxCommand)
                      echo "****************Getting the response for the Data Load*************: $response"                
                    }
                                
                }
            }
        }
    
          Note: Replace Custom_Object__c with the relevant object API name,

                foldername/file_name.csv with the relevant folder and file name where the changes are added,

                External_Id__c with the External Id field used to perform the data upsert.

                Similarly, add a stage script for Master.
    

    ## 2. Clone a feature branch and add CSV files for data upload:

        Clone a feature branch from the Master, add the folder (with the same folder name as mentioned in the step above), place the required CSV files inside it (using the same file_name.csv names), and commit the CSV files containing the data.
    

    ## 3. Monitoring the success and failure logs:

    After merging the changes from the Feature Branch to Stage/Master, the next step is to verify the data load status in the Jenkins job. Keep an eye on the printed $response message to ensure everything is running smoothly. In case any failures occur during the data load, don't worry! You can quickly navigate to "monitor bulk upload jobs" in the corresponding Salesforce org to find out the reason for the failure.
    

    Advantages:

    Using SFDX Bulk Data Upsert from a Jenkins script offers several advantages for your Salesforce data management workflows:

    ## 1. Automation and CI/CD Integration:

        By incorporating SFDX Bulk Data Upsert into a Jenkins script, you can automate data loading processes as part of your Continuous Integration/Continuous Deployment (CI/CD) pipeline. This ensures consistent and automated data updates during application development and deployment.
    

    ## 2. Efficient Data Loading:

        Bulk Data Upsert leverages Salesforce Bulk API, which is optimized for processing large volumes of data. With Jenkins, you can schedule and execute data upserts at specific times, enabling efficient data loading without manual intervention.
    

    ## 3. Reduced API Usage:

        SFDX Bulk Data Upsert consumes fewer API calls compared to traditional data loading methods. This helps you stay within Salesforce API limits and avoid unnecessary API costs.
    

    ## 4. Scalability and Performance:

        Jenkins allows you to scale data loading processes horizontally by adding more build agents or nodes. This ensures fast and efficient data upserts, even for massive datasets.
    

    ## 5. Error Handling and Reporting:

        Jenkins provides excellent error handling and reporting capabilities. You can set up notifications and alerts to monitor the data upsert process and quickly respond to any issues that may arise.
    

    ## 6. Security and Access Control:

        Jenkins offers robust security features, allowing you to control access to data loading scripts and credentials. You can implement secure authentication methods to protect sensitive data.
    

    ## 7. Consistency and Reproducibility:

        Jenkins ensures that data upserts are executed consistently every time they are triggered. This guarantees reproducibility and eliminates human errors in data loading.
    

    ## 8. Scheduling Flexibility:

        With Jenkins, you can schedule data upserts to run at specific intervals, during off-peak hours, or based on triggers like code commits or other events. This enhances flexibility and optimization of data loading processes.
    

    In summary, leveraging SFDX Bulk Data Upsert from a Jenkins script offers numerous benefits, including automation, scalability, reduced API usage, error handling, security, and seamless integration with other tools. It simplifies and streamlines your Salesforce data loading workflows while ensuring consistency and efficiency in data management.

    read more
  • Java Virtual Threads vs Platform Threads Performance Comparison under high load.

    Platform Threads vs Virtual Threads.

    Before we begin comparing the performance of virtual threads to platform threads, we need to understand the key differences between them. Java’s platform thread is just a wrapper over an OS thread. Since OS threads are managed by the underlying operating system, their scheduling and optimisations are practically inaccessible to the JVM. This is where virtual threads, introduced as part of Project Loom (JEP 425), come into action: many lightweight virtual threads are multiplexed onto a small pool of underlying OS (carrier) threads. The JVM is responsible for mapping virtual threads to OS threads, and in scenarios where the application is busy with a non-CPU-bound task such as a network or database call, the JVM can simply unmount the virtual thread and free up that OS thread, thereby gaining full control over scheduling optimisations.

    read more
  • Elevating Code Quality and Speed: Exploring BDD with TDD for Effective software development

    In the fast-paced world of software development we live in today, it’s always a challenge to deliver top-notch applications quickly and efficiently. Thorough testing is an essential part of the development process to achieve that. By combining Behaviour-Driven Development (BDD) with Test-Driven Development (TDD), we can enhance the effectiveness of the testing process and ultimately improve the overall quality of our software.

    In this blog , we will dive into the world of BDD and TDD, exploring their fundamental principles, benefits, and practical implementation techniques

    read more
  • Bug Bashing: A Fun and Effective Way to Improve Software Quality

    What is Bug Bash?:

    A bug bash is a collaborative effort aimed at uncovering a large number of bugs within a short time interval. During a bug bash, various participants beyond testers, such as developers, product managers, engineering managers, designers, marketers, solution engineers, writers, customer support representatives, and even executives like the CTO and CEO, can also join in.

    read more
  • Key Metrics to Assess the Effectiveness of Automation Testing

    Introduction:

    Automation testing plays a crucial role in ensuring the delivery of a high-quality product. While the significance of automation tests is widely acknowledged, it is important to determine how to quantify their impact, assess their value in terms of effort and resources, and measure the success of test automation. Outlined below are metrics that can be employed to evaluate the effect of automation on the overall application quality, along with some best practices to generate data for these metrics.

    read more
  • Implementing dynamic filtering on joined queries using JPA specification and criteria API

    Introduction:

    In most web applications, we come across a requirement to filter, sort, and paginate data by joining multiple tables. If you are using Spring JPA, there are multiple ways to retrieve and paginate the data on joined tables.

    1. Writing native queries.
    2. Using JPQL query.
    3. Using inbuilt repository methods.

    These options work well when the query is constructed from a fixed set of conditions/where clauses. We can’t add where clauses at runtime using these options.

    Spring JPA also provides the capability to generate dynamic SQL queries with the help of the Criteria API and Specifications. In this article, we will see how we can leverage JPA Criteria query support to build generic specifications that can retrieve rows from joins on multiple tables with sorting and pagination.

    read more
  • Performance testing with Gatling

    Measuring the performance of an application is critical to business expansion and growth. One of the ways to achieve this is through load and performance tests. Load testing ensures that your application can perform as expected in production. Just because your application passes a functional test does not mean it will perform the same under load. Load testing identifies where and when your application breaks, so you can fix the issue before shipping to production.

    What is load testing?

    Load testing is a mechanism that helps us identify performance bottlenecks in an application and take corrective measures that ensure a positive user experience. Be it a microservice, a REST service, or a website, load testing can help identify the culprit bringing down the performance of the application and give an indication of the optimal resources required to run smoothly.

    read more
  • Unraveling Patterns: Exploring the Fascinating World of Clustering Algorithms

    Clustering is a popular technique in machine learning used for grouping data points based on their similarities. It is a type of unsupervised learning method where there is no predefined output variable or label. Instead, the algorithm attempts to discover patterns and structure within the data by grouping similar data points.
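
    For instance, a short scikit-learn sketch on a synthetic dataset (purely illustrative) shows how k-means groups unlabeled points:

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # Generate a toy dataset with three natural groups (purely illustrative).
    X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

    # K-means discovers the groups without ever seeing a label.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
    labels = kmeans.fit_predict(X)

    print(labels[:10])              # cluster id assigned to the first ten points
    print(kmeans.cluster_centers_)  # coordinates of the learned centroids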

    read more
  • Introduction To Object Relational Mapping in Java

    Programmers using object-oriented programming languages often struggle with integrating the database structure with their code because relational databases use tables to represent data, while object-oriented languages like Java use connected objects.

    OOP developers also face the challenge of connecting their application to a relational database using structured query language (SQL), which can be time-consuming and require understanding of raw SQL coding. SQL query builders provide a layer of abstraction that helps to simplify the process and provide more information about the data.

    Object-Relational Mapping (ORM) is a programming technique that enables software developers to work with object-oriented programming languages, such as Java, Python, or Ruby, to interact with relational databases, such as MySQL, Oracle, or Microsoft SQL Server.

    ORM is an abstraction layer that helps to bridge the gap between the object-oriented programming paradigm and the relational database model. It allows developers to use objects to represent database entities, such as tables, rows, and columns, and to manipulate them more naturally and intuitively.

    read more
  • Efficient Data Management in React with React Query

    React Query is a lightweight library for fetching, caching, and updating asynchronous data in React applications. It is designed to work with a variety of data sources, including REST APIs and GraphQL endpoints.

    It is used for managing, caching, and synchronizing asynchronous data in React applications. It makes it easy to work with asynchronous data and API requests in React.

    read more
  • What is Swagger, and how does it work?

    In today’s software development world, the development lifecycle depends heavily on APIs. If you are a software developer, you are most likely integrating APIs into your application.

    APIs (Application Programming Interfaces) allow us to expose the data and functionality to the public for use, and then we can add that data to the applications we build.

    Swagger is one of the most popular tools for web developers to document REST APIs.

    read more
  • Must Know Concepts of System Design

    System design is the process of defining the architecture, interfaces, and data for a system that satisfies specific requirements. It demands a systematic approach and requires you to think about everything in the infrastructure, from the hardware and software all the way down to the data. These decisions need to be taken carefully, keeping in mind not only the functional requirements but also the NFRs: scalability, reliability, availability, and maintainability.

    read more
  • Mocking with Mockery in Golang

    Adding unit tests is paramount when it comes to ensuring the reliability and maintainability of the code written. They not only serve as a safety net to catch bugs early, but also safeguard future integrations. When it comes to writing unit tests, mocking becomes an indispensable tool. Mocking facilitates isolating specific components and emulating their behavior, creating controlled environments for testing without relying on external dependencies. We will be exploring how to create mocks using Mockery, a popular mocking library for Go.

    What is Mockery?

    Mockery is a tool that generates mock implementations of interfaces. Keep in mind that an interface needs to exist in order to be mocked; Mockery relies heavily on interfaces to substitute real implementations with mocks. It enhances modularity, enables test isolation, and facilitates dynamic behavior replacement during testing. We will understand more about this with the example mentioned later.

    The GitHub repository can be found at Mockery.

    Salient Features of Mockery

    • Automatic mock generation: Mockery automates the process of creating mock implementations for interfaces. You simply provide the interface you want to mock and Mockery generates the corresponding mock implementation. This can save a lot of time and effort, especially when dealing with interfaces that have numerous methods.

    • Simple command-line interface: Being a command-line tool, its usage is pretty straightforward. Just run a command with the desired options and your mocks will be generated. Not only that, we can customize various aspects too. We will understand further when we deep dive into the options/flags.

    • Support for testify/mock assertions: The generated mocks can easily be integrated with the assertions provided by the ‘testify/mock’ package, allowing us to seamlessly use ‘On’, ‘Return’, ‘AssertExpectations’ etc.

    • Mocking unexported interfaces: Mockery supports generation of mocks for unexported interfaces. This is quite useful when you have internal interfaces that you want to mock for testing purposes, but you don’t want to expose them to the external API of your package.

    Getting started with Mockery

    Before starting, we need to make sure we have Mockery installed. We can do that using the command below:

    go install github.com/vektra/mockery/v2@latest
    

    Once installed, we can use Mockery to generate mocks for the interfaces.

    Mockery Commands

    1. Generate Mocks for All interfaces

    To generate mocks for all interfaces in the current directory and its subdirectories

    mockery --all
    

    2. Generate Mocks for a Specific interface

    To generate mocks for a specific interface, we can use the name flag

    mockery --name InterfaceName
    

    3. Specify output directory

    By default, Mockery generates mocks in the ./mocks directory. We can use the output flag to specify a different output directory.

    mockery --all --output path/to/output
    

    4. Include subdirectories

    To include subdirectories when generating mocks, we can use the recursive flag.

    mockery --all --recursive
    

    5. Preserve directory structure

    Use the keeptree flag to preserve the directory structure when generating mocks. This can be useful to maintain the same package structure in mocks directory.

    mockery --all --keeptree
    

    6. Generate Mocks for a specific package

    Create mocks for interfaces defined in a specific package. This is handy in combination with the output flag.

    mockery --all --dir path/to/package
    

    7. Define Output Package Name

    To set a custom package name for the generated mocks, we can use the outputpkg flag.

    mockery --all --outputpkg customMocks
    

    Example: Using Mockery to test an interface

    Let’s consider we have an interface DataProvider which represents an external service to fetch data. In our case, we will just return a random number between 0 and the number passed.

    // data_provider.go
    package service
    
    // DataProvider is an interface for fetching and retrieving data.
    type DataProvider interface {
    	GetRandomNumber(id int) (int, error)
    }
    

    Let’s create the implementation of the interface DataProvider to return a random number.

    // data_provider_impl.go
    package service
    
    import (
    	"errors"
    	"math/rand"
    )
    
    // DataProviderImpl is the concrete implementation of the DataProvider interface.
    type DataProviderImpl struct{}
    
    // GetRandomNumber simulates fetching a random number between 0 and id.
    func (d *DataProviderImpl) GetRandomNumber(id int) (int, error) {
    	if id < 0 {
    		return 0, errors.New("Invalid id")
    	}
    	// Simulate fetching a random number between 0 and id
    	return rand.Intn(id + 1), nil
    }
    

    Now, we need to consume the DataProvider to get the random data. Let’s create that. Additionally, we will check whether the random number we fetched is even or odd.

    // data_consumer.go
    package service
    
    // ConsumeData is a function that uses a DataProvider to fetch and process data.
    func ConsumeData(provider DataProvider, id int) (string, error) {
    	// Use GetRandomNumber to get a random number between 0 and id
    	randomNumber, err := provider.GetRandomNumber(id)
    	if err != nil {
    		return "", err
    	}
    
    	// Check whether the value is even or odd
    	result := checkEvenOrOdd(randomNumber)
    
    	// Return the result
    	return result, nil
    }
    
    // checkEvenOrOdd checks whether the given value is even or odd.
    func checkEvenOrOdd(value int) string {
    	if value % 2 == 0 {
    		return "Even"
    	}
    	return "Odd"
    }
    

    Now, let’s create the mocks for the DataProvider interface using Mockery, with the command below.

    mockery  --output ./mocks --dir ./service --all
    

    On running the command, you will see the output below. This creates a DataProvider.go inside the mocks package, since we specified --output ./mocks in the mockery command.

    22 Jan 24 08:33 IST INF Starting mockery dry-run=false version=v2.36.1
    22 Jan 24 08:33 IST INF Using config:  dry-run=false version=v2.36.1
    22 Jan 24 08:33 IST INF Walking dry-run=false version=v2.36.1
    22 Jan 24 08:33 IST INF Generating mock dry-run=false interface=DataProvider qualified-name=RandomizedEvenOdd/service version=v2.36.1
    22 Jan 24 08:33 IST INF writing mock to file dry-run=false interface=DataProvider qualified-name=RandomizedEvenOdd/service version=v2.36.1
    

    The auto-generated mock DataProvider.go will look like the code below. Please make sure to re-generate the mocks whenever an interface is updated, for example when a new function is added to it.

    // Code generated by mockery v2.36.1. DO NOT EDIT.
    
    package mocks
    
    import mock "github.com/stretchr/testify/mock"
    
    // DataProvider is an autogenerated mock type for the DataProvider type
    type DataProvider struct {
    	mock.Mock
    }
    
    // GetRandomNumber provides a mock function with given fields: id
    func (_m *DataProvider) GetRandomNumber(id int) (int, error) {
    	ret := _m.Called(id)
    
    	var r0 int
    	var r1 error
    	if rf, ok := ret.Get(0).(func(int) (int, error)); ok {
    		return rf(id)
    	}
    	if rf, ok := ret.Get(0).(func(int) int); ok {
    		r0 = rf(id)
    	} else {
    		r0 = ret.Get(0).(int)
    	}
    
    	if rf, ok := ret.Get(1).(func(int) error); ok {
    		r1 = rf(id)
    	} else {
    		r1 = ret.Error(1)
    	}
    
    	return r0, r1
    }
    
    // NewDataProvider creates a new instance of DataProvider. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
    // The first argument is typically a *testing.T value.
    func NewDataProvider(t interface {
    	mock.TestingT
    	Cleanup(func())
    }) *DataProvider {
    	mock := &DataProvider{}
    	mock.Mock.Test(t)
    
    	t.Cleanup(func() { mock.AssertExpectations(t) })
    
    	return mock
    }
    

    With the mock generated, let’s use it to write test cases for ConsumeData which has an external dependency on DataProvider.

    // data_consumer_test.go
    package test
    
    import (
    	"EvenOdd/mocks"
    	"EvenOdd/service"
    	"errors"
    	"testing"
    
    	"github.com/stretchr/testify/assert"
    )
    
    // TestConsumeDataForSingleInput tests the ConsumeData function with a single input.
    func TestConsumeDataForSingleInput(t *testing.T) {
    	// Create an instance of the mock
    	mock := &mocks.DataProvider{}
    
    	// Set the expected return value for the GetRandomNumber method
    	mock.On("GetRandomNumber", 5).Return(3, nil).Once()
    
    	// Call ConsumeData using the mocked DataProvider
    	result, err := service.ConsumeData(mock, 5)
    
    	// Assert that the result and error are as expected
    	assert.Equal(t, "Odd", result)
    	assert.NoError(t, err)
    
    	// Assert that the GetRandomNumber method was called with the expected input
    	mock.AssertExpectations(t)
    }
    
    // TestConsumeDataForMultipleInputs tests the ConsumeData function with multiple values.
    func TestConsumeDataForMultipleInputs(t *testing.T) {
    	// Create an instance of the mock
    	mock := &mocks.DataProvider{}
    
    	// Set the expected return values for the GetRandomNumber method
    	mock.On("GetRandomNumber", 20).Return(10, nil).Once()
    	mock.On("GetRandomNumber", 30).Return(15, nil).Once()
    	mock.On("GetRandomNumber", 10).Return(5, nil).Once()
    
    	// Set the expected return value for an error scenario
    	mock.On("GetRandomNumber", -1).Return(0, errors.New("Invalid id")).Once()
    
    	// Added multiple inputs for testing
    	testCases := []struct {
    		input int
    		want  string
    		err   error
    	}{
    		{input: 20, want: "Even", err: nil},
    		{input: 30, want: "Odd", err: nil},
    		{input: 10, want: "Odd", err: nil},
    		{input: -1, want: "", err: errors.New("Invalid id")},
    	}
    
    	for _, tc := range testCases {
    		result, err := service.ConsumeData(mock, tc.input)
    
    		// Assert that the result and error are as expected
    		assert.Equal(t, tc.want, result)
    		assert.Equal(t, tc.err, err)
    	}
    
    	// Assert that the GetRandomNumber methods were called with the expected inputs
    	mock.AssertExpectations(t)
    }
    

    The first test case uses a single input, whereas the second test case shows how to cover multiple inputs in a similar, table-driven way; it also includes an error scenario.

    To run the tests, go inside the test package and run the command (go test):

    go test
    
    PASS
    ok      RandomizedEvenOdd/test    0.400s
    

    Voila! There you go!

    In this example, the mock is used to simulate the behavior of the DataProvider interface, allowing us to control the output and assert that the interaction with the external dependency is as expected.

    By incorporating Mockery, we can easily create maintainable and effective mocks for the interfaces, allowing us to write thorough tests for our Go code.

    The code's GitHub repository can be found at RandomizedEvenOdd.

    Hope this helps in getting started with Mockery!

    read more
  • AWS Network Firewall for Egress and Ingress filtering

    What is a Firewall?

    A firewall is a system built to protect private networks from unauthorized and unverified access through an internet connection. Firewalls can be either in the form of hardware or software - or a combination of the two.

    What is AWS Network Firewall?

    AWS Network Firewall is a stateful, managed, network firewall and intrusion detection and prevention service for your virtual private cloud (VPC) that you created in Amazon Virtual Private Cloud (Amazon VPC).

    With Network Firewall, you can filter traffic at the perimeter of your VPC. This includes filtering traffic going to and coming from an internet gateway, NAT gateway, or over VPN or AWS Direct Connect. Network Firewall uses the open source intrusion prevention system (IPS), Suricata, for stateful inspection, and supports Suricata-compatible rules.

    AWS Network Firewall provides network traffic filtering protection for your Amazon Virtual Private Cloud VPCs. This tutorial provides steps for getting started with Network Firewall using the AWS Management Console.

    Additionally, AWS Network Firewall provides extra features including deep packet inspection, application protocol detection, domain name filtering, and an intrusion prevention system. WAF, in contrast, can’t handle these features because it works on a different layer of the Open Systems Interconnection (OSI) model.

    read more
  • DnD with React DnD for better UX

    Drag and Drop (DnD) is a user interface concept in which one can select an object/element in the viewport and move it (i.e. drag) to a desired location on the viewport (i.e. drop).

    Drag and Drop (DnD) makes it easy to copy, move, reorder and delete items with the help of mouse clicks. It is much more intuitive from the point of user experience than other user actions which may require clicking, typing etc. Often, the UI gets simpler and cleaner on account of using drag & drop.

    Some of the different use cases include uploading files, rearranging data in a table/section as in a Trello board, and matching values in one section to values in another section.

    read more
  • Provider: State Management in Flutter

    State Management, be it in Android or iOS, is the most important part in any app development project. By managing state you make sure that data reaches where it is required.

    State management is a strategic approach to organising all user interactions such that the created system state may be changed if any changes are needed to the user interface, databases, or other parts of the system.

    Flutter has a built-in state management mechanism called setState(); however, utilising it has the big downside of re-rendering the whole user interface (UI). The main flaw with this system is that even when a portion of the user interface doesn’t need to be re-rendered, it still is, which reduces efficiency and may result in high render latency or janky behaviour. We also aren’t able to separate views from business logic with this approach, so a better option is to use Provider for state management.

    read more
  • Distributed Logging & its best practices

    A log is perhaps the simplest possible storage abstraction. It is an append-only, totally-ordered sequence of records ordered by time.

    As we all know, a distributed system is composed of several applications calling each other to complete one operation. We might need to talk to multiple services running on different machines to fulfil a single business requirement, so the log messages generated by microservices are distributed across multiple hosts.

    What’s much harder is to make sense of this ocean of logs from a logical point of view. This is where centralized logging comes to the rescue.
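
    One common building block is attaching a correlation id to every log line so a central log store can stitch together the messages a single request produced across services. Below is a minimal sketch using Python's standard logging module; the service name and field names are illustrative.

    import logging
    import uuid

    # A logging filter that stamps every record with a correlation id, so logs
    # emitted by different services for the same request can be joined centrally.
    class CorrelationIdFilter(logging.Filter):
        def __init__(self, correlation_id: str):
            super().__init__()
            self.correlation_id = correlation_id

        def filter(self, record: logging.LogRecord) -> bool:
            record.correlation_id = self.correlation_id
            return True

    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s [%(correlation_id)s] %(message)s"))

    logger = logging.getLogger("booking-service")  # illustrative service name
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.addFilter(CorrelationIdFilter(str(uuid.uuid4())))

    logger.info("reserving vehicle for request")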

    read more
  • Elasticsearch - A distributed, free and open search and analytics engine

    The Anatomy of a High-Performance Search Engine: Understanding Elasticsearch’s Key Components and Design Choices

    Elasticsearch is a powerful and versatile search engine that allows users to search, analyze, and visualize large volumes of data in real time. With its ability to handle complex queries and provide fast and accurate results, Elasticsearch is widely used by organizations and businesses to index and search through vast amounts of data.

    read more
  • 10 Javascript hacks to become a pro

    Javascript is a very popular & widely used programming language that can do a lot of amazing things!  Check out these features/tips that will help you code like a pro.

    read more
  • PostgreSQL: A simplified guide to locking concepts

    Recently, I experienced certain problems while executing multiple transactions concurrently on Postgres as part of my everyday tasks at SIXT, which prompted me to write this blog. While Postgres is amazing at running multiple transactions at the same time, there are a few cases in which it needs to block a transaction using a lock. One has to be careful about which locks a transaction should acquire, and the high-level abstractions provided by Postgres can be difficult to understand. With this blog, I will try to demystify the locking behaviours in Postgres and give advice on common issues faced.

    PostgreSQL has amazing support for executing complex, concurrent and ACID transactions. To make sure that concurrent/complex transactions run perfectly, Postgres uses several layers of locks to serialise changes to critical sections of the database.
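
    As a small illustration of an explicit row-level lock (the table, column, and connection settings are hypothetical, and psycopg2 is used just as an example client): a SELECT ... FOR UPDATE makes a concurrent transaction touching the same row wait until the first one commits.

    import psycopg2

    # Hypothetical connection string, table, and values.
    conn = psycopg2.connect("dbname=bookings user=app")
    try:
        with conn:                      # commits on success, rolls back on error
            with conn.cursor() as cur:
                # SELECT ... FOR UPDATE takes a row-level lock: a concurrent
                # transaction updating the same row blocks until we commit.
                cur.execute(
                    "SELECT balance FROM accounts WHERE id = %s FOR UPDATE", (42,))
                (balance,) = cur.fetchone()
                cur.execute(
                    "UPDATE accounts SET balance = %s WHERE id = %s",
                    (balance - 10, 42))
    finally:
        conn.close()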

    read more
  • React 18 - useTransition vs useDeferredValue

    React 18 focuses more on the user experience and performance. Quite a few features were introduced in this version, like automatic batching, concurrent features, new hooks, new Suspense features, and Strict Mode behaviours.

    For this article, let’s concentrate on two new hooks (useTransition, useDeferredValue) that were introduced and, most importantly, when to use them.

    read more
  • Hypothesis testing

    Decisions can be made using gut feeling (instinct, a guess, an idea) or be data-driven. A simple example is rolling two dice and predicting the sum: based on gut feeling you might pick any number, but based on the data you would choose 7, which has the highest probability. There may be instances where gut-feeling decisions turn out to be 100% right, but going with data-driven decisions reduces the chances of failure. And at the core of data-driven decision making is hypothesis testing.

    What is Hypothesis testing?

    “A statistical hypothesis test is a method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis. Hypothesis testing allows us to make probabilistic statements about population parameters.” -Wikipedia

    So basically, hypothesis testing is like putting the gut feeling to the test against the data we collect :).
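
    Sticking with the dice example, here is a rough sketch of a chi-square goodness-of-fit test with SciPy (the observed counts are made up): under the null hypothesis the die is fair, and we reject that hypothesis only if the data make it sufficiently unlikely.

    from scipy import stats

    # Observed face counts from 600 rolls of a die we suspect is biased
    # (the numbers are made up for illustration).
    observed = [90, 95, 100, 105, 110, 100]

    # Null hypothesis: the die is fair, so every face is expected 100 times.
    chi2, p_value = stats.chisquare(observed)

    alpha = 0.05
    if p_value < alpha:
        print(f"p={p_value:.3f}: reject the null hypothesis, the die looks biased")
    else:
        print(f"p={p_value:.3f}: not enough evidence to call the die unfair")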

    read more
  • Getting Started Hibernate Envers

    In any application, auditing means that we track and log every change in all business objects, i.e., we track every insert, update, and delete operation.

    Basically it involves tracking three things -

    1. What operation was performed?
    2. Who did it?
    3. When was it done?

    Auditing helps us in maintaining history records, which can later help us in tracking user activities.

    Hibernate Envers is a project that seeks to make auditing persistent classes simple. You only need to annotate your persistent class or a few of its attributes with @Audited if you wish to audit them. A table will be generated that contains the history of the changes made to each audited entity. Then, retrieving and analysing past data won’t take much work.

    read more
  • CSS Combinators — A type of complex CSS selectors

    Today, in this blog, we're going to learn something very basic in CSS yet very useful and a must-know topic: CSS Combinators!

    We will understand CSS Combinators by covering the different types, with an example of each, and will also learn their importance by combining them with other selectors.

    read more
  • Web Workers API: Multithreading in JavaScript

    If you have been using JavaScript, you may know that it is a single-threaded scripting language that works within HTML files. This means only one statement is executed at a time.

    However, we are also aware that JS has asynchronous behavior with which we can achieve concurrency. But asynchrony is not multithreading, as it still depends on a single thread. To some extent, even asynchronous operations block DOM rendering in the browser.

    Multithreading enables processing of multiple threads at once, rather than multiple processes. By incorporating multithreading, programs can perform multiple operations at once.

    read more
  • How to run mobile tests in parallel using Appium?

    In recent years there has been significant growth in mobile applications across different business verticals. Most organizations are aiming to convert their user base into app users, and as such there is an increase in mobile app development across platforms, with Android and iOS being the most popular choices.

    With this increasing trend of mobile application development comes the bigger challenge of testing the mobile app to cover all customer scenarios as well as business use cases. Testing a mobile app poses its own set of challenges, some of the most common ones being:

    1. Testing on various device models and OS versions to ensure the app works correctly on all supported platforms and devices
    2. Frequent release cycles, which demand that such testing is repeated very often
    read more
  • Elasticsearch Rollover Index — Automated way to get rid of old timeseries based data

    In today's world, applications and services churn out a huge amount of data continuously. If you're dealing with time-series data like logs or audit data points, you need to consider how to get rid of it once it becomes old and unwanted. The best way to achieve this is by using the Elasticsearch Rollover Index.

    What is a Rollover Index?

    When a new Elasticsearch index is created automatically for write operations as soon as the previous one becomes passive, it's called a rollover index.
    Only the newly created index is available for write operations and the older indices become read-only. All these indices have to be under one alias, so the user can read and write on the same Elasticsearch alias without worrying about handling multiple indices.

    read more
  • Introduction to Test Containers: The Beginner's Guide

    What do you think is worse: testing your service with an in-memory database that will never go live in production, or testing it with mock data that comes from a file or maybe a mocking framework? In both cases, who tests your assumptions? And if the implementation details of the data you use in production change, how will you ensure your mock data is still correct? If you think neither of these is a good choice, then there's a third option: what if you could test your service locally, with very little effort? If your answer is "YES", let me introduce you to the world of Testcontainers.

    What are TestContainers?

    As per the official documentation, "Testcontainers is an open source framework for providing throwaway, lightweight instances of databases, message brokers, web browsers, or just about anything that can run in a Docker container." It is essentially an API wrapper around Docker, which means you need to have Docker installed on your machine. It allows us to run Docker containers directly and is used in software testing to facilitate the creation and management of disposable containers for running tests. The necessary Docker containers are spun up for the duration of the tests and torn down once test execution has finished, hence the term "disposable installation".

    This is the official website of this framework : https://www.testcontainers.org

    Why do we need them?

    Before I get into Testcontainers, let me introduce the "test pyramid", a concept concerning the separation and number of tests for software. Unit testing, the base of the pyramid, does not, in my opinion, require any further discussion; so much has already been written about unit testing techniques and the tools developed for it. Let's talk about the integration testing stage instead.

    Test pyramid

    The main purpose of integration tests is to check how our code behaves while interacting with other services. A common issue when designing integration tests is the dependency on components being installed wherever the tests are meant to execute.

    Traditionally, people use in-memory databases (for example H2) to test their services. The major problem is that production systems do not run H2, so such test cases do not guarantee that our code will work well with databases such as MongoDB or Postgres. Suppose we are migrating from one version of Postgres to another and we are not sure whether our changes are backward compatible; test cases written against H2 will not catch bugs caused by the incompatibility. We can easily overcome this with Testcontainers: whenever we run a container, we simply tie its version to whatever we are running in production.

    Ultimately, our aim is to have a completely clean, up-to-date, reproducible database instance meant solely for our tests. Docker containers are one of the most popular ways to set up, in advance, the services required to execute tests on a continuous integration pipeline. Now we have a library, known as Testcontainers, which complements this style of integration testing well.

    How to use the Testcontainers library in a Spring Boot project?

    I will be discussing the basic setup of Testcontainers (with PostgreSQL) and JUnit 5. First, we need to add the dependencies required for our use case:

    testImplementation 'org.testcontainers:testcontainers'
    testImplementation 'org.testcontainers:junit-jupiter'
    testImplementation 'org.testcontainers:postgresql'
    

    Next, we need to understand two important annotations used in our tests: @Testcontainers and @Container. Together they tie the life-cycle of the containers to the life-cycle of our tests. With the help of @Testcontainers, test-related containers are started and stopped automatically: the extension locates the fields annotated with @Container and calls the individual container life-cycle methods.

    In the example below, the life-cycle of a Postgres container will be managed this way:

    @Container
    private static PostgreSQLContainer<?> database = new PostgreSQLContainer<>("postgres:12.9-alpine");
    

    Here we used the keyword "static", which behaves like before-all/after-all: the container is started once and shared across all test methods in the class. If we do not use "static", it behaves like before-each/after-each and a fresh container is started for every test method. A minimal end-to-end sketch is shown below.
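
    Putting the pieces together, here is a minimal sketch of a JUnit 5 test class (the class and test names are hypothetical, chosen only for illustration; it assumes the dependencies listed above are on the test classpath):

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.PostgreSQLContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;

    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Hypothetical test class, for illustration only.
    @Testcontainers
    class PostgresContainerTest {

        // static: the container is started once before all tests and stopped after all of them
        @Container
        private static final PostgreSQLContainer<?> database =
                new PostgreSQLContainer<>("postgres:12.9-alpine");

        @Test
        void containerIsUpAndRunning() {
            // The @Testcontainers extension has already started the container at this point
            assertTrue(database.isRunning());
        }
    }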

    What happens behind the scenes?

    Whatever we declare in @Container is essentially converted into a Docker command.

    Working of test containers

    ● First, the Testcontainers library establishes a connection to the machine's running Docker daemon (the Docker daemon, dockerd, listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes).

    ● Next, it looks for a Postgres container matching the image properties specified in the test.

    ● If the image with the required version is not already present in the local cache, it lets the daemon pull it from the official Docker Hub registry; otherwise the locally cached image is used.

    ● The daemon then starts the Postgres container and notifies Testcontainers once it is ready for use, along with some of the container's properties, such as the host name and port number.

    Note: The default port for Postgres is 5432 and it uses this port internally within the container; it is mapped to a random high-numbered port on our local machine.
    Random ports are chosen at run time to prevent port clashes.
    

    ● The application has access to the container's properties, which it can use to establish a connection. Instead of hard-coding a host and port, we simply ask the Postgres container which host and port to connect to (see the sketch below).
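
    Since the post assumes a Spring Boot project, one rough sketch (assuming Spring Boot's test support is on the classpath; the class name is hypothetical) is to hand those run-time connection details to the application via Spring's @DynamicPropertySource:

    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.context.DynamicPropertyRegistry;
    import org.springframework.test.context.DynamicPropertySource;
    import org.testcontainers.containers.PostgreSQLContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;

    // Hypothetical integration test, for illustration only.
    @SpringBootTest
    @Testcontainers
    class PostgresIntegrationTest {

        @Container
        static final PostgreSQLContainer<?> database =
                new PostgreSQLContainer<>("postgres:12.9-alpine");

        // Expose the container's run-time connection details to Spring Boot,
        // so the application connects to the mapped host/port instead of a
        // hard-coded localhost:5432.
        @DynamicPropertySource
        static void datasourceProperties(DynamicPropertyRegistry registry) {
            registry.add("spring.datasource.url", database::getJdbcUrl);
            registry.add("spring.datasource.username", database::getUsername);
            registry.add("spring.datasource.password", database::getPassword);
            // The raw mapping is also available via database.getHost()
            // and database.getMappedPort(5432).
        }
    }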

    Conclusion

    Testcontainers is a powerful tool that simplifies the process of setting up test environments, without having to worry about configuring complex infrastructure or installing dependencies. It makes our integration tests a lot more reliable. The only downside is that these tests are slower than in-memory approaches, since it takes time to pull the image and start the container. But in my opinion, the advantages outweigh the disadvantages.

    You can also visit the GitHub repository for more implementation details. Thanks For Reading!

    read more
  • A technique to achieve test automation within the sprint

    In my experience, most QA teams don't attempt to create automation scripts for features within the same sprint in which they are developed, and these two activities (building the feature and automating its tests) together can easily expand beyond the sprint life cycle.

    First of all, we need to understand why we need in-sprint automation. Deferring automation introduces risk as well as technical debt: any functional testing done within the life cycle of the sprint usually focuses on the feature story itself and not on a complete regression of other parts of the product, and the automation tasks for the current sprint's features end up in the backlog or in future sprints.

    Thanks to in-sprint automation, we can easily put an end to all of the preceding problems. The benefits are:

    ⁃ Most of the time, development processes get delayed due to the time the QA team needs for testing, since a feature needs to be tested along with others to ensure seamless integration. With the help of in-sprint automation, we can save time and ensure there are no leftovers or backlogs for the current sprint.

    ⁃ With in-sprint automation, we can ensure the business achieves its development goals in the first build, and ongoing adjustments require less time since there are no repeated processes involved.

    read more
  • OKR Learning Path

    When the company decided to go with OKRs in order to strengthen customer centricity throughout the whole organization, there was a huge lack of knowledge about the topic. We SIXT Agilists were asked to support this transition by teaching the divisions, and the people taking on the new role of OKR Champion, how to start and run OKRs.

    First, we had to train ourselves. Some of us Agilists had already attended OKR classes - now it was about internal knowledge transfer. We stepped back and learned what OKRs are created for, about the history, the values, the idea behind them, the differences between OKRs and KPIs … in short: we gathered everything that might be relevant to understand and master the topic. The result was great: we all managed to get certified as OKR Professionals. With all the gathered knowledge we were ready to teach the world.

    read more
  • Automatic dependency updates with Dependabot

    As a project grows, the number of dependencies used in it grows too. It's crucial to keep them up to date in order to have a state-of-the-art product. Android Studio has no built-in way to manage these updates for us; you have to do it manually. Usually, the process involves checking whether a new dependency version is available, checking what's new or reading the changelog, and then bumping the version and creating a PR.

    We were doing exactly the same, and it was a tedious task for us. That's when I decided to automate this process. During my research I got to know about Dependabot, a version update system. This article is about how to automate dependency updates using Dependabot and how to handle its limitations.

    read more
  • Golang's Atomic

    Golang is a language which excels at parallelism, with spinning up new goroutines being as easy as typing “go”. As you find yourself building more and more complex systems, it becomes exceedingly important to properly protect access to shared resources in order to prevent race conditions. Such resources might include configuration which can be updated on-the-fly (e.g. feature flags), internal states (e.g. circuit breaker state), and more.

    read more
  • SIXTtech @ hackaTUM 2021

    We are a proud sponsor of the 2021 edition of hackaTUM. It's the official hackathon of the Department of Informatics of the Technical University of Munich and takes place from Friday, 19th November until Sunday, 21st November. Unfortunately, because of COVID-19, it's an online-only event.

    read more
  • Public Service Announcement on Slack Webhook Security

    While experimenting with different tools for detecting hard-coded credentials, we noticed that some (like GitHub Advanced Security) point out Slack webhooks if they appear in code. At first, we mostly ignored those, since they seemed like fairly low risk, if any at all. Then, just for fun, we added the pattern to our own home-grown scanner.

    read more
  • The Bot Saga

    There comes a time in the lifespan of all large websites when bot traffic becomes an issue on some scale or another. Sometimes you get bombarded with scrapers and your servers can't handle the load. Sometimes malicious users attempt to brute-force security-related endpoints. Sometimes bots drop spam content into input fields. Regardless of the use case, eventually the problem grows enough that it needs to be addressed somehow.

    This happened to us, and here’s the long road we traveled.

    read more

subscribe via RSS