AWS uVI Workload Migration
KUGU Home is a market leader in digital building management. A core function of its platform is calculating energy and hot-water consumption for end customers.
KUGU aimed to enhance its service for an expanding customer base by reworking this existing functionality. Collaborating with the KUGU team, we redesigned the system with a focus on scalability and parallelization.
Technologies: AWS, Athena, Iceberg, Lambda, SQS, S3, ECS, DynamoDB
What we did
In this project, we overhauled the methodology used to calculate periodic consumption for each customer. Our approach was to deconstruct the process into clearly defined stages, implement each stage as an independent step, and run the calculation in parallel across all customers. We also drew a clear line between transactional and analytical workloads so that each could be optimized on its own. A sketch of this fan-out pattern follows.
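As a minimal illustration of the fan-out described above, the sketch below enqueues one calculation job per customer on SQS so that workers can pick them up in parallel. The queue URL, message shape, and customer IDs are hypothetical, not KUGU's actual implementation:

```python
import json

import boto3

# Hypothetical queue; the real URL would come from configuration.
QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/consumption-jobs"

sqs = boto3.client("sqs")


def enqueue_customer_jobs(customer_ids, billing_period):
    """Fan out one calculation job per customer so Lambda/ECS workers
    can process them in parallel instead of sequentially."""
    # SQS accepts at most 10 messages per batch request.
    for i in range(0, len(customer_ids), 10):
        batch = customer_ids[i : i + 10]
        sqs.send_message_batch(
            QueueUrl=QUEUE_URL,
            Entries=[
                {
                    "Id": str(n),  # unique within this batch request
                    "MessageBody": json.dumps(
                        {"customer_id": cid, "billing_period": billing_period}
                    ),
                }
                for n, cid in enumerate(batch)
            ],
        )


enqueue_customer_jobs(["cust-001", "cust-002", "cust-003"], "2024-01")
```

Workers consuming this queue can then run each stage of the calculation independently, which is what makes per-customer parallelism possible.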
Challenge
The existing setup processed customer data sequentially. Our task was to break the procedure down into distinct, manageable steps that could run in parallel for different customers at the same time, and to move the work into a batch process that would not impact customer-facing services.
Solution
- Data Lakehouse Architecture Implementation: Developed a data lakehouse that processes streaming data and is designed to handle up to 10 million records per day (see the Athena/Iceberg sketch after this list).
- Parallel Processing: Used Athena, Spark, Iceberg, and DynamoDB to parallelize the consumption calculations for each customer, significantly boosting throughput (the DynamoDB sketch below shows how per-customer runs can be tracked).
- DevOps Automation: Automated all deployment activities, ensuring smooth and reliable operations.
- Enhanced Testing Procedures: Introduced automated and load testing to bolster system reliability and user trust.
- Infrastructure-as-Code (IaC): Adopted an IaC approach so the infrastructure is well-documented, transparent, and easily replicable (see the CDK sketch below).
- Monitoring and Logging: Implemented comprehensive monitoring and logging to swiftly identify and address potential issues (see the metrics sketch below).
- Automation of Manual Checks: Partially automated the manual plausibility checks to increase efficiency and reduce human error (see the final sketch below).
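The lakehouse item can be made concrete with Athena's Iceberg support. The sketch below creates an Iceberg table and runs a per-customer aggregation; the database, table, and bucket names are invented for illustration:

```python
import boto3

athena = boto3.client("athena")

# Assumes a Glue database named "metering" already exists; all names
# here are hypothetical.
DDL = """
CREATE TABLE IF NOT EXISTS metering.readings (
  customer_id string,
  device_id   string,
  reading     double,
  read_at     timestamp
)
PARTITIONED BY (day(read_at))
LOCATION 's3://example-lakehouse/readings/'
TBLPROPERTIES ('table_type' = 'ICEBERG')
"""

AGGREGATE = """
SELECT customer_id,
       sum(reading) AS consumption
FROM metering.readings
WHERE read_at >= timestamp '2024-01-01 00:00:00'
  AND read_at <  timestamp '2024-02-01 00:00:00'
GROUP BY customer_id
"""

for query in (DDL, AGGREGATE):
    athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "metering"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
```

Running the analytical aggregation through Athena against S3 is one way to keep it entirely off the transactional, customer-facing database.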
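For the parallel-processing item, one common pattern is to track each customer's job in DynamoDB so that concurrent workers never process the same customer twice and a run can later be re-triggered for a subset. The table name and key schema here are assumptions:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
# Hypothetical table with partition key "run_id" and sort key "customer_id".
runs = dynamodb.Table("consumption-runs")


def claim_job(run_id, customer_id):
    """Mark a customer's job as started, exactly once.

    The condition expression makes the claim idempotent: if a parallel
    worker already picked up this customer, the put fails and the
    caller skips the job instead of computing it twice."""
    try:
        runs.put_item(
            Item={"run_id": run_id, "customer_id": customer_id, "status": "RUNNING"},
            ConditionExpression="attribute_not_exists(customer_id)",
        )
        return True
    except runs.meta.client.exceptions.ConditionalCheckFailedException:
        return False


def mark_done(run_id, customer_id):
    # "status" is a DynamoDB reserved word, hence the name placeholder.
    runs.update_item(
        Key={"run_id": run_id, "customer_id": customer_id},
        UpdateExpression="SET #s = :done",
        ExpressionAttributeNames={"#s": "status"},
        ExpressionAttributeValues={":done": "DONE"},
    )
```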
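The case study does not name the IaC tool, so the sketch below uses AWS CDK in Python as one plausible option. It declares the job queue and a worker Lambda so that every environment can be reproduced from code:

```python
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_lambda_event_sources as sources
from aws_cdk import aws_sqs as sqs
from constructs import Construct


class ConsumptionStack(Stack):
    """Queue plus worker Lambda; construct names are illustrative."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        queue = sqs.Queue(
            self, "ConsumptionJobs",
            # Must be at least the worker's timeout for SQS event sources.
            visibility_timeout=Duration.minutes(15),
        )

        worker = _lambda.Function(
            self, "ConsumptionWorker",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="worker.handler",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.minutes(5),
        )

        # Each SQS message (one customer job) triggers the worker.
        worker.add_event_source(sources.SqsEventSource(queue, batch_size=1))


app = App()
ConsumptionStack(app, "ConsumptionStack")
app.synth()
```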
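For monitoring, a lightweight approach is to publish per-job metrics to CloudWatch, which dashboards and alarms can then consume. The namespace and dimensions below are illustrative:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")


def record_job_result(customer_id: str, succeeded: bool, duration_s: float) -> None:
    """Publish per-job metrics so alarms can catch failures or
    slowdowns quickly."""
    cloudwatch.put_metric_data(
        Namespace="ConsumptionPipeline",
        MetricData=[
            {
                "MetricName": "JobSucceeded" if succeeded else "JobFailed",
                "Dimensions": [{"Name": "Customer", "Value": customer_id}],
                "Value": 1,
                "Unit": "Count",
            },
            {
                "MetricName": "JobDuration",
                "Dimensions": [{"Name": "Customer", "Value": customer_id}],
                "Value": duration_s,
                "Unit": "Seconds",
            },
        ],
    )
```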
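Finally, an automated plausibility check can be as simple as comparing a period's consumption against the previous period and flagging outliers for manual review. The tolerance below is a made-up example; real thresholds are domain-specific:

```python
def plausible(current: float, previous: float, tolerance: float = 0.5) -> bool:
    """Return False when a reading should go to manual review: either
    consumption deviates from the previous period by more than the
    tolerance (50% here), or there is no baseline and the value is
    negative."""
    if previous <= 0:
        return current >= 0  # no baseline: only reject negative values
    return abs(current - previous) / previous <= tolerance


assert plausible(102.0, 100.0)       # within tolerance
assert not plausible(300.0, 100.0)   # tripled consumption: review
assert not plausible(-5.0, 0.0)      # negative consumption: review
```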
Result
- Capacity Enhancement: Successfully freed up a significant amount of team capacity, enabling a focus on further innovations and improvements.
- Process Replicability: Established a repeatable process that can be selectively triggered for specific customer segments without the need to restart the entire operation each time.
- Scalability: Created a process that scales seamlessly with the company's growth, ensuring long-term adaptability and efficiency.
- Foundation for Future Development: Laid a robust foundation for the next iteration of the platform, paving the way for continuous advancement.
- Reduced Production Load: Effectively minimized the load on the production environment, resulting in improved service quality for customers.
Scott Williams, CPTO
Data Max demonstrated remarkable expertise in reengineering and monitoring our data pipeline, resulting in a significant improvement in performance and reliability. Their proactive approach, deep understanding of our needs, and exceptional collaboration made them a highly recommended partner for any organization seeking help with data engineering in the cloud.
By working with Data Max, we were able to blend our engineering efforts to create a seamless and high-impact piece of work. We hope to work with them more in the future.