Cure.fit takes a holistic approach towards health and fitness by bringing together all aspects of a healthy lifestyle on a single platform. Cure.fit offers both digital and offline experiences across fitness, nutrition, and mental wellbeing through its three products: cult.fit, eat.fit, and mind.fit.
Cure.fit was struggling to manage the vast amount of data it received daily. The company was investing significant resources, effort, and time in handling this data, but the throughput was not enough. Cure.fit then turned to Hevo Data for a solution, and its daily troubles were suddenly a thing of the past.
This blog will introduce you to the problem that Cure.fit was facing and how it affected their performance. Afterward, the blog will describe how Cure.fit implemented Hevo’s Data Pipeline and the benefits that they experienced. Read along to understand how Hevo Data can enhance your business!
Table of Contents
- The Problematic Scenario at Cure.fit
- Implementing Hevo’s Data Pipeline as the Solution
- Benefits of using Hevo Data
The Problematic Scenario at Cure.fit
“We would literally slog to generate the reports needed by the Business teams making us the bottleneck” – Swati
The company was collecting loads of data generated by its app, offline centers, Databases like MongoDB and MySQL, and systems like Google Analytics, CleverTap, Freshdesk, Mixpanel, and more. But all this data was inaccessible to the majority of the teams. They were battling the same bottleneck problem many companies confront: a situation where only a few people can access the data, and the rest have to wait to get their questions answered. This put the Data Platform team in the spotlight.
Swati, Cure.fit’s Lead Engineer, owned the entire lifecycle of data, from raw to analytics. Here’s what the Data team’s processes looked like:
- The team would gather data from multiple sources, write complex scripts to transform data, build data pipelines and move data to their warehouse, Redshift.
- They would then run a custom Python script on Redshift to pull data, build reports, and email them across to the business teams.
- They would also handle all the changes and exceptions that occurred at the source or destination to ensure data kept flowing into the warehouse with no data loss.
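As an illustration, the report-building step of this workflow can be sketched in a few lines of Python. The rows, column names, and verticals below are hypothetical stand-ins for what a real Redshift query (e.g. via a database driver) would return:

```python
import csv
import io

def build_report(rows, columns):
    """Render query results as a CSV body for the daily report email."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(columns)
    writer.writerows(rows)
    return buf.getvalue()

# In the real pipeline these rows would come from a Redshift query;
# they are stubbed here for illustration only.
rows = [("2021-06-01", "cult.fit", 1250), ("2021-06-01", "eat.fit", 980)]
report = build_report(rows, ["date", "vertical", "orders"])
```

Every such report, along with the query behind it, had to be written, scheduled, and maintained by hand, which is why the team became the bottleneck.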
The data teams were swamped with data requests, drowning in ad-hoc data pulls all day long.
“As a growing company, the engineering team would roll out new experiments regularly, causing changes at the source’s end. This made us stay on our toes at all points as we had to handle any exception or the data would not flow. The majority of the team’s bandwidth would be taken by data cleaning and transformation processes as certain systems like MongoDB are tricky to handle. This gave very little room for us to do more with Analytics. We threw our hands up. We needed to do something about this! ” – Swati
Simplify your Data Analysis with Hevo’s No-code Data Pipeline
Hevo Data, a No-code Data Pipeline, helps to load data from any data source such as Databases, SaaS applications, Cloud Storage, SDKs, and Streaming Services and simplifies the ETL process. It supports 100+ data sources and loads the data onto the desired Data Warehouse, enriches the data, and transforms it into an analysis-ready form without writing a single line of code.
Its completely automated pipeline delivers data in real-time without any loss from source to destination. Its fault-tolerant and scalable architecture ensures that the data is handled in a secure, consistent manner with zero data loss and supports different forms of data. The solutions provided are consistent and work with different Business Intelligence (BI) tools as well. Get Started with Hevo for Free
Check out why Hevo is the Best:
- Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss.
- Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
- Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
- Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
- Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
- Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
- Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.
Implementing Hevo’s Data Pipeline as the Solution
Cure.fit was on the hunt for a modern tool that would simplify its data integration problem. Since one of its verticals, eat.fit, stored the majority of its data in MongoDB, the team was particularly interested in a tool that would simplify the flattening of nested Mongo events. Most of the products they evaluated continued to show glitches with MongoDB, and data loss issues also cropped up.
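To see why nested Mongo events are tricky for warehouse tables, here is a minimal sketch of the kind of flattening involved. The event shape and field names are hypothetical; this is an illustration of the transformation, not Hevo’s actual implementation:

```python
def flatten(doc, parent_key="", sep="_"):
    """Recursively flatten a nested MongoDB-style document into a
    single-level dict suitable for loading into a warehouse table."""
    items = {}
    for key, value in doc.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep))
        else:
            items[new_key] = value
    return items

# A hypothetical eat.fit order event with nested fields:
event = {"order_id": 42,
         "user": {"id": 7, "city": "Bangalore"},
         "meal": {"name": "salad", "price": 250}}
flat = flatten(event)
# → {"order_id": 42, "user_id": 7, "user_city": "Bangalore",
#    "meal_name": "salad", "meal_price": 250}
```

Maintaining such logic by hand for every evolving event shape is exactly the burden the team wanted to offload.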
“When I saw Hevo, I was amazed by the smoothness with which it worked with so many different sources with zero data loss.” – Swati
When Cure.fit settled on Hevo, it took them only about two weeks to set the entire system up. Abhishek, Cure.fit’s in-house Solutions Engineer, worked on connecting the sources, writing the relevant transformations, and mapping the output to Redshift. Additionally, the team built custom Data Models on Hevo that were reflected in Redshift, ensuring that all business users access a single source of truth.
Hevo eliminated the dependency between the Data team and the business teams. Through a front-end reporting tool (Metabase), the teams now build their own reports on data made available by Hevo.
“It was great. All I had to do was a one-time setup, and the pipelines and models worked beautifully. Data was no longer the bottleneck” – Abhishek
Benefits of using Hevo Data
“We can now generate over 100 reports daily, thanks to Hevo. This is a 5X growth from before. Business teams are wowed by the speed and accuracy at which we operate!” – Swati
Today, about 500 business users track their core metrics through the data made available by Hevo. The data team can now generate numerous in-depth reports daily, drilling down into specifics. The freed-up bandwidth is being used to focus on bigger analytics projects, warehouse optimization, and more.
“Hevo notifies me over Slack and email every time something needs my attention. That is the only time I invest in maintaining this system. The support team at Hevo is amazing. With a turnaround time of less than 10 minutes, they make it easy for me to fix the problem and move on without disturbing any workflow” – Abhishek
This blog introduced Cure.fit and explained the troubles its employees faced every day in managing its data. Cure.fit generated a vast amount of data daily and did not have the bandwidth to manage it efficiently. The blog also discussed how Cure.fit implemented Hevo’s Data Pipeline and the instant benefits the company experienced.
Cure.fit is now one of India’s largest players in the Health, Fitness, and Wellness sector. Hevo is proud to be a part of this journey by helping them realize their data potential. Visit our Website to Explore Hevo
Hevo Data will automate your data transfer process, allowing you to focus on other aspects of your business like Analytics and Customer Management. This platform allows you to transfer data from 100+ sources to Cloud-based Data Warehouses like Snowflake, Google BigQuery, and Amazon Redshift. It will provide you with a hassle-free experience and make your work life much easier.
Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand.