About the Author
Ali is the creator of Product Analytics Academy, an online school that provides high-quality product analytics courses. He's also the founder of Mentat Analytics, a top-rated analytics consulting agency. He previously worked on the data teams at Bird and Buzzfeed.
The Rise of Product-Led Growth
Product-Led Growth (PLG) – a fundamentally different approach to scaling compared to traditional growth strategies – is now a winning go-to-market (GTM) strategy in the SaaS space.
Before diving into this GTM strategy engineered around product value and user experience, it’s worth revisiting the factors that led to its rise.
The traditional growth approach relied substantially on sales and marketing muscle, and the product was often viewed as an end commodity for customers. While that reliance worked – and continues to work in many cases – a new paradigm has emerged over the past decade or so, enabled by product intelligence, in which the product itself is the primary driver of growth.
Two of the most well-known examples of this are Slack and Calendly.
Both Slack and Calendly had a meteoric rise driven primarily by the product experience itself. With Calendly, the most basic usage of the product – sending a link to schedule meetings – acts as marketing for the product.
Whenever Calendly users send a meeting link, they actively show the recipients how the product works and what its benefits are. These secondary users get to experience Calendly and its benefits for free, without ever creating an account. They also receive social validation about Calendly’s value when they see others in their network using the tool.
In Slack’s case, the fundamental value proposition is “an easy way to collaborate asynchronously”, prompting users to invite others in their network into the product.
For both Slack and Calendly, viral growth isn’t just the result of typical “growth hacks” like referral systems and social media sharing, but rather of ease of use and quick time-to-value.
PLG is much faster and less resource-intensive than relying solely on sales and marketing teams, given that the teams that build the product – product managers, engineers, designers, and so on – are the same teams that power growth.
It’s worth keeping in mind that in many cases the product is not the sole driver of growth but the primary one. Slack, for example, still runs strong sales and marketing teams alongside its product to amplify its main growth engine.
Product Analytics Is the Lifeblood of PLG
A commonly shared aspect of businesses that have successfully utilized the PLG method is their free plan offerings. As they rely on users to help grow the product by recruiting those in their network, they also need to give users a way to experience the core functions of the product without committing to a financial transaction.
In Calendly’s case, the scheduling function is available on the free tier, but ‘pro’ functionalities like automation and multi-calendar scheduling sit behind a paywall. In Slack’s case, message history is limited in duration, as are features like Slack Connect. Hevo, a no-code ELT platform, offers a free plan for replicating 1 million events from 50+ free connectors.
In all these cases, the core value proposition is available indefinitely to the users, all for free. More usage leads to more users.
A strong free tier shifts the importance of the user’s validation of the product from pre-usage (typically accomplished through advertising) to post-usage. We now care less about the initial sign-up itself and more about using that sign-up to convert the user into a retained, happy customer.
If they stay longer, not only are they more likely to convert to paid users, but they are also more likely to influence others to join the service. The focus we previously placed on optimizing user acquisition funnels now needs to extend to our engagement and retention funnels. We must care not just about the clickthrough rates that brought a user to the product but also about the user experience that keeps them around.
With PLG, optimization becomes a strong focus of the product team. Identifying and fixing a single instance of poor UX can have the same impact as drastically increasing your advertising budget. Mapping your users’ journey through your product is no longer a luxury but a necessity.
Acquired users aren’t handed off to the product team; they are gained by the product team. Understanding how your product is used is the unavoidable first step of this optimization process, and there is only one way to accomplish it: implement robust product analytics.
The Need for Both Business and Product Intelligence
One of the most common points of confusion for those who are new to this space is the difference between product intelligence and business intelligence, and why there’s a need for both. This is a very valid source of confusion; after all, if the product is the core of the business, why would it not simply fall under the same category?
The answer is simple: While there is significant overlap between product and business, the needs in each category are different enough, and the importance of product understanding is high enough that it’s best to provide dedicated resources and tools for better product intelligence.
Let’s expand on why the needs differ. To begin, the specific data points used in each domain can differ fundamentally in structure. Product data mostly describes actions taken by the user and how the product responds to them.

These actions only make sense when examined in context with one another – for example, a user may be more likely to complete a purchase if they’ve redeemed a discount code earlier in the flow. Because of this, product data needs to be timestamped. A timestamped data point indicating an action or occurrence is typically referred to as an “event”.
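As a minimal sketch (the event and field names here are invented for illustration, not any specific tool’s schema), events can be modeled as timestamped records, and the ordering lets us ask whether a discount redemption preceded a purchase:

```python
from datetime import datetime, timezone

# Hypothetical event records: each captures an action, who took it, and when
events = [
    {"event": "redeem_discount", "user_id": "u1",
     "timestamp": datetime(2024, 5, 1, 10, 0, tzinfo=timezone.utc)},
    {"event": "complete_purchase", "user_id": "u1",
     "timestamp": datetime(2024, 5, 1, 10, 5, tzinfo=timezone.utc)},
    {"event": "complete_purchase", "user_id": "u2",
     "timestamp": datetime(2024, 5, 1, 11, 0, tzinfo=timezone.utc)},
]

def redeemed_before_purchase(events, user_id):
    """True if the user redeemed a discount before their first purchase."""
    user_events = sorted(
        (e for e in events if e["user_id"] == user_id),
        key=lambda e: e["timestamp"],
    )
    seen_discount = False
    for e in user_events:
        if e["event"] == "redeem_discount":
            seen_discount = True
        elif e["event"] == "complete_purchase":
            return seen_discount
    return False
```

Without the timestamps, the two purchases above would be indistinguishable; with them, we can see that only the first was preceded by a discount redemption.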
Another type of context needed for product data is user information: we want to know more about the person who took a specific action in order to understand that action better.

For example, do people in one market purchase a specific offering more than those in another? Maybe our offerings need to be re-ordered based on the user’s market. So we need to supplement event data with user data, which in many cases is pulled from other sources such as CRMs, customer engagement platforms, and more. We’ll talk about how to provide this supplementary user data in the upcoming sections.
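To make the supplementing step concrete, here is a minimal sketch (with invented field names) of joining events against user records replicated from a CRM-like source, so each action carries the user’s market:

```python
# Hypothetical user profiles, e.g. replicated from a CRM
users = {
    "u1": {"market": "US", "plan": "free"},
    "u2": {"market": "DE", "plan": "pro"},
}

# Hypothetical product events
events = [
    {"event": "complete_purchase", "user_id": "u1", "offering": "basic"},
    {"event": "complete_purchase", "user_id": "u2", "offering": "premium"},
]

# Supplement each event with the user's attributes so questions like
# "which market buys which offering?" become a simple group-by
enriched = [{**e, **users.get(e["user_id"], {})} for e in events]
```

In a real stack this join would typically happen in the warehouse or inside the product analytics tool rather than in application code, but the shape of the operation is the same.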
Another major difference between product intelligence and business intelligence is in their respective stakeholders. The former is used much more heavily by the product team – and specifically product managers – than the latter.
Product teams often need to analyze their data without deep technical skills such as scripting or query languages. This necessitates tools that let users dig deep into their data through a user interface alone.

Mixpanel is one of the most popular tools in this category, allowing its users to build funnels, cohorts, and many other types of reports entirely via its UI, without writing any code. Product analytics tools like Mixpanel make analysis much easier and more intuitive, though product managers and other product roles still need to be data-savvy in other ways.
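Under the hood, a funnel report boils down to counting how many users reach each step in order. A rough sketch of that logic (not Mixpanel’s actual implementation; the event names are invented) might look like:

```python
def funnel_counts(events, steps):
    """Count how many users reach each step of `steps`, in order.

    `events` is a list of (user_id, event_name) pairs in time order.
    """
    # Track the index of the next step each user needs to complete
    progress = {}
    for user_id, name in events:
        i = progress.setdefault(user_id, 0)
        if i < len(steps) and name == steps[i]:
            progress[user_id] = i + 1
    # A user who reached step k counts toward steps 0..k-1
    counts = [0] * len(steps)
    for reached in progress.values():
        for step in range(reached):
            counts[step] += 1
    return counts

events = [
    ("u1", "sign_up"), ("u1", "create_link"), ("u1", "book_meeting"),
    ("u2", "sign_up"), ("u2", "create_link"),
    ("u3", "sign_up"),
]
steps = ["sign_up", "create_link", "book_meeting"]
# funnel_counts(events, steps) -> [3, 2, 1]
```

The value of a UI-driven tool is that a product manager gets this computation – plus segmentation, time windows, and charting – without writing anything like the above.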
They need a strong knowledge of product metrics, data capture methods, and how to use these tools to make good decisions. This is one reason we built out Product Analytics Academy, which provides a set of self-paced data courses for product managers covering all of these topics.
Ultimately the best approach to addressing the unique intelligence needs of a PLG motion is to have dedicated tools for product analytics that are deliberately woven into the larger data stack that powers your business intelligence. Now let’s talk about the best way to build such a data stack.
Integrating Product Analytics Tools With the Modern Data Stack
We’ve set out a lot of goals for our data stack with regard to our product intelligence needs, but how do we accomplish them all? Let’s start by recapping our needs:
- Keeping a low technical barrier for product analysis
- Capturing time-stamped product usage data (“events”)
- Supplementing event data with user data pulled from other data sources
Beyond the product analytics tool itself, many other tools generate insights from product usage data, each providing some additional information about the user. One example is Marketo, a customer engagement tool used for sending emails, push notifications, and other communications to users. Many of these interactions are triggered by specific user actions, so event data is an input to these tools.
However, these tools also capture information about how users interact with these communications, which in turn can trigger further interactions with the user. An example is whether a user saw an email containing a promo before going on to make a purchase.
A second example of a tool with useful data is a CRM like Salesforce. A client’s Salesforce profile can hold information like their company size, contract duration, amount of funding, and more. A company with a large budget and little time left in its contract is one you would want to prioritize for proactive communication, given the outsized impact of a renewal.
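As a toy sketch of that prioritization logic (the field names and scoring rule are invented, not Salesforce’s schema), you could rank accounts so that a bigger budget and a nearer renewal date both raise the priority:

```python
# Hypothetical CRM account records
accounts = [
    {"name": "Acme", "annual_budget": 500_000, "days_to_renewal": 30},
    {"name": "Globex", "annual_budget": 50_000, "days_to_renewal": 300},
    {"name": "Initech", "annual_budget": 400_000, "days_to_renewal": 45},
]

def outreach_priority(account):
    # Higher budget and a closer renewal date both raise the priority
    return account["annual_budget"] / max(account["days_to_renewal"], 1)

ranked = sorted(accounts, key=outreach_priority, reverse=True)
```

A real scoring model would weigh many more signals (product usage among them), but even a crude ranking like this is only possible once the CRM data sits next to the rest of your data.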
Marketo and Salesforce are examples of tools that provide supplementary data that we would want to pipe into our product analytics tools in order to improve our understanding of user behavior.
The Modern Data Stack approach to analytics is to use the data warehouse as the single source of truth. Data from every source – Mixpanel, Marketo, Salesforce, your application database, and so on – is replicated into the warehouse first. That data is then processed and transformed to generate actionable insights. Processed data is then piped back into business intelligence tools for reporting, or into business applications to improve operations.
Replicating data from various product analytics tools and databases requires setting up pipelines that can move data from multiple sources into the warehouse. This can be a big challenge, given the number of pipelines to create and maintain.

Hevo – a no-code ELT platform – reliably replicates data from 150+ pre-built connectors – SaaS applications, user analytics applications, databases, and more – into your preferred warehouse.
The final piece of this data stack is the tool used for transforming the data once it’s in the warehouse. As in every other category, there are several options to pick from here, but the one users seem to prefer most is dbt.

dbt lets you use SQL and Jinja templating to build new tables and schedule data-processing jobs. Additionally, dbt integrates natively with Hevo, so you can connect your data pipeline directly to your transformation tool.

This way, incoming data can immediately undergo transformation and become ready to use. Finally, add a business intelligence tool like Metabase or Looker to give data analysts easy access to all the data in your warehouse.
Putting It All Together: A Case Study
Now that we know how to use and connect all of the different components of our stack, let’s run through a fictional case study where we build our ideal data stack combining the best of product intelligence and business intelligence.
Let’s say we’re in charge of building the data stack at GroupGrub, an app that lets its users place grouped food orders so they can save on delivery and preparation costs. We want our users to invite others to their group order, so we need to make the free offering as easy to use as possible.
There’s no better way to do this than by optimizing our product through robust analytics, so the first requirement of our stack is a product analytics tool like Mixpanel. We use Marketo to run promos and to notify users about them.
It’s important for us to know how the open rates of these promo communications correspond to order completions. We also keep track of our food providers through Salesforce. In each restaurant’s Salesforce profile, we want to know how often users view their menu but don’t complete an order, since this can be a strong sign that their offerings lack appeal.
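That per-restaurant signal reduces to a simple conversion rate. Here is a sketch (with hypothetical event names) of how it could be computed from the replicated event data:

```python
from collections import defaultdict

# Hypothetical GroupGrub product events replicated into the warehouse
events = [
    {"event": "view_menu", "restaurant": "Pizza Palace"},
    {"event": "view_menu", "restaurant": "Pizza Palace"},
    {"event": "complete_order", "restaurant": "Pizza Palace"},
    {"event": "view_menu", "restaurant": "Soup Stop"},
    {"event": "view_menu", "restaurant": "Soup Stop"},
]

def menu_conversion(events):
    """Orders completed per menu view, per restaurant."""
    views = defaultdict(int)
    orders = defaultdict(int)
    for e in events:
        if e["event"] == "view_menu":
            views[e["restaurant"]] += 1
        elif e["event"] == "complete_order":
            orders[e["restaurant"]] += 1
    return {r: orders[r] / views[r] for r in views}
```

In the stack described here, this computation would live in a dbt model or a Mixpanel report rather than in application code; the point is only that the metric the partnerships team needs is a straightforward aggregate once the events are in one place.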
Our product is a pair of mobile apps (iOS and Android) built on a PostgreSQL backend on Amazon Aurora. We want to stick with AWS, so we use Redshift – Amazon’s data warehouse offering – as our warehouse. To transform the data within our warehouse, we use dbt.
With all that in mind, we can summarize our requirements as follows:
- Mobile app usage data captured by Mixpanel and easy to analyze through their reporting features
- Communications interaction data captured by Marketo and used to supplement Mixpanel user data
- Mixpanel data on conversion rates within the app shared with Salesforce to give insights on different food providers
- A data stack built in AWS, with the ability to process the data through a data transformation tool
Here’s what a stack like this will look like:
We’ve checked all the boxes with our data stack. The technical barrier to product analysis is kept low so that product managers can easily get their questions answered.
User actions within the product are given appropriate context since they are supplemented with data from our communication tools. Our partnerships team knows which food providers have the highest appeal to their customers because they receive data about app usage. And most importantly, we’re able to optimize our product – in particular its free tier – so we can take a Product-Led Growth approach to our business.
We did all of this by providing dedicated tools for each type of intelligence our business needs, yet keeping them all connected with a reliable modern data stack.