Raw data is good. However, for many people, it’s just not that helpful, and it’s definitely not scalable. Sure, for those brilliant data analysts, raw data is awesome. In fact, data analysts prefer the data to be as raw as possible!
However, when you are trying to leverage data in order to drive decisions and actions, it needs to be easier to consume. This is especially true when you are trying to scale the decisions and actions that can be derived from this data across a wide group of people.
Let’s take credit data as an easy example. The average adult’s credit history includes mounds of raw data, all of it super valuable. But without any context, how useful is it?
Raw credit data looks a little something like this:
January 2010: Opened credit card
May 2010: Car loan issued — principal $13,500
June 2011: Late payment on car loan
July 2012: Applied for mortgage
August 2012: Late payment on car loan
Feb 2013: Opened credit card
Feb 2014: Credit card limit extended
May 2014: Car loan balance paid in full
May 2014: Mortgage issued — principal $159,400
July 2014: Credit card balance paid in full
Dec 2015: 90-day delinquency on mortgage payment
Based on this raw credit data alone, would you be able to tell if this individual’s credit history is good or bad? If you were a bank, would you lend to this person? How would you make that decision?
If a creditor had to weed through all of this raw data every time they assessed a loan, well, there wouldn’t be many loans issued. Not only would it take an enormous amount of time to sift through this type of data, but it would also be near impossible to leverage for decision-making purposes.
By just looking at this data on its own, some might think, “Three late payments over five years? That’s terrible! Reject him, the bum!”
But what if I told you that the average potential borrower had six late payments over a typical five-year period? Hm… maybe this individual’s credit history isn’t looking so bad.
However, what if I then told you that 8% of the population had zero late payments over five years? Oh…interesting. So this guy’s not terrible, but not perfect, either.
Now, what if I told you that this borrower was in the bottom 25% of credit holders in terms of length of credit history? Well…jeez.
The point is, this raw data is really meaningless without the right context. No lender could ever be expected to make a good, fast decision based upon raw data alone.
Enter credit scores.
Credit scoring models take in all of this raw data and spit out a contextual number that is actually useful. They weigh certain factors (like payment history) more heavily than others (like total amount owed), crunch the numbers, and churn out a single score. And with a scoring model, it becomes pretty simple to determine that someone with a 750 credit score is a better borrower than someone with a 670 credit score.
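The mechanics are easy to sketch in code. Everything below is illustrative: the factor names and weights are invented for this example (real models like FICO are proprietary), but the shape is the same: weight each factor, sum, and map the result onto a familiar scale.

```python
# Illustrative sketch of a weighted scoring model. The factor names
# and weights are invented for this example -- real credit models
# (FICO, VantageScore) are proprietary.
FACTOR_WEIGHTS = {
    "payment_history": 0.35,  # weighted most heavily
    "amount_owed": 0.30,
    "history_length": 0.15,
    "new_credit": 0.10,
    "credit_mix": 0.10,
}

def credit_score(factors, lo=300, hi=850):
    """Map per-factor ratings (0.0 to 1.0) onto a single score in [lo, hi]."""
    weighted = sum(FACTOR_WEIGHTS[name] * rating
                   for name, rating in factors.items())
    return round(lo + weighted * (hi - lo))

# Our borrower: decent payment history, but a short credit history.
borrower = {"payment_history": 0.7, "amount_owed": 0.8,
            "history_length": 0.3, "new_credit": 0.9, "credit_mix": 0.6}
print(credit_score(borrower))  # 674
```

The single number is the whole point: a lender never sees the raw event log, just a score they can compare against a threshold.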
The best part is, it’s scalable. Any type of creditor — mortgage lenders, car companies, rental agencies, etc. — can use this same number to drive their decisions.
In short, it’s helpful, actionable, and scalable. Because it has context.
One of the best ways to add valuable context to your existing product metrics is by creating a way to score your product engagement. Like with credit scoring, product engagement scoring gives your product data the essential context necessary for every part of your organization that relies on it.
Ultimately, this contextual product data is essential for teams across your entire organization.
With context, this type of data can—and should—become a foundation for much of how you conduct your business.
In Sherlock, users create their own scoring model by weighing their important product events based on each event’s importance to overall engagement, much like a credit score. With this simple configuration, Sherlock builds a model that gives every user of the product a normalized score between 1–100.
As you can imagine, these normalized scores make your product engagement data helpful. They allow you to understand engagement like never before. Sherlock leverages these user-level scores to score and rank users and accounts alike.
All of this data can be easily consumed because it’s all contextual. More importantly, this scoring model enables product data to be much more useful and actionable across your entire organization.
Don’t settle for weeding through a bunch of raw data to derive meaning. And certainly, don’t make the rest of your organization wrestle with your raw data in order to make their functional decisions.
Whether or not you use Sherlock to help with this, you should definitely be looking to give your product data more context. To help you get started on your own, we’ve created a step-by-step blog post on how to track product engagement.
The fact is, SaaS organizations that make decisions and take actions based on actual user data simply operate at a higher level. As a product leader, you should be striving to make this as easy as possible for the team. Facilitate the transformation of your product metrics and get the entire organization to the next level.
A version of this post was originally shared on ProductCoalition.com.
SaaS customers won’t be successful with a SaaS product if they don’t use it. Sure, some might continue to pay for it (seriously, my dad’s still paying for AOL), but that’s not the same as success. And you want people to be successful with your SaaS product (retention revenue, anyone?).
Enter your customer success team — their job is to maximize engagement with the product. Account not engaging? Fix it. Account engaging? Excellent, increase engagement even more.
But CS teams can’t work blindly (or they shouldn’t if you want them to be successful). So how do you remove the blindfold? It’s elementary, my friend: Product engagement — the system upon which all other customer success metrics are built. Metrics based on product engagement give your team a glimpse into the health of the business. And they’re actionable (CS teams need to take action).
Here are the key customer success metrics you should be tracking:
active user (n.) = someone who has used the product in a given time frame (even if only a little!)
active account (n.) = an account that has had at least one user use the product in a given time frame (even if only a little!)
This one’s simplicity itself. You choose what counts as activity. Login, perhaps? Or, even better, actions around other key features (sometimes “login” isn’t the best indicator of engagement).
Then choose how often to measure (this will depend on your product). Used daily (think Facebook)? Measure DAUs. Used monthly? Measure MAUs. Most SaaS products are business apps with weekly usage patterns, so a WAU customer success metric works best.
Here’s a pro tip: SaaS businesses are account-based businesses, and your customer success team operates at the account level. If you can’t measure your key product engagement metrics at the account level, then they’re not helpful. Action item: Make sure you can determine “active” for users as well as for accounts.
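A minimal sketch of how you might compute active users and active accounts from an event stream. The event shape and the seven-day (WAU) window here are assumptions; plug in whatever activity definition and period fit your product.

```python
from datetime import datetime, timedelta

# Each event is (user_id, account_id, timestamp). "Active" here means
# "did at least one tracked thing in the window" -- substitute whatever
# activity definition fits your product.
def active_in_window(events, now, days=7):
    cutoff = now - timedelta(days=days)
    users, accounts = set(), set()
    for user_id, account_id, ts in events:
        if ts >= cutoff:
            users.add(user_id)
            accounts.add(account_id)  # an account is active if any of its users is
    return users, accounts

now = datetime(2024, 3, 15)
events = [
    ("u1", "acme", datetime(2024, 3, 14)),
    ("u2", "acme", datetime(2024, 3, 1)),    # outside the 7-day window
    ("u3", "globex", datetime(2024, 3, 12)),
]
users, accounts = active_in_window(events, now)
print(len(users), len(accounts))  # WAU = 2, active accounts = 2
```

Note that the same pass gives you both levels at once, which is exactly the user-and-account measurement the pro tip calls for.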
Generally speaking, “active” is a pretty shallow metric. To get a better understanding of how well your customers are engaging, you need to look deeper.
product engagement (n.) = a measure of how engaged your users or accounts are in a given time frame
This is the singular customer success metric for any SaaS business. You already know “active” doesn’t mean “engaged” (neither does “last login”). Product engagement is more than that. Just ask Lincoln Murphy:
Logins Don’t Matter
While you generally need to sign-in to an app to get value, that action alone is probably not the thing that delivers value to your customer.
Simply being “active” in the product doesn’t mean you’re being “successful” either.
In fact, a lot of logins and random in-app activity could be a sign that your customer can’t figure out what to do…but they sure would like to.
It’s a signal that something’s amiss…but a lot of companies might wrongly classify that customer as “active” and therefore “onboard” and “successful.”
“Active” – logins, random in-app activity, etc. – without context could be that the user wants to do something but can’t figure it out, so “active” in that case is actually a churn threat. Crazy.
Lincoln Murphy, “Active Users Are a Vanity Metric“
Crazy, indeed. You need to score actual product engagement. It’s a model that scores each of your users based on (a) the number of times they do certain things in your product; and (b) the importance of those activities.
The scoring table itself is simple: each key product event gets a weight, and each user’s raw score is the sum of their event counts multiplied by those weights.
By doing this, you give every user his/her own engagement score for your engagement period (daily, weekly, monthly — up to you). You should then normalize the scores between 1–100 so they are easier to understand and use.
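Here’s a minimal sketch of that model in Python. The event names and weights are hypothetical; the point is the mechanics: a weighted sum per user, then normalization onto a 1–100 scale.

```python
# Hypothetical event weights -- pick your own based on how strongly each
# action signals real engagement.
EVENT_WEIGHTS = {"login": 1, "create_report": 5, "invite_teammate": 10}

def raw_score(event_counts):
    """Weighted sum of a user's event counts for the period."""
    return sum(EVENT_WEIGHTS.get(event, 0) * n
               for event, n in event_counts.items())

def normalize(scores, lo=1, hi=100):
    """Rescale raw scores so every user lands between 1 and 100."""
    mn, mx = min(scores.values()), max(scores.values())
    span = (mx - mn) or 1  # avoid dividing by zero if all scores are equal
    return {user: round(lo + (s - mn) / span * (hi - lo))
            for user, s in scores.items()}

weekly_events = {
    "alice": {"login": 5, "create_report": 2, "invite_teammate": 1},
    "bob":   {"login": 3},
    "cara":  {"login": 2, "create_report": 1},
}
raw = {user: raw_score(ev) for user, ev in weekly_events.items()}
print(normalize(raw))  # {'alice': 100, 'bob': 1, 'cara': 19}
```

Min-max normalization is just one choice; percentile ranking works too. Either way, the normalized number is what makes scores comparable across users and over time.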
Any system you build (or use) for measuring engagement should help you measure engagement at the user, account, and product level. And it should track this engagement over time. If you want to get value out of it, anyway. You do want to get value, don’t you?
user adoption rate (n.) = the percentage of your key features a user has used in a given timeframe
account adoption rate (n.) = the percentage of your key features any user from an account has used in a given timeframe
Measuring Adoption is similar to measuring engagement, but Adoption determines what percentage of your key features your customers are using. While users can show strong engagement using only a small set of features, Adoption rate measures the number of unique features being used. It’s basically a measure of depth of engagement.
For example, if you have ten important features and one of your accounts has used two of them in the past month, that account would have a 30-day Adoption rate of 20%. But another account whose users used eight of those features would have a 30-day Adoption rate of 80%.
Low Adoption rates mean that users are using the product in a concentrated way, while high Adoption rates mean that users are using the product more broadly.
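A quick sketch of the calculation. The feature names are placeholders; the math mirrors the ten-feature example above.

```python
# Placeholder feature names -- substitute your product's actual key features.
KEY_FEATURES = {"boards", "reports", "integrations", "exports", "search",
                "comments", "templates", "automation", "goals", "api"}

def adoption_rate(features_used):
    """Share of key features touched in the window, as a percentage."""
    used = KEY_FEATURES & set(features_used)
    return 100 * len(used) / len(KEY_FEATURES)

# At the account level, a feature counts as used if *any* user on the
# account used it.
print(adoption_rate({"boards", "reports"}))  # 20.0 -- concentrated usage
print(adoption_rate(KEY_FEATURES - {"search", "api"}))  # 80.0 -- broad usage
```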
activation rate (n.) = how far along a user/account is on the path to becoming Activated, i.e. fully on-boarded and/or at first-value
Knowing how close (or far away) your accounts are to the point of Activation is an essential customer success metric for any team working to onboard new users.
An Activation rate is just that. It’s a measure of what percentage of your “Activation” criteria an account has met — how many steps they have taken as a percentage of the total number of steps they need to take. (This is especially important in trial accounts.)
For example, if you offer a project management application, new (trial) accounts might be Activated after they, say, create a project, invite a teammate, add a task, and complete a task.
For this product, accounts that do all of these things during their trial would be considered Activated. The Activation rate would be expressed as a percentage based on how many of these steps a new account has taken.
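Activation rate is just completed steps over total steps. A sketch, assuming a hypothetical checklist for a project management app:

```python
# Hypothetical activation checklist for a project management app.
ACTIVATION_STEPS = ["created_project", "invited_teammate",
                    "added_task", "completed_task"]

def activation_rate(steps_completed):
    """Percentage of activation steps a new account has taken."""
    done = [step for step in ACTIVATION_STEPS if step in steps_completed]
    return 100 * len(done) / len(ACTIVATION_STEPS)

trial_account = {"created_project", "invited_teammate", "added_task"}
print(activation_rate(trial_account))  # 75.0 -- three of four steps done
```

An account stuck at the same rate for days tells a CSM exactly which step to help with.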
You’ve got the what; now let’s figure out the how: how to assess and derive action from our core customer success metrics. To do that, we need to make them actionable at both a management and a tactical level (as with any other operational metric).
Management-level metrics tell you about the health of some part of your business. A management-level customer success metric tells you about the health of your paid user base (or specific segments of it).
Tactical-level metrics are action metrics. They tell your team that something needs to happen (or not happen). A tactical-level customer success metric is all about helping your customer success managers identify issues and opportunities so they can take action. Baker Street Insight: These metrics really need to be measured at both an account and user level.
Excellent! Let’s see how to use the key customer success metrics we defined above.
This one’s inherently more tactical. Why? Because it doesn’t do much to tell you about the health of your business beyond painting a picture of (a) the general scope of your user base and (b) its growth (or decline).
Knowing if you have 200 active accounts (or if the number of active accounts is different from last month) can help you plan resources at a management level. And if you don’t have a product engagement scoring solution, you could use growth in active users/accounts as a proxy for the health of your product. But you really shouldn’t (see above).
For tactics, on the other hand, knowing how many active users there are on a specific account is very helpful for a CSM managing that account. Most SaaS businesses can only make their customers successful if the product is used by multiple users on multiple teams. So when managing SaaS accounts, knowing the number (or percentage) of active users can help a CSM identify problem or thriving accounts and prioritize their work.
This is a power metric at both levels.
As a management metric, tracking the engagement level of paid accounts over time tells you the overall health of your user base. Are our paid users becoming more or less engaged with our product? This is a basic question that all Customer Success teams should be able to answer.
It’s also the perfect customer success metric for assessing the health of different segments of your customer base. For example, looking at the engagement of accounts at different pricing plans or in different industries can help you hone your ideal customer profile (product-market fit, anyone?). Interesting use case: Looking at the engagement of accounts by individual CSM is one way to assess CSM performance.
As a tactical customer success metric, looking at product engagement score at the account level can uncover issues and opportunities within individual accounts.
At the user level, this metric can help you identify power and problem users. Power users (or power users for each account), my friend, are going to be key to the health and potential expansion of an account. Problem users (those whose engagement is decreasing over time) can quickly become the canary in the coal mine for any account. Use them to gain information.
Let’s get real here: One of the main jobs of your customer success team (and one of the hardest) is getting users to adopt as many features as possible. Healthy businesses have healthy Adoption rates, and looking at an overall Adoption rate for your paid users can help determine whether or not this is happening.
As with Engagement Score, looking at Adoption rates across different segments can also be helpful. You may have accounts that are on self-serve plans vs high-touch service plans — looking at Adoption rates across these two segments can help identify if your high-touch efforts are helping. (The same is true for looking at Adoption across different pricing tiers.) Looking at Adoption rates across various segments can offer insights that can help drive higher-level customer success strategies.
As a tactical customer success metric, it’s important for your customer success team to know the Adoption rate of their accounts so they can effectively target accounts with specific and relevant support. An Adoption rate metric at the account level can help a CSM understand each account’s use case and give them the information they need to drive further Adoption.
Activation rate is a customer success metric targeted at your new accounts and users — and it’s an important one. Looking at Activation rates at a management level can help you understand the success of your on-boarding efforts.
But when you track Activation rate at the account level, it becomes a powerful tactical customer success metric. Every customer success team is responsible for on-boarding users and accounts, so having the ability to track the degree to which each account is on-boarded is essential for prioritizing and organizing their work. Having this data can help a CSM focus on those accounts that are stuck in the process and understand exactly what the sticking points are.
Quite so! Operational metrics give you insights into the health of your business, but when used correctly, they are also powerful tools that help you define specific actions. The former management-level use case dovetails nicely with the latter tactical-level use case.
This is especially true when it comes to metrics for your Customer Success team. Without customer success metrics that can inform and drive action, your team will be spending a lot of time guessing and chasing after red herrings. Not excellent.