
The Short Order Kitchen Syndrome:

Putting feedback to work (and in perspective)


Customer feedback is vital to any product organization. The most obvious source of customer feedback is the customer themselves, followed by the people who interact with them the most: Sales & Support. It often takes the form of requests for missing features, changes to existing features and the like. It ranges from “absolute must haves” to “it would be nice if”. It comes to you without prompting, it often has dollar signs and strongly worded emails behind it, and with even a handful of active customers your typical product leader will have a queue full of requests. As a product leader, you ignore this feedback at your peril: it is an essential source of input and useful for prioritization and scoping (among other things). The only thing worse than ignoring it is acting on it without tempering it with other essential sources of feedback.


The Trouble with Field Requests

Let’s take a moment to look at the issues with the type of feedback above, which I’ll call “field requests”:

  • People have a tendency to say one thing and do another. It’s not their fault; they often don’t know what they really want until they actually use it, are in a situation to choose, or are asked to reach for their wallet. Take the famous yellow Sony Walkman example. Solving this is part of the foundation of lean product development-- build an MVP and gauge actual usage rather than what people tell you. The issue is human nature itself and a characteristic lack of foresight-- customers are busy and typically don’t invest their energy into thinking about how a product they use should progress. Even if the customer does consider what they need in the future, they may not be thinking through the organizational dynamics of actually having what they requested-- it would be really nice to be able to push a “fix” button, but does the team have the authority to do so?


  • Customers will make requests within the existing, envisioned scope of your product and not about the “job” they are trying to do. Unless you know how to ask, usually requiring at least basic research skills, you’re only going to receive requests that are bounded by what they think your product should do. A good example from Tenable: no customer is likely to ask you to normalize all of their vulnerability and asset data-- they assume that is a job for any of a number of startups, SIEMs or Splunk. But if you explored their pain points, you would find that getting a clear, single picture of their vulns and assets is likely vastly more important than the feature they just asked you for. It would take an especially enlightened customer to connect all the dots for you so that the small requests automatically reveal the bigger picture (and the bigger win). Ultimately, this is the responsibility of the product owner and why synthesis of research and a variety of different types of feedback is so important. Without it, you might end up sweeping the floors when you really should be remodeling the entire room.


  • Responsiveness often leads to feature sprawl, not innovation. This is not always true, but if you are very responsive to field requests and skip the step of synthesizing them with other data and a real long-term direction for your product, you end up with a bunch of features that roughly do the same thing… and ultimately a confusing, cluttered user experience. Even worse, you set yourself up to be leapfrogged by the competitor who solves tomorrow’s acute pain points while you chase the dull aches the customer can recall. This is the heart of the “short order kitchen syndrome”-- you can do a lot of things (sandwiches, pancakes and burritos) but none of them particularly well. And no one will pay you much for it.


  • Customers often have unique needs, or edge cases. I’ve seen entire product lines crumble chasing the specific needs of a few, very large customers. It’s the death knell for early startups as well-- the product team responds expertly to the needs of a small segment of the customer base while missing the broader, more lucrative market. This is common in enterprise-focused product businesses where a few marquee customers are doted on by top execs who demand their needs are met while they implicitly ignore the capabilities that should be created to develop a true solution for the target market. Common casualties of such an approach are diagnostics, stability, and ease of integration. Edge cases should be carefully considered-- while you may not choose to pursue them (a polite “no”), they are often indicators of where you need to take your APIs so that the customer can self-service, or might even be an indicator of an adjacency you should consider in the future.


  • Field requests that originate from Sales introduce noise to the signal. They are not unbiased, as the person providing the feedback has a direct financial incentive involved (e.g., they want the prospect to purchase so they can earn a commission). While this is the bias that springs to mind most readily, there is a cadre of other biases potentially at play: confirmation bias, selection bias, recency bias, loudest-voice bias and so on. None of this renders the requests meaningless; they only require more attention to determine the genuine level of need and importance to the customer. The request for something you don’t have could be a negotiating ploy to drive down the price or extract some other concession from the vendor. Assume good intent, but as the saying goes, “In God we trust, all else we test.”


Field requests are made independent of any consideration of costs, legal requirements, platform needs, etc.-- they are limited in scope. This doesn’t give a product org an excuse to ignore them-- the field deserves a process where they can be heard and receive a response to their requests. I’ll come back to how field requests can be effectively harnessed in the final section.


Beyond Field Requests - The Rest of the Puzzle


The product team’s job is to synthesize *all* of the customer feedback, not just the qualitative requests that come from the field. What other feedback is there?


User Research

Sitting down with customers and observing them work can be one of the most effective ways of truly understanding how people are doing their jobs and where your product can improve. It is especially useful when you are aiming to disrupt or enter a new category, as it allows you to readily view what else the customer is doing and break out of the confining scope of what they are doing with your product alone. You cannot reinvent a category if you don’t clearly understand how customers are working today. The best way of grasping this is to sit down and experience it with the people who are living it.


There are plenty of ways of doing customer research but the gist is the same-- you come well armed with clear questions you’re trying to answer, hypotheses and a direction in mind. And then you shower in the cold water of real customer feedback and see what happens to all your previous ideas.


Usage Telemetry

Investing in knowing what customers are using (and how) is as straightforward as it is important. It has to be weighed against other product investments and deemed a priority. The earlier you do it, the less expensive it is. This is why we made it a priority from launch, at the expense of feature development. Without basic usage data, how will you know what features need to be improved or dropped? Where customers are focusing their time? How will you spot where customers are getting lost in the UI? The list goes on, but getting a grip on user engagement requires at least basic telemetry-- let alone optimizing a SaaS offering where online customer acquisition, conversion and retention are essential. In these cases, telemetry is a no-brainer. Nonetheless, it’s applicable in nearly every product and should be examined on both a feature basis (part of the normal routine for the feature team) and as a whole periodically (e.g., a monthly trends briefing and discussion).
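At its simplest, the “basic telemetry” described above is just an event stream you can aggregate per feature. A minimal sketch, with invented event fields and feature names purely for illustration:

```python
import time
from collections import Counter

class UsageTracker:
    """Buffers feature-usage events so engagement can be examined per feature."""

    def __init__(self):
        self.events = []

    def track(self, user_id, feature, action="viewed"):
        # One event per user interaction; fields here are illustrative.
        self.events.append({
            "user_id": user_id,
            "feature": feature,
            "action": action,
            "ts": time.time(),
        })

    def feature_engagement(self):
        """Events per feature -- the 'improve or drop?' view mentioned above."""
        return Counter(e["feature"] for e in self.events)

tracker = UsageTracker()
tracker.track("u1", "dashboards")
tracker.track("u2", "dashboards")
tracker.track("u1", "reports")
print(tracker.feature_engagement().most_common(1))  # [('dashboards', 2)]
```

In a real product the events would ship to an analytics pipeline rather than sit in memory, but the per-feature rollup is the same idea as the monthly trends review.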


Quantitative Satisfaction Scoring

There are many ways to score customer satisfaction, but regardless of the method, having a quantitative measurement of satisfaction offers insight that you cannot readily gain otherwise. Tools such as Net Promoter Score (NPS) provide customers an anonymous means of telling you how you’re doing, and in a method that is likely more thoughtful than other means of gathering feedback. For example, support satisfaction ratings have more to do with the transaction the customer just completed than their broader experience with the product. Retention rates are critically important, but they are a trailing indicator of satisfaction and can be affected by budgets, political events and many things that are completely out of the control of the product team.


NPS and similar methods of quant-based scoring can provide a semi-annual check-in to gauge progress on how you’re doing. Do you have the right blend of features, platform health (experienced as stability), investments in quality, and so on? While this method of gathering customer feedback won’t often give you specific insights into a feature area or tell you when your category itself sucks, it does tell you in a proactive fashion how you’re doing in a way that can be compared both inside and outside the company.  And that is incredibly useful.
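The arithmetic behind NPS is simple enough to state in a few lines: respondents score you 0-10, promoters (9-10) minus detractors (0-6) as a percentage of all responses gives a score from -100 to 100. The sample scores below are made up for illustration:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count in the denominator only, so they dilute the score
    without moving it in either direction. Result is on a -100..100 scale.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 3 passives, 3 detractors out of 10 -> 40% - 30% = 10
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 3, 0]))  # 10
```

The coarse scale is exactly why, as noted above, NPS tracks overall trajectory well but rarely explains which feature area is dragging it down.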




Focus Groups

There are all sorts of valid critiques of focus groups, and while they are commonplace with consumer products, they are rarely used in their canonical format with enterprise products. Instead, enterprise product companies that are on top of their game establish Customer Advisory Boards (CABs) as a form of pseudo focus group. CABs in my experience are invaluable. They are usually composed of 15-20 people from forward-thinking orgs who are not so hands-on that they are myopic, but also not so removed that they don’t understand how things work operationally. The Goldilocks spot is often the Director of something.


If you steer a CAB correctly, they can give you clear guidance on feature priorities and let you know who to go back to when it’s time to sign up customer sponsors for new initiatives (e.g., features). They can give you honest feedback on a wide variety of topics from mockups to messaging. Further, they can give you ideas about new use cases or aspects of the customer jobs that you (and the rest of the market) may be missing. Beyond product and category insights, they build a core group of customers who believe in your brand and feel at least a little invested in it. They don’t hurt NPS either.


Persona Broadcasts

I’m not sure about the name for this type of input, but it is the practice of effectively putting a real face and voice to your customer personas. Sort of like the cancer study where diagnosis accuracy improved when doctors had a photo of the patient attached to the x-ray. The premise is simple and potent-- if you can build empathy for the customer, your team will care more and generally do a better job than if the customer is unknown and anonymous.


There are a few ways to do it. When I was at Norton we did a combination of things, including archetypes for each segment with that persona’s name, behaviors, interests and so on. I personally found this a little contrived, but I’m sure it worked for some people. What I’ve found more successful is real customer interviews conducted by someone who knows how to do it… or has a strong template to guide them. A one-hour broadcast interview with the ability for interactive questions and answers from the team is the sweet spot in my experience. Record the session and it becomes an invaluable onboarding tool-- especially for new hires who are unaccustomed to the industry or product space. A substitute for this is secondary research from organizations like Gartner Group, IDC or Frost and Sullivan. To quote the great Marvin Gaye, “ain’t nothing like the real thing.”


Putting it All Together (& Revisiting Field Requests)

So the job of the product lead is to take all of this input-- alongside everything else happening inside the business and relevant external factors-- and arrive at the optimal mix of investments in product initiatives (our vehicle for getting medium to large projects done). It is hard. It is supposed to be. In a previous job I had a contentious relationship with a colleague in Finance who was exasperated that I did not have a spreadsheet like his former product leader, who had devised her system down to a clear formula as to what to do when. If you are using all the data you should, such a formula would be impossible to create… and a bit silly. As in any other function, the idea here is to gather a wide variety of data, perform thorough analysis and then use your faculties (and some well-informed friends) to make a decision. Some decisions will be easy, many will be gut-wrenching. This is especially true with a new product or platform where there’s a massive amount of input to consider.


I’ll close out with a final word on field requests. They can be used for a large number of purposes, but at the start of the process where we choose among concepts, I find them useful for supporting decisions among features that you already know to be of interest (i.e., to determine which is a higher priority). Later in the Design phase, field requests are also especially good at identifying customer sponsors for initiatives you already know you are taking on. The customer sponsors then, within the scope of an interview for the feature in question, provide well-timed and essential guidance on scope of the MVP, how appealing the design is, and so on. And since empathy is critical, these interviews are to be conducted with the entire design team present: engineering, UX and PM.  When we’re in the delivery and deployment phase, the customer sponsors once again provide essential feedback on what’s been delivered.


What does it look like in practice?


I’ll provide an over-simplified but functional example that most product leaders can relate to: role-based access control (RBAC). At the outset, you hear genuine use cases from customers (field requests) that you’re not supporting today, such as the ability for a company to restrict access to a certain group of assets, or for a team to run completely independently of the rest of the org but still have their results appear as part of the overall org results, including trends and scoring. You then engage your CAB: which is more important, the ability to restrict access by team or by assets? Or by privilege type (i.e., CRUD)? With this feedback in hand, you review requests from Support and the Field while sparing a glance at what the competition has done.
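To make the “restrict access by asset group” option concrete, here is a deliberately tiny sketch of what an asset-based RBAC check could look like; the role names, group names and asset IDs are all invented for the example:

```python
# Asset groups: each group names the assets it contains (hypothetical data).
ASSET_GROUPS = {
    "web-servers": {"asset-1", "asset-2"},
    "databases": {"asset-3"},
}

# Role grants: which asset groups a role is allowed to view.
ROLE_GRANTS = {
    "web-team-analyst": {"web-servers"},
    "org-admin": {"web-servers", "databases"},
}

def can_view(role, asset_id):
    """True if any asset group granted to the role contains the asset."""
    return any(asset_id in ASSET_GROUPS[g] for g in ROLE_GRANTS.get(role, ()))

print(can_view("web-team-analyst", "asset-1"))  # True
print(can_view("web-team-analyst", "asset-3"))  # False
```

The later phases mentioned below (nesting, per-role CRUD privileges) would layer onto this same structure, which is part of why sequencing the phases with sponsor input matters.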


Following this, the core of an initiative concept is ready, perhaps focused on asset-based RBAC at the outset. Major decisions are then fleshed out in the design phase, where customer sponsor interviews seeded by the CAB are complemented with a few customers identified via field requests, and the scope of the initial MVP along with the likely subsequent phases is plainly laid out (e.g., asset-based, then nesting, then more granular privileges per user role). Mock-ups are reviewed with the sponsors and the team develops a strong sense that they have the right design. The initiative can then be greenlighted for delivery, where the team builds out the initial phase. When the customer preview release is ready, the sponsors are the first users (e.g., harnessing feature flags) and their telemetry data is watched in earnest to see how they are actually using the new capabilities. Learnings from telemetry and verbal feedback form the basis of the acceptance criteria, alongside sales engineer, Support, and Professional Services guidance. In aggregate, you gain a clear picture of whether you have more work to do or if it’s time to ship and smile.
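The feature-flag mechanism for getting the preview into sponsors’ hands first can be as simple as an allowlist keyed by organization; the flag name and sponsor IDs here are invented for the sketch:

```python
# Customer sponsors (seeded via the CAB and field requests) get the preview
# before everyone else. IDs and flag names are hypothetical.
PREVIEW_SPONSORS = {"org-42", "org-77"}

def flags_for(org_id):
    """Resolve which feature flags are enabled for a given organization."""
    return {"asset_based_rbac": org_id in PREVIEW_SPONSORS}

print(flags_for("org-42"))  # {'asset_based_rbac': True}
print(flags_for("org-1"))   # {'asset_based_rbac': False}
```

Real flag services add gradual percentage rollouts and kill switches, but the sponsor-first gating above is the piece that pairs with watching their telemetry.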


After you deploy, you monitor feedback while readying the next phase. Field feedback and telemetry are essential during this time frame to know when to start phase 2 versus investing in handling the “tail” of bug and minor enhancement requests.


Ultimately, we demonstrate our complete respect for our customers not by acting on their direct requests, but by pursuing and thoughtfully considering the full breadth of data we can obtain, weighing the implications, and then making our choices. The information will never be perfect-- the best product teams season feedback with great instincts and creative problem-solving. It’s the difference between a wonderfully conceived meal created by a master chef (Omakase!) and a short-order kitchen that cranks out whatever you ask for with little fervor and less taste.
