This article is about a very important question today’s innovators face when they develop new software products.
Namely, that of:
“What are the appropriate technical skills, technology stack and architecture for the new product?”
This question is very hard to answer, because very often it is not clear what features a product should have. Needless to say, sometimes it is not even clear what problem the company is trying to solve for its customers.
Consider a development team that follows an iterative approach to test fundamental hypotheses about the product, the business model and the market.
Initially, during the startup phase, the team will attempt to build, test and adapt an MVP, in an iterative process. With each iteration, the team will improve the MVP and adapt it to reflect the feedback received during the tests.
Once the team verifies their fundamental hypotheses and validates the business model, the startup phase concludes and the scaleup phase starts. If the team disproves or fails to verify their hypotheses, it will attempt to pivot the strategy to correct the course, formulate new fundamental hypotheses and reiterate.
By the time the team completes the MVP and validates the business model, the product may have gone through several fundamental changes and most likely already has many users. The next step is to scale up and roll out the product to significantly more users and even enter different markets. At the same time, the team should continue operating the product for the existing users.
Following the above approach creates contradictory expectations regarding the appropriate technical skills, architecture and technology stack.
To put it another way, the technology dilemmas can be described as:
“A startup needs a technology strategy that not only allows for fast delivery of value and supports growth, but can also fulfill unknown, diverse and possibly contradicting requirements”
To put the technology dilemmas in a familiar and tangible context, consider the following two examples, both based on actual situations startups have encountered.
The first example is about an e-commerce company that operates in the B2C market.
Their product idea is great, but they have to be very fast in gaining a significant market share because competition looms. The team decides to go for a simple architectural pattern and a solution hosted on premise. The reason for this choice is that they are very familiar with these technology options and feel that this is the fastest way to build the product. Very soon the first version of the website is ready and the team continuously develops and tests their assumptions about the market.
After a couple of years, the user base in their home country has grown significantly and so has the codebase of the B2C web application. It is now time to grow even further, enter new countries and even the B2B2C market.
Sadly, the team realizes that the application cannot handle the new traffic load and regularly crashes. To make things worse, implementing requirements for other countries and for the B2B2C market turns out to be very difficult. Their web-based application, a monolithic architecture of spaghetti code, cannot be easily scaled or extended.
The team now feels that they should have chosen a different approach to building the web application, even though that approach would not have delivered the speed required at the beginning. Their suggestion now is to migrate the application to a modern microservice architecture and to move it to an external IaaS/PaaS.
However, this will take time and effort they cannot afford, because the competition is catching up.
The second example is about a startup that builds a SaaS product for the Industry 4.0 market.
Their business model assumes that several thousand customers and tens of thousands of users will work concurrently with the product.
The development team is excited and chooses a microservice architecture for the product. Work begins and the team decides to develop the product on the IaaS/PaaS of a major provider, although they are not familiar with these technologies. Progress is slow at the beginning, but they learn fast. With each iteration they incrementally improve their MVP. After several months of work, they begin to test with users and potential customers. Everything seems to go great and finally they are making good progress.
Unfortunately, after a while, the team realizes that their target customers are not willing to adopt a SaaS solution yet. There are good reasons to believe that customers will eventually accept a solution running on a third-party cloud, but for now it is too early. The majority of the potential customers have concerns, because they plan to use the product with intellectual property and are not willing to allow this information to leave their own infrastructure.
The product team decides to pivot. Instead of offering the product as a service to thousands of customers and tens of thousands of users, the decision is to offer it “on premise” to a handful of big customers and later to a few hundred mid-sized customers. Consequently, every installation must take place on location, as remote access is not always possible. Furthermore, because each installation is customer-specific, it will have a few hundred users at most.
The team is now facing a difficult technical challenge, because the requirements have fundamentally changed:
Re-architecting the application is possible, but it will take considerable effort, and the team still needs to continue developing new features for existing customers.
Modeling the technology dilemmas can help to better understand their dynamics and develop strategies to tackle the associated challenges.
The value a new product creates resembles an S-curve. Initially, as the team tests various hypotheses and builds the MVP, value creation is relatively flat. Once the right strategy is found and the business model is validated, the value creation potential assumes exponential growth. To capitalize on this potential, the scaleup phase starts. Ultimately, as pressure from competition increases and the market saturates, growth flattens out again.
If it were possible to choose the perfect technical skills, technology stack and architecture, the development team would be able to develop the MVP and validate the business model quickly during the startup phase. Furthermore, development could smoothly transition to the scaleup phase, and the team would deliver work that optimally capitalizes on the exponential growth potential.
In reality, because of the uncertainty about the requirements and the pressure to deliver, the technology choices cannot always satisfy two contradicting expectations: fast development to quickly validate the strategy, and the ability to scale up.
Constantly changing the MVP to reflect feedback from the market, combined with pressure to deliver, has most likely led to quick fixes, shortcuts and poor engineering choices; in other words, a buildup of technical debt. In addition, pivoting may have set a totally new direction for the product, one that may not even be compatible with the current technical solution.
The result is that product development cannot keep up with the demands that derive from the exponential growth potential. The technical debt constrains growth and value creation flattens. The business encounters difficulties scaling up.
To avoid a premature plateau in growth, the development team will have to perform a technology shift or pivot. What type of pivot is suitable (e.g. re-engineer, migrate or rewrite) depends on the specific situation the startup faces. However, the optimal point in time to perform the pivot is “fixed” and independent of the situation: it is the moment when the team validates the business model, i.e. when it becomes evident that value creation has the potential to grow exponentially.
If the pivot is successful, the product will capitalize on the exponential growth potential. However, most teams have difficulties taking this decision. Teams hesitate because the decision is counterintuitive: the pivot will temporarily delay growth, and it should happen at the very moment when growth has, for the first time, demonstrated an exponential trend. Despite appearances, the graph shows that the decision to pivot quickly pays off and outperforms the decision to keep the existing technology choices.
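The dynamics can be made concrete with a toy simulation. All numbers below (growth rate, delivery caps, pivot duration) are invented for illustration, not measured data: the model simply assumes that technical debt caps the monthly value a team can deliver, while a pivot pauses delivery for a few months but removes that cap.

```python
# Toy model of the pivot decision (illustrative assumptions only).
def value(months, pivot_month=None, pivot_cost_months=3):
    """Cumulative value created, month by month."""
    total, rate = 0.0, 1.0
    history = []
    for m in range(months):
        # During the pivot, the team delivers no new value.
        if pivot_month is not None and pivot_month <= m < pivot_month + pivot_cost_months:
            history.append(total)
            continue
        pivoted = pivot_month is not None and m >= pivot_month + pivot_cost_months
        cap = 200.0 if pivoted else 8.0   # technical debt caps monthly output
        rate = min(rate * 1.5, cap)       # exponential demand until the cap bites
        total += rate
        history.append(total)
    return history

keep = value(24)                   # stick with the original technology choices
pivot = value(24, pivot_month=6)   # pivot when growth turns exponential
```

Running this, the pivoting team is briefly behind (growth is delayed during the pivot) but overtakes the non-pivoting team within a few months and ends up far ahead, which is exactly the counterintuitive trade-off described above.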
What are common pitfalls when it comes to pivoting the technology, besides failing to recognize the need to pivot?
The first and most common mistake is delaying the pivot. Doing so may seem a better option than sticking to the current technology choices. Even so, this decision will eventually perform worse than pivoting at the moment the growth potential becomes exponential.
Another danger is that, if the pivot takes too long to complete, it fails to support the growth potential. As a result, the product will miss the chance to scale.
Finally, teams may be tempted to make technology choices that are optimal for scaling right from the start of development, for example by choosing a highly scalable but complex architecture or a heavyweight technology stack. They do this in the hope of avoiding a pivot later. By doing so, however, they will most likely be very slow in developing the MVP and validating the business model. The graph shows how such a choice again leads to sub-optimal growth. Demonstrating the feasibility of a scalable technical solution is vital, but implementing it too early is not.
This article described the technology dilemmas that innovators face and presented a model to explain how the dilemmas affect product development.
The next step would be to answer the question raised at the beginning of the article:
“What are the appropriate technical skills, technology stack and architecture for developing the new product?”
As already mentioned, this is a hard question to answer. Fortunately, the model presented here offers some insights that can help in developing ideas to answer it.
In a follow-up article, I will write about these ideas and try to suggest a technology strategy for innovation.
One of the challenges product teams encounter is how to decide which features should be included in their products. Identifying user needs helps teams focus on what a product should deliver to address a certain type of user. Over time, teams develop many ideas on how to address these needs. The Kano model (proposed in the 1980s by Noriaki Kano) offers a way to differentiate these features by focusing on customer satisfaction. Ultimately, it can be used to answer the question: “Which features should be included in the Minimum Viable Product (MVP)?”
The Kano model categorizes features into five types based on the impact each has on customer satisfaction:
There is a methodology for mapping customer responses to questionnaires onto this model. When conducting a customer survey, the customer should be asked about his or her opinion on each feature twice: once in a positive question and once in a negative question:
Each question should provide five possible answers:
The feature can be mapped to the Kano types based on the chosen answers using the table below:
Looking at the distribution of the answers, it is possible to draw conclusions about the impact a certain feature has on customer satisfaction. If the answers are evenly distributed between two or more Kano types, this could be an indication that the product should be offered in different flavors (e.g. “standard” and “professional”) to better meet the needs of different customer types.
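Since the Kano evaluation table is widely published, the mapping from answer pairs to feature types can be sketched directly in code. The answer labels and letter codes below are one common rendering of the standard table, adapted to this article's terminology (Must Have, Linear, Delighter); treat the exact wording as an assumption.

```python
# Sketch of the standard Kano evaluation table.
# Same five-answer scale for the positive and the negative question.
ANSWERS = ["like", "expect", "neutral", "live with", "dislike"]

# Rows: answer to the positive question; columns: answer to the negative one.
# M = Must Have, L = Linear, D = Delighter,
# I = Indifferent, R = Reverse, Q = Questionable (contradictory answers).
TABLE = [
    # negative:  like expect neutral live-with dislike
    ["Q", "D", "D", "D", "L"],   # positive: like
    ["R", "I", "I", "I", "M"],   # positive: expect
    ["R", "I", "I", "I", "M"],   # positive: neutral
    ["R", "I", "I", "I", "M"],   # positive: live with
    ["R", "R", "R", "R", "Q"],   # positive: dislike
]

def kano_type(positive: str, negative: str) -> str:
    """Map one customer's answer pair to a Kano feature type."""
    return TABLE[ANSWERS.index(positive)][ANSWERS.index(negative)]
```

For example, a customer who would like the feature present and dislike its absence sees it as Linear, while one who merely expects it but dislikes its absence sees it as a Must Have; tallying these codes over all respondents gives the distribution discussed above.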
Teams should focus their development efforts on the “Must Have”, “Linear” and “Delighter” features, in that order.
The MVP should be based on the “Must Have” features. However, even though the “Must Have” features are required for the product to survive, they do not have to be fully implemented from the beginning. In many cases a reduced version of a “Must Have” feature may be adequate, with the full feature planned for a later release. The “Linear” and “Delighter” features should likewise be planned for later releases.
A final important note: the impact a certain feature has on customer satisfaction is not constant and may decrease with time. For example, “Delighters” may degrade to “Linear”, and “Must Have” features even to “Undesired”.
How can data cubes (multidimensional data models) be used in the context of Business Process Management (BPM) to support management efforts in understanding, planning and improving business processes?
Traditional approaches define a customized multidimensional data model for analyzing operational data.
The proposed approach is to define a standardized multidimensional data model for analyzing data that can be applied to any business process model.
This approach is novel and innovative not only because of the mapping of business process data to a data cube, but also because of what becomes possible once the mapping is complete and facts have been generated (e.g. via simulation): non-technical business analysts gain a completely new understanding of the dynamic behavior within organizations, through the extensive range of inquiries supported by standard data cube analysis interfaces.
Business process management, simulation and the use of a tested methodology have complementary roles in providing businesses with useful information so that managers can make better decisions.
Business Process Management (BPM) is about the management of business processes. It is a top-down, cross-functional, process-centric management approach, which deals with the design, implementation, control and improvement of the end-to-end processes of an organization.
Business process modeling, as a part of Business Process Management, provides a visualization and static analysis component that describes the context in which a set of activities occur.
Simulation provides a dynamic behavior component that is able to test assumptions about all conditions that affect process performance.
The three perspectives (BPM, business process models and simulation) allow the extremely complex interactions and interdependencies within organizations to be better understood by managers, so that they can better direct their organizations. All three views are associated with traditional analysis methods that are, at best, only partially integrated. Due to this lack of integration, and the cumbersome, highly technical analysis techniques currently available, many critical questions cannot be easily answered, or cannot be answered at all.
A new approach is proposed that extends the functionality and value of BPM, process modeling, simulation and data cube analysis techniques, by combining, transforming and integrating all relevant data into a data cube structure that can be easily created and queried. In the context of business process modeling, a multidimensional model for analyzing the data generated by business process model simulations is developed. A process model, along with the simulation metamodel, is used to define the dimensions and the facts of the multidimensional model. The multidimensional model is generic and is therefore not constrained to a specific kind of business process model.
When using this approach, non-technical users are able to analyze a specific business process model and answer critical questions about their organization. The business process model is used to fill the dimensions with data, and data from simulations of the model are used to fill the facts.
The business process cube makes it possible to answer a wide range of mission-critical questions regarding the performance of the organization. These questions range from simple ones, such as “What is the time required to perform Activities per Process?”, to more complex questions that could not be easily answered before, such as “What is the average throughput time and cost for processing a specific piece of Information for a specific type of customer for each product type between a specific Input and a specific Output?”.
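The core mechanism can be sketched in a few lines of code. The fact rows, dimension names and numbers below are invented for illustration: the process model supplies the dimension values, simulation runs supply the fact rows, and a roll-up over chosen dimensions answers questions like the ones above.

```python
# Minimal sketch of a business process cube (all data is invented).
from collections import defaultdict

# Fact table: one row per simulated activity execution.
# Columns 0-3 are dimensions, columns 4-5 are measures.
facts = [
    # (process,     activity,       customer_type, product_type, hours, cost)
    ("Order2Cash", "Check Order",  "retail",   "standard", 1.0,  20.0),
    ("Order2Cash", "Check Order",  "business", "standard", 1.5,  30.0),
    ("Order2Cash", "Ship Goods",   "retail",   "standard", 4.0,  80.0),
    ("Order2Cash", "Ship Goods",   "business", "premium",  6.0, 120.0),
    ("Complaint",  "Assess Claim", "retail",   "standard", 2.0,  40.0),
]

def rollup(facts, dims, measure):
    """Average a measure over the chosen dimension columns (a cube roll-up)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for row in facts:
        key = tuple(row[d] for d in dims)
        totals[key] += row[measure]
        counts[key] += 1
    return {k: totals[k] / counts[k] for k in totals}

# "What is the average time required to perform Activities per Process?"
avg_by_process = rollup(facts, dims=(0,), measure=4)

# Slicing by more dimensions answers the more complex questions, e.g.
# average duration per customer type and product type:
avg_by_customer_product = rollup(facts, dims=(2, 3), measure=4)
```

Standard OLAP interfaces expose exactly this kind of grouping interactively, which is what allows non-technical analysts to explore the simulated process data without writing queries.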