If you are looking for my blog on SMPS design, see the Publications tab on this website. There are links to EEWeb. Eventually I will turn the raw blog into a few articles that are more suitable for general reading. The dialog on the blog is found in the Power Management Professionals LinkedIn group.
I just read that Roger Boisjoly died. He is the engineer who tried to stop the Challenger launch. While I am sympathetic to those in a race to the bottom due to competition, taking shortcuts, rationalizing, and ignoring risk has consequences. Read the full story here.
If you're a product manager, beware of delivering late! You will end up in a hotel in Chengdu, like the Green Westin Hotel, where I am relaxing after 7 days of long hours. A software release was late, and my customer had production on hold waiting for it. As CEO of the product, you are the one ultimately accountable for execution.
We delivered, and now the customer wants us to stay longer and remain on call in case there are problems, train their employees, etc. Yet I must move on to the next release and leave it to others to manage support.
A lot of people are making noise about the Foxconn explosion. Responses to the article range from business as usual to a slave operation with nets to catch runaway indentured servants trapped in company housing and chained to an assembly line. Foxconn does final assembly of iPads, etc. I presume final assembly is done in China because of low-cost skilled labor. Some speak as if Foxconn is the only company involved in building iPads; however, I believe I saw a breakdown that determined about 10% of the value is added in China. The semiconductor supply chain required to service the iPad is global. I know many of the power supply components came from a semiconductor manufacturer I worked at in the US.
I have been hanging out at one of the contract package and assembly houses in Chengdu and conditions there are similar to Silicon Valley in the 80's. It is clean. Workers are generally happy. They have a nice cafeteria, with food way better than anything in the US, and get a short nap. Yes, they make a lot less than Americans. But my lunch was only $1.39 USD. If they are slaves, they are quite happy ones that have homes and families and go shopping on the weekend. Sound familiar?
There is still some construction in the business parks, and I saw some empty lots. I have been told that Chengdu is becoming China's technology center. I am not sure why, because it is a long way from any port.
Housing is dense, much like Shanghai. These apartments are quite nice inside: tile floors, nice furniture. They are small by American standards, but then people are not having large families.
The air quality is a bit sketch, as my kids say. The locals pointed out that Chengdu is surrounded by mountains and claim most of this is moisture. An expat I ran into this morning said that is just what the locals like to say. My eyes don't burn or anything, so I am guessing it is a mix. Shanghai air looks much the same. Foggy-like. Not too cold, not too hot, just right.
Getting here was not so easy. On the 3 hour trip from Shanghai, the plane turned around and landed at some closed airstrip a third of the way back to Shanghai, in god knows where. We sat on the runway while a big storm cleared in Chengdu; the pilot said there was wind shear. By the time the flight arrived in Chengdu it was 5 AM instead of 10 PM. The taxi line had a few hundred people in it, and my private ride was long gone. Being the engineer, I calculated it would take 3 hours to get a taxi, and I had been awake for over 30 hours and had to work that day. I discovered you can go upstairs to where the taxis drop off. You wait for one to stop, and when the passengers get out, you jump in fast before the driver goes down a level to the stand. He gets a customer faster, and you get the hell out of Dodge. I left the taxi line for those thinking inside the box.
If you are a Product Manager you get to hang out with pretty girls and wear nice hats, but don't be deceived. This was taken early in the day on the first shift. I was not so fresh by the middle of the second shift. CEO of a product can be a tough job. Eight hours of sleeping, sixteen hours of work. Seven days a week. The blue jacket is a nice perk. Actually, there are several colors, like blue, white, yellow and pink. Blue is for engineers, white for operators, and pink for QA, etc.
We ain't in Kansas any more, boys. The nearest Starbucks is an hour away in the downtown area. The coffee at the hotel is boiled. That's right, boiled coffee. Very strong, but it keeps you alert! Like in India, the trick is to eat what the locals eat. Don't eat anything that pretends to be American. McDonald's is better than in the US, but the local food is much better. You have to be hungry and in a hurry to drive through McDonald's when you have such great food available.
Are you wanting to be CEO of a product yet?
Whenever management introduces a change, things seem to get worse before they get better, or get worse followed by further change in hope of fixing the newly created problem. Sometimes it just spirals out of control from compounded reactions and feels like there is no way to pull out before the crash. It is interesting to look at this phenomenon from a systems perspective. I am going to use a lot of "geek speak," but I think the analogy helps explain what causes spiraling out of control and how to prevent it. So put on your propeller hat and be assured you can take it off when you reach the end.
Linear feedback systems are characterized by a mathematical model of the relationship between inputs and outputs. Signals in the model have two attributes: amplitude and phase. Amplitude is the strength of a signal, and phase describes when a signal change occurs. Management systems are complex, but they have a similar character: there are actions with direction and force, and there are reactions that occur in time, usually delayed.
It is desirable for a system to have a simple relationship between input and output, such that the output follows the input. When the input increases, the output increases, and vice versa. However, in some systems, when the input increases, the output decreases at first, then increases later. The mathematical description of this is a Right Half Plane Zero. The physical cause is related to energy storage and delay.
In some power conversion circuits, energy is stored during one time period and transferred to the output during a later time period. Other conversion circuits store and transfer at the same time. The circuit that delays the energy transfer has a Right Half Plane Zero and responds to an increase in input by decreasing its output before increasing it. Management systems are full of energy storage and delay, and behave in a similar manner.
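The dip-then-rise behavior can be sketched with a toy model. Assuming a simple first-order plant with a Right Half Plane Zero, H(s) = (1 - a·s)/(1 + τ·s), the step response has a closed form; the parameter values below are invented purely for illustration:

```python
import math

def rhp_zero_step(t, a=0.5, tau=1.0):
    """Step response of H(s) = (1 - a*s)/(1 + tau*s).

    The output starts below zero (the "wrong" direction) and then
    rises toward its final value of 1.
    """
    return 1.0 - (1.0 + a / tau) * math.exp(-t / tau)

print(rhp_zero_step(0.0))   # -0.5: initial dip in the wrong direction
print(rhp_zero_step(10.0))  # ~1.0: the eventual increase
```

The larger the zero's time constant a is relative to the pole's τ, the deeper and longer the initial dip, which is exactly why the analogy to organizational delay is apt.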
Let's look at an example from a startup I was employed by many years ago. This startup manufactured semiconductor devices. Each month, wafers would exit the FAB and were sliced, packaged, and tested. There was a pattern of wafers exiting the FAB the first week of the month and parts shipping on the last day of the month. The accounting system produced metrics each month. The board reviewed the numbers and managed the expectations of the stockholders, in this case the VCs. There was huge pressure to have good numbers each month to maintain confidence in order to get another round of funding.
Unfortunately, this was an inefficient operation because flow was not smooth. I suggested that we smooth the process, which means packages ship after the end of the month, during the time that new wafers are exiting the FAB, thus removing the peaks in the testing process. How would the suggested change impact the system? First, there would be a one-time delay in shipments of one week. This would be followed by increased efficiency, which would eventually increase the plant's capacity. From a systems point of view, the response is decreased output followed by increased output. A side effect would be poor metrics for one month, followed by better metrics each month as the improved efficiency took effect. Also, some customers might be angry about the delays.
Like the power converter, this system had a storage mechanism followed by a delayed transfer. Silicon processing is batch oriented. A batch exits the FAB and is packaged. Further batches leave packaging and go to test. Then these batches are re-batched and shipped. The change in delay caused the same effect as a Right Half Plane Zero. This phenomenon is very common. If customer demand increases suddenly, working capital increases, inventory drops, etc. We deal with these problems by buffering with inventory, building lean systems, improving predictive analytics, etc. The bullwhip effect in a supply chain is another example of energy storage and delay, but in that case it leads to oscillations.
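The bullwhip effect can be demonstrated with a toy single-stage model (all numbers here are invented): a stocking point that corrects its inventory aggressively, with orders arriving two periods after they are placed, turns a single step in end demand into oscillating orders that overshoot the demand badly.

```python
def simulate_orders(demand, target=20.0, gain=0.8, delay=2):
    """Toy bullwhip model: each period the stage orders
    demand + gain * (target - inventory), and orders arrive
    `delay` periods after they are placed."""
    inventory = target
    orders = [demand[0]] * delay  # pipeline primed at steady state
    for t, d in enumerate(demand):
        inventory += orders[t] - d  # an old order arrives, demand ships out
        orders.append(max(0.0, d + gain * (target - inventory)))
    return orders[delay:]  # just the orders placed during the run

demand = [4.0] * 5 + [8.0] * 15  # one step up in end demand
orders = simulate_orders(demand)
print(max(demand), max(orders))  # the orders overshoot the demand step
```

The delay plus the aggressive correction is the whole story: each reaction arrives after the situation it was reacting to has already changed.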
Things get really bad when management fails to understand these dynamics and reacts to the temporary decrease in output/performance or increase in cost. This leads to oscillation like the bullwhip effect, or worse. In some cases, if the reactions continue, the feedback effectively turns positive: the system no longer self-regulates, and it self-destructs. This happens when a change temporarily decreases performance, and before the system switches to increased performance, there is another change that results in a second, more intense decrease in performance, followed by another... driving it into destruction. Sometimes fear is the real force behind the reactions.
There are a couple of ways to avoid spiraling out of control. One way is to recognize the pattern and wait long enough to learn whether the reduced performance is temporary and will self correct like the Right Half Plane Zero, or continue. If it continues, the relationship between the change and its effect may not be understood. A second approach is to anticipate the effect and counter it with another change. In engineered systems, this is called feed forward. Essentially, you compensate for the Right Half Plane Zero by providing another path that allows rapid changes in input to bypass part of the system and go straight to the output. A third approach is to remove the Right Half Plane Zero by redesigning the system.
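The feed-forward idea can be sketched numerically. Take a toy plant with a Right Half Plane Zero, H(s) = (1 - a·s)/(1 + τ·s), whose step response initially dips; adding a fast, decaying direct path from input to output cancels the dip while leaving the final value alone. The parameters are invented for illustration:

```python
import math

def step_response(t, a=0.5, tau=1.0, feedforward=False):
    """Step response of H(s) = (1 - a*s)/(1 + tau*s), optionally with a
    fast direct feed-forward path that bypasses the slow plant."""
    y = 1.0 - (1.0 + a / tau) * math.exp(-t / tau)
    if feedforward:
        y += (a / tau) * math.exp(-t / tau)  # the bypass contribution
    return y

print(step_response(0.0))                    # -0.5: the dip
print(step_response(0.0, feedforward=True))  # 0.0: dip cancelled
```

In the management analogy, the bypass path is anything that serves the output directly while the slow part of the system catches up, such as a buffer of pre-built inventory.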
Here are some recommendations:
If you don't understand a management system, delay reactions to change and study its behavior. Don't overreact, and certainly don't let reactions compound. You must learn the relationship between input and output. Also consider making only one change at a time.
If your business model will allow it, go lean. Going lean will speed up the effect of the Right Half Plane Zero or eliminate it. If the effect happens fast enough, you will reduce the chance of reacting to a reaction.
Use feed forward. In the startup case, this implies that for several months you work hard to build some inventory. Then, when the system is changed, use the inventory to avoid late shipments and poor metrics. Manage management's expectations: let them know that the system will still have some temporary swings in metrics that will settle out within a few months, but their magnitude will be smaller than making the change and taking the hit all at one time.
Well, enough "Geek Speak." I hope you see the value of systems thinking and not overreacting when making changes. Remember that human systems are more complex than mechanical or electrical ones. The same principles apply, but the complexity demands more attention and experimentation.
For the non-geeks out there, a good resource is Peter Senge's The Fifth Discipline, which gives examples of system behavior in less geeky terms.
At Finishing School, there was a fair amount of discussion about Management vs. Leadership: Management being a control mechanism, and Leadership being a direction mechanism. This balance was reflected in the M in the MBA degree and the L in the department name. However, in real life I run into many situations where people resist control or don't follow. The default assumption seems to be that if people resist, the manager lacks skills, and if they don't follow, the leader lacks charisma or social intelligence. Therefore, the finishing schools are making big bucks teaching people how to manage and lead.
More often than not, though, when management and leadership don't produce results, there is an impasse or block. In individuals, an impasse can be the result of fear, personality, over-emphasis on character, etc. An organization might be blocked by structure, habit, control mechanisms, or lack of resources. When an impasse occurs, management often asserts itself, but this makes the problem worse, as it reinforces the blocking mechanism. Leadership is the better tool, but because it relies on inspiration, it can be weak.
So what can we do? Basically, this is where the missing leg of the stool comes in. The third leg is the therapeutic aspect, but a more active and approachable term is coaching. The role of the coach is not to manipulate, dictate, or inspire. The role of the coach is to raise awareness. When individuals and organizations become more aware of themselves, the blocks typically resolve on their own. The reason is that an impasse is normally maladaptive behavior that distances the individual or organization from reality; they are responding to a disconnected and imagined reality.
(This is not to discount the role of imagination in leadership and creativity. There is a difference between being well grounded in reality and imagining a future that we can aspire to. I am talking about a sterile imagination that inhibits creative action. We all know what it feels like to be around people and organizations that are stuck vs. creative and fun ones.)
There is a lot of talk these days about innovation. Every LinkedIn group under the sun has someone with a theory about how to make innovation happen. A recipe, a technique, a system. However, the solutions are typically based on the false assumption that innovation can be "made" to happen. If you buy into much of this advice, you will have a double impasse: yours, and the original maladaptive behavior. When you see a lack of innovation, look for what is blocking it, and try to raise awareness. Once the block gives way, you can use the management and leadership skills you paid so dearly for.
What I am saying implies that innovation is a natural and healthy response to our environment, not a program or procedure. Like all living things, you cannot command or lead them to grow. If you don't believe me, try it on your house plant. I'll stick to sunshine, water, and nutrients.
I spent the afternoon on the exposition floor at APEC to see what is going on in digital power. In particular, I was looking for general-purpose technology that would help in building isolated, digitally controlled switch mode power supplies, and for technology to roll your own controller. Maxim showed a state-controlled offering, but unless you have $$$ in your hand, high volume in your pocket, and a gold pen to sign an NDA, all you could do was see a couple of waveforms and come back in Q3 to get a data sheet.
The platforms that were on display were:
- TI's Piccolo Real Time CPU
- Microchip's dsPIC
- Microsemi's SmartFusion
- Cypress' PSoC 5
Piccolo is a 32-bit real-time microcontroller with a 150 ps resolution PWM, 12-bit ADC, comparators, I2C, Flash, a control law accelerator, floating point, complex math, etc. Pricing is about $3. dsPIC is a 16-bit digital signal controller with 10/12-bit ADCs, DACs, I2C, PWM, Flash, a MAC, etc. Pricing is similar to Piccolo.
Both of these offerings are basically a microcontroller with support for digital control applications. Microchip ran one of the educational sessions and pointed out that a CPU designed for digital control has to have an optimized IO and timer system to minimize delay from the sample-and-hold, through the ADC and the compensator, to the PWM. The pipeline is about 4 µs. It seemed clear that with this delay, feed-forward mechanisms are pretty normal. They also touted their flexible PWM, which can be configured in many different ways. I have not looked at Piccolo's PWM architecture, but this would be something to examine in addition to the total delay. Both the Piccolo and the dsPIC have pre-canned libraries that implement PID or similar analog control techniques digitally, but one can also go pure digital.
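To give a feel for what those pre-canned compensator libraries do, here is a minimal sketch of a digital PI loop regulating a hypothetical first-order plant. The plant model and gains are made up for illustration; real vendor libraries are fixed-point PID/2P2Z routines tied directly to the ADC and PWM hardware.

```python
def regulate(setpoint, kp=0.02, ki=2.0, dt=1e-4, steps=20000,
             plant_gain=10.0, plant_tau=0.01):
    """Each control cycle: sample the output, compute the error, update
    the integrator, clamp the command to a valid PWM duty, then advance
    the plant model dv/dt = (plant_gain * duty - v) / plant_tau."""
    v, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - v
        integral += ki * error * dt
        duty = min(max(kp * error + integral, 0.0), 1.0)  # PWM limits
        v += dt * (plant_gain * duty - v) / plant_tau
    return v

print(regulate(5.0))  # settles close to the 5.0 setpoint
```

The ~4 µs pipeline delay Microchip described would show up here as the output sample lagging the PWM update by a few cycles, eating phase margin; that is why the vendors obsess over the sample-to-PWM path.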
SmartFusion has an ARM microcontroller, a programmable analog block with ADC, DAC, current monitors, temperature monitors, and comparators, and an FPGA. The guy doing booth duty said that customers like it because they can roll their own PWM in the FPGA and play other tricks to differentiate their end products. The drawback is that if you don't want to make your own PWM, you're a bit stuck. (Note that you will not find these on the Microsemi site; you have to use the Actel site. You would think that a keyword search on the Microsemi site would find SmartFusion, but no...) Microsemi indicated that there are a dozen or so new designs using SmartFusion, which was released 9-12 months ago, so this is new stuff. Pricing is $40-$50. Perhaps that is why there are so few customers. However, given the price of FPGAs, this price makes some sense; just don't use it in a low-cost SMPS.
The PSoC 5 is a general-purpose PSoC, but there is also a PowerPSoC, which has PWMs and a hysteretic controller. The PowerPSoC block diagram does not show any CPU; it looks like a building-block system for simple applications like LED lighting. The PSoC 5 has a PWM and some digital blocks, and the basic blocks seem to support digital control. Pricing on the PSoC 5 is not available yet, or at least I could not find any prices, and the datasheet is marked preliminary. Development kits seem to be for sale on the Cypress website. Older PSoC prices are in the $5 range; I'll guess these are in the $10+ range.
It was quite clear who was tuned in when it came to digital control: Microchip. They taught a digital control seminar, were totally excited about dsPIC applications, and had real advice about design tradeoffs and performance. TI was promoting a broad offering and not focused specifically on digital control. Microsemi did not have anyone in the booth who deeply understood it. Cypress was asleep at the wheel. (Maxim was peeing their pants over their new chip, but no details or datasheet. Just a "trust me, this is so cool," although it might be a one-trick pony.)
My take, without a deep dive into these devices' capabilities, is that Microchip and TI are probably the most worth looking into as SMPS controllers. Microsemi's offering is quite new and there are not very many customers, so it will be hard to leverage any experience outside Actel. Cypress is a wild card, but if they are not engaged at APEC, it is hard to take them seriously.
One could also use an FPGA with a MAC, such as the Spartan-3E, but that is probably much more work, and they are not cheap. I'm betting the dsPIC would be fun to play with, and I will probably buy a starter kit and play around.
Do values matter to New Product Development? I suspect most people would answer yes to the question, but not agree on what those values are. Let's work by analogy and see if we can sniff them out. An example from manufacturing will be used to tease out the four values of Efficiency, Effectiveness, Design, and Optimization. These will then be applied to product development, context will be discussed, and some observations will be given.
I spend a lot of time in semiconductor manufacturing environments, in particular final electrical test. Test floors in Asia are usually organized as a grid of work stations with operators managing multiple workstations. Each work station has a robot called a "handler" and a "tester". The handler moves product and the tester finds manufacturing defects.
The first picture is the input device. The second picture shows the robotic mechanism that moves devices between operations, such as rotate, test, mark, inspect, etc. Devices then exit the machine and are placed into reels if they are good devices, and a trash bucket if they are bad devices.
The market for semiconductors is very competitive and the equipment is very expensive. No one survives for long unless this process is very efficient. Units per hour is universally monitored and managed. Small improvements in throughput directly hit the bottom line. The core value is Efficiency.
However, end users of the devices care about quality. Devices must meet specifications. As always, the world is gray, and there are tradeoffs between test time and test quality. A more expensive tester can improve quality and throughput, but raises the capital cost. Averaging a measurement can reduce variation and reduce the probability of test escapes, but it increases test time. When the goal is quality, the core value is Effectiveness.
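The averaging tradeoff is just the statistics of the mean: averaging n readings shrinks the noise roughly by a factor of sqrt(n) while multiplying the measurement time by n. A quick sketch, with made-up parameter and noise values:

```python
import random
import statistics

def measure(n_avg, rng, true_value=1.25, noise=0.05):
    """One test insertion: average n_avg noisy readings of a parameter.
    Test time scales with n_avg, while the noise on the result shrinks
    roughly as 1 / sqrt(n_avg)."""
    return sum(true_value + rng.gauss(0.0, noise) for _ in range(n_avg)) / n_avg

rng = random.Random(42)
singles = [measure(1, rng) for _ in range(200)]
averaged = [measure(16, rng) for _ in range(200)]
print(statistics.stdev(singles))   # about 0.05
print(statistics.stdev(averaged))  # about 0.05 / 4, at 16x the test time
```

Tighter distributions mean fewer marginal devices misclassified at the spec limit, which is exactly the effectiveness-vs-efficiency tradeoff on the test floor.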
Somewhere in the manufacturing organization someone worries about test strategy, capital allocation, market strategy, and competitive positioning. Decisions regarding purchase of buildings and equipment operate on a different time scale than everyday improvements. This core value is Design.
The last value is more holistic and organic. A well functioning manufacturing organization will maximize value and minimize cost, but you will find many localized sub-optimum processes. Someone at the top has the responsibility to ensure that all functions work as a whole to reach a global maximum. We can call this core value Optimization.
Values vs Behaviors
The reason I like to call these values rather than perspectives or behaviors is that people get attached to them. If you have effectiveness tendencies, how do you feel when you can't get something done because someone with an efficiency bug won't work around the process? And if you have spent your entire career improving efficiency, how do you feel when someone wants to trash your beloved process while changing the organizational design or getting something done a different way? These emotional attachments make them values, and they can be very persistent, even to the detriment of the organization.
New Product Development Example
How do these values relate to New Product Development? Let's change perspectives and consider the companies that develop the robotics and test equipment, and consider their development process.
Capital equipment is very complex and the design process is expensive. A typical system has mechanics, software, firmware, and electronics. With long development cycles, the risk of delivering a product that does not sell is large.
First, consider the software. Suppose the team uses an agile process. Sprints are by design a process of translating requirements into deployable code. Well-designed sprints are very efficient. Roles, process, and tools are well defined. At the end of each sprint there may be a postmortem where improvements to the process are discussed. At this level there is not a lot of work on effectiveness, because it is all about execution efficiency. (Agile guys and gals: please don't beat me up over the simplification, I know it is not as simple as this.)
The sprints are consuming a backlog of requirements, typically managed by a product owner or product manager. The product manager is responsible for delivering maximum value to a market or customer. There is some negotiation with the scrum master over the backlog, effectively managing the tension between delivering value and managing execution. This is all about effectiveness, and about managing the relationship between efficiency and effectiveness.
When it comes to mechanics and electronics, things work a bit differently. The cost of iteration is too high to run an agile team, so much more upfront requirements effort is required. This changes the dynamics for the product manager, who must spend more time in the field understanding the market and the context the equipment will be used in. The product manager must generate an effective product definition, then hand it off to an efficient development team.
Both the values of efficiency and effectiveness are in play, but their emphasis and time relationships have changed due to the nature of the product. Agile processes can relax the upfront need for effectiveness a bit and depend on feedback coupled with efficiency. A staged process with high iteration costs can relax efficiency a bit and depend on a very effective front end that gets the requirements right the first time. (If you are in a fast-moving market for hardware-based products, start thinking platforms as a way to deal with the mismatch between the need for speed and the need for upfront definition.)
Design plays a larger role in developing capital equipment. The huge capital risk puts pressure on strategic thinking, because a bad bet can result in company failure. Design of the organization may be customized on a product-by-product basis. Optimization will play a smaller role, due to the small number of products in development. With less routine and data, optimization is difficult.
Dynamic Nature of Values
The four values of Efficiency, Effectiveness, Design, and Optimization are universal. However, their emphasis and interrelationships are context dependent. In addition to context dependency, individuals have biases and tend to favor one above all the others. Group bias is self reinforcing, working against the need to change when the context changes.
My goal here is to create a language for discussion of values in the context of product development. The values are Efficiency, Effectiveness, Design, and Optimization. I propose using the language as follows:
- Look at your current situation and categorize behavior according to the four values. Ask where these values are found in your new product development, who holds them, what their relative strengths are, and how they relate to each other.
- Look at your market and assess how well the current values address this market in terms of product development. Do they create value for the market or hinder the creation of value?
- Decide what shifts in values must take place to improve value creation.
My experience seems to indicate that effectiveness in product development is harder to come by than efficiency. Efficiency is much easier to measure, and personal risk is much lower. If one is measured by conformance to process, a product can fail in the market while one receives high marks. Such risk avoidance happens in other areas of product development. In "Leading Product Development," Wheelwright and Clark give the following reasons why senior leaders tend to get involved in new product development when things go wrong, rather than up front:
- Low risk/high return when they arrive late in the game.
- It's urgent and visible when it is in trouble.
- Firefighting skills are rewarded.
- It's exciting to put out fires.
- It is easy to be wrong at the beginning, but at the end, if one fails to fix it, it is not one's fault.
- A lot of knowledge is required to be involved at the beginning.
- At the front end problems are not well defined, and tools/roles are not clear.
- Lack of metrics/long feedback paths means less feedback.
- Absence of urgency.
I think that effectiveness inherently involves risk, because it requires judgment. In a risk-averse environment, this skews the values toward efficiency. If senior managers are subject to this bias, I imagine everyone else is too. Yet what I think works best is a focus on effectiveness at the front end of product development, efficiency at the back end, and a well-managed boundary between them.
This implies companies must deal with risk-averse attitudes that block effective behavior.
Four values that are important for product development are Efficiency, Effectiveness, Design, and Optimization. The emphasis on and relations between these values are context dependent. People and groups have preferences and tendencies that need to be managed. The front end of product development tends to require effectiveness, and the back end tends to require efficiency. Risk aversion tends to skew values toward efficiency.
The core categories of Efficiency, Effectiveness, Design, and Optimization are inspired by Adizes. Please see his books if you want to dig deeper into his framework. Note that he uses different words and has different nuances than I do. I don't make any claims of representing his work, I just want to give credit to his work because it led to my categorization.
I love Starbucks. I even have a gold card that I proudly display at the register, except in Asia where it does not work. And the Asian cards don't work in the US. This is sort of a drag because I wanted to show off my card from Tokyo, but it does not work. But I digress...
Something that has been bugging me is how orders are taken. I first have to give my order to the coffee wench, then I have to repeat it to the cashier. It has been like this for a long time. But something new has happened. Now the cashier must input all the small details of the order into the computer so it can keep inventory. Because the coffee wench holds the cup, I have to answer 3-4 more questions. Even worse, the line slows down, and the cashier is stressed out. No more friendly greeting.
Seems like Starbucks has designed procedures for its own benefit, not mine. Just like the times I have been kicked out right at closing time because the employees must follow the rules to stay out of trouble.
Has Starbucks Tuned Out?
Dealing with Assumptions is the time consuming aspect of Reverse Financials, covered in my previous post. Managing assumptions is really about dealing with uncertainty and risk. Let's start by making a distinction between uncertainty and risk.
Risk is uncertainty with a downside.
Uncertainty is your friend. It gives you the opportunity for an upside and a way to beat your competitors. Risk is your enemy. It creates the possibility of losing accumulated capital. The goal is to turn risk into uncertainty.
Before we dig into the Reverse Financials, let's take a look at uncertainty management in general. The framework I use comes from Hugh Courtney's book 20/20 Foresight. You can read the book if you want a deep dive.
Uncertainty can be categorized into 4 levels.
Level 1 is complete certainty. This does not mean you know everything, but it means everything can be known with enough certainty that you can make decisions without considering uncertainty. In this world, to the extent it still exists, you can use discounted cash flow, Porter's Five Forces, SWOT, and all the traditional tools to make decisions. Life is like a chess game: whoever is best at reading the board and making strategy wins.
Level 2 consists of a set of mutually exclusive collectively exhaustive (MECE) outcomes. This represents standards wars, regulatory changes, strategy moves in some more stable industries, etc.
Level 3 is bounded outcomes, a range of outcomes. Market share falls into this category.
Level 4 amounts to unbounded outcomes.
Assumptions in Reverse Financials
I want to draw attention to a couple of things. First, most of the uncertainty we will deal with in Reverse Financials will be Level 2 and Level 3. Second, product strategy will affect uncertainty. For example, with a disruptive innovation, you probably have more time than with a sustaining innovation. Most companies are scared of disruptive innovations and will watch from the sidelines, then be a fast follower or a me-too. With a sustaining innovation, preemption is far safer, so delaying can have drastic consequences.
The first step in dealing with assumptions is to categorize each assumption by the level of uncertainty.
Let's go through each item in the Uncertainty column from bottom to top.
The first item is General Overhead with Level 1 uncertainty. Management has determined the overhead percentage, and it is not going to change. Therefore, this is an assumption that can be ignored.
The next item is a burden rate, which probably covers manufacturing. The spreadsheet shows a range of 20% to 30%, but it is categorized Level 1. The implication is either that the number can be determined and the range removed, or that there is simply a mistake in the analysis so far. It is best to do the research and find out which is the case.
The next item is raw materials, with Level 2 uncertainty. Engineering has proposed several product architectures. One of them includes creating a new platform; several of them propose using an existing platform along with several choices for reusing components from previous product designs. These choices affect cost, development time, and performance. If you are a product manager, this should keep you up at night. The Level 2 uncertainty from engineering affects product value and go-to-market strategy, implying that the uncertainty of revenue depends on choices made by engineering. (Everyone knows this intuitively, so there should be no surprise here.)
Moving along we get to sales support, cost of warranty, etc. These items are just not very predictable, and nobody has any way to make the uncertainty Level 2, so they are Level 3. They are bounded by experience.
We now get to cost of sales and marketing. This is Level 2 because there are some basic choices around using internal sales, reps, and other channels of distribution. However, each channel is reasonably well understood, so this is not Level 3.
Finally, we get to revenue, which is Level 3 uncertainty, but it has some Level 2 characteristics due to the Level 2 uncertainty within engineering. As is usually the case, this is the hardest uncertainty to deal with, because external information is much harder to get than information about internal affairs.
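To make the categorization concrete, here is a sketch in Python of the kind of assumption catalog just described. All names, levels, and ranges are illustrative stand-ins; the actual values live in the spreadsheet:

```python
# Hypothetical catalog of the assumptions discussed above, tagged with
# Courtney's uncertainty levels. Names and ranges are illustrative,
# not taken from the actual spreadsheet.
assumptions = {
    "general_overhead":    {"level": 1, "range": (0.10, 0.10)},  # fixed by management
    "burden_rate":         {"level": 1, "range": (0.20, 0.30)},  # range should be resolved
    "raw_materials":       {"level": 2, "range": (0.30, 0.45)},  # discrete architecture choices
    "sales_support":       {"level": 3, "range": (0.02, 0.06)},  # bounded by experience
    "warranty":            {"level": 3, "range": (0.01, 0.04)},
    "sales_and_marketing": {"level": 2, "range": (0.13, 0.17)},  # channel choices
    "revenue":             {"level": 3, "range": (4.8e6, 7.2e6)},
}

# Flag any Level 1 item that still carries a range -- like the burden
# rate above, it signals unfinished research or a mistake.
suspect = [name for name, a in assumptions.items()
           if a["level"] == 1 and a["range"][0] != a["range"][1]]
print(suspect)
```

Running this flags `burden_rate`, the same inconsistency called out above: a Level 1 item should not still carry a range.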
Dealing With the Uncertainties
Courtney categorizes strategies into three questions:
- Shape or Adapt
- Now or Later
- Focus or Diversify
I will ignore the third question as my concerns here are more tactical and "focus or diversify" applies more to overall strategy and portfolio management. However, one could make a strong argument against this claim, so feel free to do so :-)
Let's consider the overall question of whether to shape or adapt with respect to uncertainty. Because we are developing a product, we must treat uncertainties external to the organization differently from internal ones. Internal uncertainties are usually easier to shape than external ones, but not always. There is no rule that says you have to answer the same way for both internal and external uncertainties.
Now or later has less independence. If your strategy for dealing with external uncertainty is "now", it is pretty hard to apply a "later" strategy to internal uncertainties, because they are bounded by the external timing. A "later" external strategy imposes a similar boundary. Waiting carries the risk of preemption: a competitor can always be first, and there is no way to undo your delay.
Because of these asymmetries, let's start with external uncertainty. We have a Level 3 uncertainty, which means we have a range of possible outcomes. The overall worst-case scenario in the spreadsheet says we can have a return on sales of -5%. The first thing we might do is a sensitivity analysis to get a feel for where it hurts most.
The sensitivity analysis covered these parameters:
- Tech sales support
- Install, warranty, and training
- Sales and marketing
Each value shows the resulting change in return on sales from a 10% change in that parameter. Because revenue and raw material costs are interrelated as discussed above, we can address those first. Let's start with revenue.
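A one-at-a-time sensitivity check of this kind is easy to sketch. All dollar figures below are invented for illustration; they are not the spreadsheet's numbers:

```python
# One-at-a-time sensitivity sketch: nudge each cost parameter up by 10%
# and observe the change in return on sales. All figures are illustrative.
revenue = 6_000_000
costs = {
    "raw_materials": 2_400_000,
    "tech_sales_support": 300_000,
    "install_warranty_training": 240_000,
    "sales_and_marketing": 900_000,
}

def ros(revenue, costs):
    """Return on sales: profit as a fraction of revenue."""
    return (revenue - sum(costs.values())) / revenue

base = ros(revenue, costs)
for name in costs:
    bumped = dict(costs, **{name: costs[name] * 1.10})
    delta = ros(revenue, bumped) - base
    print(f"{name}: {delta:+.2%}")
```

With these made-up numbers, the biggest line item (raw materials) dominates the sensitivity, which is exactly the kind of signal the analysis is meant to surface.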
The initial post said this was a disruptive innovation. The first question is: can we convert this into a Level 2 uncertainty and use a shaping strategy? Possibilities might include patents, industry standards, tying up a critical resource, or a network externality. If this is possible, it is probably better to shape than adapt. A scenario analysis would prepare for each Level 2 outcome and uncover ways to shape the outcome.
If there is no way to convert to Level 2, it still might be best to shape, but it also might be better to delay decisions using real options techniques. A real option creates an option to execute in the future when there is better data. For example, a critical technology might be developed, but the decision to develop a product might be delayed until the environment is ready. A product launch might be delayed to time the market. Multiple options might be created so that with more data, one option may be chosen and the others discarded.
Part of the analysis is deciding on whether you are making a big bet, or managing downside. Also, you must know if your organization is capable of the strategy. Can your organization support a big bet? Do you have to make a big bet because you are a startup and don't have the cash required to finance multiple options? Are your managers flexible enough to adapt to an unfamiliar strategy?
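The big-bet-versus-options tradeoff can be made concrete with a toy expected-value comparison. The probabilities, payoffs, and costs below are all invented; the point is the shape of the comparison, not the numbers:

```python
# Toy comparison of a big bet now versus a staged real option, assuming a
# Level 2 market outcome ("standard adopted" with probability p_adopt).
# All payoffs and costs are illustrative.
p_adopt = 0.6
payoff_if_adopted = 10_000_000
loss_if_not = -4_000_000
full_investment = 2_000_000
option_cost = 400_000          # e.g. prototype work only

# Big bet now: fully committed before the outcome is known.
ev_big_bet = (p_adopt * payoff_if_adopted
              + (1 - p_adopt) * loss_if_not
              - full_investment)

# Staged option: spend option_cost to wait for information, then invest
# only if the standard is adopted (the downside branch is avoided).
ev_option = p_adopt * (payoff_if_adopted - full_investment) - option_cost

print(f"big bet EV: {ev_big_bet:,.0f}, option EV: {ev_option:,.0f}")
```

With these numbers the option wins because avoiding the downside branch is worth far more than the cost of waiting; flip the probabilities or shrink the downside and the big bet can win instead.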
Let's look at the raw materials uncertainty. It was specified as Level 2. In this case the designers have a finite number of technology and architecture choices and they affect the cost structure. The same issues apply. A scenario analysis can uncover assumptions and issues with each choice. It might be possible to create options by developing prototypes of several architectures in parallel (set based design), then evaluate them, considering how it relates to the revenue risk management strategy.
We must also consider how the design strategy relates to the market strategy. If the market strategy is to shape through an industry standard, the design team has to architect around the standard. The design team may want to prototype a couple of options based on guesses as to the final form of the spec, then wait and see how it plays out. Once the standard is near acceptance, execute on the prototype closest to the standard.
The main point is that all assumptions are uncertainties. Uncertainties should be analyzed to uncover their characteristics, and strategies should be formed to manage them. You have to decide whether to shape or adapt, and whether to move now or later. You must apply the proper tools for the uncertainty level. And you must align the strategies where they interact.
Other Ideas on Assumption Management
My main point of reference is Discovery Driven Growth. McGrath proposes managing assumptions using an options approach. Information is produced by learning, which lowers downside. Checkpoints define strategic places to stop and evaluate: decision points where you stop, pivot, or purchase a new option.
The fundamental concern I have with a process tuned to real options is that it inhibits big bets and tends to emphasize adaptation over shaping. While this might work in many situations, any process that is focused on one approach for managing uncertainty has the downside of misapplication. Like all tools, context matters. So my takeaway is simply this: use reverse financials and assumptions, but manage assumptions with a rich risk management toolkit, and don't commit your process to any one tool or strategy.
However, if you must have simplicity, an options approach is probably best, because most market risk is Level 3. Another assumption to manage ;-)
Causal financial models lend themselves to situations where inputs to the model are well known. Sustaining innovation falls into this category. But what kind of models should one use for disruptive innovations? This post will demonstrate risk-based models intended for disruptive innovations, where model inputs are not well defined.
Jose Briones uses the following Project Categorization in his Beyond Stage-Gate presentation:
Jose then categorizes the financial analysis into three levels going from the top right corner to the bottom left:
- Level 1 - Reverse Income Statement/Real Options
- Level 2 - Probabilistic Decision Analysis
- Level 3 - NPV/DCF
The message is clear: the more certain your innovation, the more you can use traditional tools. Larry McKeough addresses Level 3 in his Rocky Mountain Product Camp 2010 Presentation. However, Larry's DCF spreadsheet tool considers assumptions, so even his tool recognizes the presence of risk in low-uncertainty innovations.
Risk Caused By Disruptive Innovation
Christensen uses the following model of disruptive innovations as shown in my Rocky Mountain Product Camp 2010 Presentation:
There are two risky places to innovate. The first is a low-end or new-market disruption. Both of these have considerable market and technical uncertainty, as shown by the circles below. Each of these strategies involves new value networks, new customers, and new definitions of value.
The second place is when sustaining innovation pushes performance beyond user needs and the supply chain begins to reconfigure itself to supply flexibility and speed to the market. When platforms disaggregate and margins shift between players, look out!
DCF is not well suited for these situations, especially the low-end and new-market disruptions. A better tool is the reverse income statement with assumption management. The remainder of this post will walk through a reverse financial statement. A follow-on post will address assumption and risk management.
Let's proceed by building out a spreadsheet step by step. I will roughly follow McGrath's example from her book Discovery Driven Growth. Assume the product is a $100K machine used in manufacturing lines. First, we will frame and scope, then work on deliverables, and finish with the reverse financials.
Framing and Scoping
Management has stipulated $1M in operating profits with a 17% return on sales (ROS) and a 20% return on assets (ROA). This results in a $5M allowance for assets, and requires sales to be approximately $6M. Given a $100K selling price, this implies selling 5 systems a month. Two questions follow:
- Can manufacturing build 5 systems a month?
- Can ROS/ROA be maintained?
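The framing arithmetic above can be checked in a few lines:

```python
# Back out the framing numbers from management's stipulation:
# $1M operating profit at 17% return on sales and 20% return on assets.
required_profit = 1_000_000
ros_target = 0.17
roa_target = 0.20
unit_price = 100_000

required_sales = required_profit / ros_target    # ~$5.9M, call it $6M
asset_allowance = required_profit / roa_target   # $5M allowance for assets
units_per_month = round(required_sales / unit_price / 12)

print(round(required_sales), round(asset_allowance), units_per_month)
```

Profit divided by the ROS target gives the required sales level, profit divided by the ROA target gives the asset allowance, and the sales level divided by the unit price gives roughly 5 systems per month.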
The benchmark range of return on sales is 25% to 3%. The data came from the public financials of companies in the same product category. This should immediately send shivers down your spine! You should start asking yourself questions like: how do we ensure our product lands on the good side of the range?
Notice what has happened so far. Instead of creating a bottom-up plan that results in an ROA/ROS figure, we start with a required ROA/ROS and ask, what does this imply? It implies a sales level of 5 systems per month. It implies assumptions about ROA/ROS that may not be real. Not only does the benchmark data have a wide range, but if this is a disruptive innovation, benchmarks may not even apply.
Now we have to take the next step, which is the deliverables specification. We have to start breaking down the overall assumptions into smaller assumptions we can manage.
Starting with assumption F10/A3, the sales team has predicted the cost of sales and marketing is 15%, with a range of 13% to 17%. The left side (F10) says that to meet the original ROS/ROA, we require 15%. The right side (A3) says that the possible range is 13% to 17%. Therefore, we have to manage the assumptions on the right side to either meet the requirements on the left, kill the project, or improve some other assumption to compensate.
Using the data from the deliverables, a reverse income statement and reverse balance sheet are created. The left side shows a return on sales (F40) of 25% and a return on assets (F47) of 44%. This is better than the original 17% and 20% requirement. Therefore, if the left side of the spreadsheet holds, we have a project. But as my father used to say, IF is the biggest word in the dictionary.
Look at the worst case on the right side. Return on sales is -5% and return on assets is -6%. Assumptions matter!
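At its core, a reverse income statement is just revenue minus cost fractions. The sketch below uses invented cost fractions, chosen only so the extremes reproduce the 25% best-case and -5% worst-case ROS above; the real spreadsheet's line items will differ:

```python
# Reverse income statement sketch: cost items expressed as fractions of
# revenue with (best, worst) ranges. The fractions are made up, chosen
# so the extremes match the 25% best-case and -5% worst-case ROS.
cost_ranges = {
    "raw_materials":       (0.35, 0.48),
    "sales_and_marketing": (0.13, 0.17),
    "tech_sales_support":  (0.04, 0.10),
    "install_warranty":    (0.03, 0.10),
    "burden":              (0.10, 0.10),
    "general_overhead":    (0.10, 0.10),
}

best_ros = 1 - sum(best for best, worst in cost_ranges.values())
worst_ros = 1 - sum(worst for best, worst in cost_ranges.values())
print(f"ROS best {best_ros:.0%}, worst {worst_ros:.0%}")
```

The point of the structure is visible even in this toy: each line item's range flows straight through to the spread between the best and worst case, so managing the assumptions means tightening those ranges.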
If this were a sustaining innovation, the assumptions would be reasonably accurate with small ranges, and DCF would be a great tool. Diffusion curves could be used to estimate revenues. An IRR could be generated and compared to a weighted cost of capital. Small uncertainties could be accounted for by running a Monte Carlo analysis on the DCF.
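For completeness, here is what a minimal Monte Carlo over a DCF might look like. Cash flows, spread, and discount rate are all illustrative:

```python
# Minimal Monte Carlo on a DCF -- the kind of treatment that works when
# uncertainties are small, as in a sustaining innovation. All cash flows,
# the discount rate, and the 5% spread are illustrative.
import random

random.seed(1)
discount_rate = 0.12
expected_cash_flows = [-2_000_000, 600_000, 900_000, 1_200_000, 1_200_000]

def npv(rate, flows):
    """Net present value of a series of annual cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

# Perturb each year's cash flow by a small normal error (5% sigma).
trials = []
for _ in range(10_000):
    flows = [cf * random.gauss(1.0, 0.05) for cf in expected_cash_flows]
    trials.append(npv(discount_rate, flows))

mean_npv = sum(trials) / len(trials)
p_loss = sum(t < 0 for t in trials) / len(trials)
print(f"mean NPV {mean_npv:,.0f}, probability of loss {p_loss:.1%}")
```

With small spreads, the NPV distribution stays tight around the deterministic answer; that is exactly why this treatment breaks down for disruptive innovations, where the input ranges are too wide for a single causal model.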
We basically framed/scoped management requirements for ROS and ROA. We then worked backwards and created financial deliverables that meet those numbers, and we treated the deliverable components as assumptions with ranges of values. Finally, we created reverse financials that show the best and worst case ROS/ROA.
- Don't use the wrong tools. Your classic MBA tools don't always work.
- Work backwards from goal to assumptions.
- Manage assumptions.
Where do we go from here?
The next post will discuss basic risk management concepts and address how to deal with the assumptions of reverse financial statements.