The Nickel Tour

With all the stuff going around about the bird flu, I am reminded of one of the less glamorous management consulting projects I heard about (in general terms) last year in the turkey business.

Now in many operations projects, key goals are to improve business processes in dimensions such as:

  • average throughput
  • inventory backlog
  • peak capacity
  • quality
  • cycle-time
  • cost
  • risk/failure points

A company often has tons of business processes in place. Sometimes there may be a manageable set of predominant process flows, but then there can be a zillion microflows. One way for a consultant to get grounded in a situation in the face of this complexity is to go on a "nickel tour" with the client.

In the case of the management consultant I met with, the goal of the current project was to reduce the number of injuries in the processing plants of one of the big turkey producers (I presume to reduce lawsuits, etc.). The automated equipment in certain sectors of the meat business, as I understand things, can be quite scary. Not for the faint-hearted for sure: some of the equipment can separate the meat from the bone (of entire animals) in a matter of seconds. Imagine what can happen if your arm gets caught in the machine …

So on day 1 the consultant arrives on the scene, and one of the plant workers hands the consultant a pair of rubber boots to go on a "nickel tour" of the plant. I don’t think the tour included the slaughterhouse, but one can imagine that the scene was not exactly what a recent MBA grad dreams of when picturing life as a consultant.

To generalize, on many nickel tours the client walks the consultant through the back office, introduces sales personnel, has the consultant sit in with customer service representatives, attend working meetings such as information technology user sessions, etc. The purpose is to give the consultant a ground-floor view of what happens in the business (plus an opportunity to ask questions). The nickel tour helps to compress a complex view of the business into one short experience. While a lot of the tour can turn out to be chit-chat and small talk, I have often turned the nickel tour into a very useful experience. It can be a valuable source of initial checkpoint information for the consultant (e.g., if the consultant sees large piles of inventory, frazzled or distressed workers, or disorganized workspaces). A consultant may also meet people on the tour who can serve as useful sources of information later in an engagement.

On the flip side, a consultant needs to be wary of "stage plays". This is where the nickel tour is not a real tour of operations, but a performance in which someone within the client organization has dressed up the situation to look different from, or better than, how it really runs day-to-day.

In any case, make sure to think about giving or getting a nickel tour in a consulting relationship. Although it is not always possible or desirable, a nickel tour can really help consultants get a "live" feel for the business at hand.

Somewhere Between Goal Management, Client Facilitation, and CYA

As I work through my own professional goals plan for the year, I am reminded of an early lesson in management consulting. This technique has more to do with firms that are implementation-oriented as opposed to strategy-based, and it is related to getting a client organization to move. I don’t know if there is a name for the technique, but for the purposes here, I’ll call it "progress-to-goal management".

The essence of the technique begins with the consultant working with the client organization to establish one or more measurable goals and then reporting on actual performance regularly with a gap diagnostic. Let’s say a goal is to create $5.5 million in annual revenue for a new start-up initiative (say $500K in Q1, $1M in Q2, $2M in Q3, and $2M in Q4). The consultant then works with the client to put a regular measurement system in place, say monthly. Suppose that by the end of Q1 the client has achieved only 15% of the goal. The consultant should be working with the manager who owns the revenue to report not only the numeric gap in performance but also a diagnostic of why things are off track (from both quantitative and qualitative perspectives).
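
To make the arithmetic concrete, here is a minimal sketch of the numeric side of progress-to-goal gap reporting. The quarterly targets come from the example above; the actual figures and the reporting helper are hypothetical, and the qualitative "why are we off track" diagnostic obviously lives outside the numbers.

```python
# Minimal sketch (hypothetical) of progress-to-goal gap reporting.
# Quarterly targets are from the example in the text; actuals are invented.

quarterly_targets = {"Q1": 500_000, "Q2": 1_000_000, "Q3": 2_000_000, "Q4": 2_000_000}

def gap_report(actuals):
    """Print target vs. actual, the gap, and percent of goal for quarters reported so far."""
    cumulative_target = 0
    cumulative_actual = 0
    for quarter, target in quarterly_targets.items():
        if quarter not in actuals:
            break  # only report quarters for which actuals are in hand
        actual = actuals[quarter]
        cumulative_target += target
        cumulative_actual += actual
        print(f"{quarter}: target ${target:,.0f}, actual ${actual:,.0f}, "
              f"gap ${target - actual:,.0f} ({100 * actual / target:.0f}% of goal)")
    if cumulative_target:
        print(f"Year to date: {100 * cumulative_actual / cumulative_target:.0f}% of cumulative goal")

# e.g., a Q1 actual of $75K would be 15% of the Q1 target
gap_report({"Q1": 75_000})
```

The point of the mechanics is simply that the gap is visible at every reporting interval, not just at year end.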

Although the technique may seem obvious, in implementation consulting one is often trying to diagnose problems or set up operations for things that happen below the corporate or business-unit level. The devil may be in the details and the lower levels of the company, so to speak. Thus, things like board-level measurements and controls may be neither enforced nor visible. Other situations where measurements may not be readily available include setting up new business structures (e.g., a new business line) or bypassing old business structures (e.g., where old methods are too cumbersome or bureaucratic).

Beyond trying to help the client (which is the primary goal, of course), the flip side of this is that the consultant is pulling a "CYA" in some sense. One can’t start a project and then show up six months later and simply report to executive management that the goal wasn’t met by the business unit or functional area management. Progress-to-goal (and gap) reporting is needed every step of the way, along with mid-course control and corrections. In this way, the consultant separates the question of proper management control from management’s ability to execute.

Schools Of Thought In Management Consulting

These days I’m in an environment where I’m working with other management consultants, working with people formerly at other management consulting firms, and experiencing the second-order effects of management consultants currently in the business. Being immersed in all of this led to an interesting discussion on two "schools of thought" in management consulting regarding style: 1) laying out options for clients, or 2) making recommendations. (I should note that by highlighting these two schools of thought, I don’t mean to imply that every engagement lends itself to one style or the other – for example, many implementation consulting engagements may simply be change-management oriented or facilitative. Strategy engagements, on the other hand, typically lead to decision-making crossroads where a predominant school of thought would play out.)

Sometimes people have drawn similarities between consultants and doctors in the way that advice should be dispensed. Now, I’m no medical doctor, but my understanding is that doctors are typically trained to lay out options for patients, not to make decisions for them.

I am generally with that school of thought: consultants should lay out options for clients, along with the qualitative and quantitative tradeoffs, risks, and benefits. It is the client’s responsibility to make a decision. In fact, some clients would be put off by an outsider telling them what they should do.

But I see that there is another school of thought, which comes into play depending on either the consulting partner leading the engagement or the client’s wishes. Frankly, some clients believe that if consultants work for them, the consultants should also state their own opinions on how to proceed, e.g., what the consultant would do if the consultant were in the client’s shoes.

This is delicate ground, but to be frank, I have at times stated my personal opinion while clearly separating it from the facts presented. I also make sure to caveat that opinion with any biases I may have, while emphasizing that I have kept those personal biases out of the analysis given to the client.

Maybe people have different opinions on how they’d like consultants to treat them. Or perhaps people have opinions on what they prefer doctors to do when giving diagnoses. I have not seen any scientific analyses of how either school of thought affects client satisfaction or a client’s propensity to enlist services from a consultant, but I would venture to say that perhaps I’m too old school on the consultant and doctor front.

Blogs In Management (Also Management Consulting Blogs?)

I just discovered the blog of management guru David Maister, acknowledged as one of the world’s authorities on the management of professional services firms. I particularly like David’s Fast Company article from 2002, "Are All Consultants Corrupt?", because it touches on topics that one needs to address regularly as a management consultant: notably, how one can ensure that one delivers services one can be proud of from both an ethical point of view and a quality-of-product perspective. To this, all I can say is that one should leave the management consulting profession if ethics and quality can’t be met.

But the real purpose of this post was to point to David’s post on internal blogs as a management tool. His text here gets at a real pain point linked to diseconomies of scale in management:

As firms get larger, more dispersed and more complex, the disaffection of partners (in professions and businesses of all kinds) is becoming more evident. I get calls all the time enquiring about my availability to consult on the issue of partners’ unhappiness and their feeling that they are treated like employees in an increasingly corporate culture.

I am a believer that blogs can help with this sort of thing (essentially flattening the communication structure associated with complex organization structures). That said, blogs are not a panacea for organizations and managers that do not know how to 1) use written communications to complement their management style and 2) deal with the semi-structured and dynamic nature of the blog medium. These latter items are table stakes in my opinion, but they can be easily underestimated.

In the comments section of David’s post, I was also encouraged by a tip that Ernst & Young may be using blogs internally. I have blogged before about consulting firms using (or not using) blogs (e.g., here, here, here). It’s good to hear of more activity in the consulting area and to learn of consulting/management blogs like David’s.

Black Cats, White Cats, & Diversity Musings In Consulting

This Martin Luther King Day I reflect upon my family’s white cat and black cat. The younger white cat picks on my much older black cat – can’t quite say that it’s a state of harmony, but at least there’s no blood. I enjoy both cats, each for different reasons.

The term "cat" is also used by musicians quite a bit. One of the (black) drummers that I am currently studying is Dennis Chambers. In his 1992 book, "In the Pocket", Dennis wrote, "One thing I liked about Miles [Davis] is that he finally realized that there are some funky white cats. A long time ago when I was coming up, if you wanted somebody to play funk, you hired a black guy. It was unheard of for a white guy to play funk. White guys were playing rock and roll or whatever."

Now the term cat is not really used to refer to management consultants. I’ve heard other terms used, such as "guns", "mercenaries", etc. What I will say is that management consulting tends to be a white, male-dominated profession. When prepping for interviews with consulting firms during b-school in the late 90s, I recall looking at some of the brochures and websites of management consulting firms, seeing the non-diverse pictures of employees, and thinking something to the effect of "entering the consulting field is going to be a bit of a shocker". I’m not the only person who has observed the non-diverse aspects of the management consulting field. For example, check out this message board post and this article.

The skewed demographic makeup of the executive ranks of Fortune 500 companies has been covered by many others, so perhaps the makeup of many consulting firms should not be a shocker. Whether the makeup of many consulting firms and corporate offices is right, I dunno, but I will say that when finding role models to follow within these businesses, I’ve often had to look to people who are nothing at all like me, when I would have preferred a larger set of people to look to.

Swearing By Or Swearing At Benchmarking

I thought I would post something about a perspective I have on benchmarking that might be somewhat of a minority position among management consultants. My thoughts were triggered in the context of an operations project where I am working with an ex-Booz Allen Hamilton person.

Benchmarking client company operations against comparable company operations helps to place the client company in the context of competition, quantify areas of difference, and provide a fact-based foundation upon which management decisions can be made.

I totally agree with this in theory. In reality, one can run into a ton of issues, some of which include:

  • Existing metrics and measurements were collected for a service or product that is not comparable to the service or product of current concern – This is somewhat ill-defined, but as an example, an operations process supporting a wireless data service may differ materially from a support process for a broadcast video service.
  • The client or management pokes a hole in the validity of the benchmark and undermines the credibility of your case – This may be a variation of the first issue, but in any case, think about how strongly one wants to hinge the consultative study on a single benchmark source or data point.
  • Benchmark information cannot be readily found – OK, one can recommend starting a benchmarking study before taking any action, as these need not be cost-prohibitive studies to conduct. Consulting firms can also leverage their knowledge bases and networks to bring light to the table. I just caution against actions that delay progress when other methods of fact-building can be used (perhaps in parallel with the benchmarking).
  • Client metrics don’t exist, are questionable, or are not comparable – After jumping through hoops, the client’s measurement systems may not match the requirements and measurement guidelines used by participants in the benchmarking study …

So given the issues above (and I have to say that, in my experience, every operations consulting engagement I’ve been in has run into at least one of them), I would suggest that other methods for operations analysis not be discounted too much (these methods do not require benchmarking as a necessary condition):

  • Structural analysis – e.g., whether an operations process runs in parallel or in series, and whether it could be run in parallel instead, is something that can be factually analyzed and need not necessarily be compared to other companies.
  • Bottleneck analysis – e.g., when downstream resources are regularly waiting on another resource that is bottlenecked, and this happens with high statistical certainty, it can indicate that the work structure is not optimal (perhaps flattening of the work structure, cross-training, or better planning is desirable); a toy sketch after this list illustrates the basic calculation.
  • Trend analysis – e.g., if costs are going up, cycle-times to respond are getting longer, and customers are getting more upset, these are facts one can respond to without having benchmark data to the nth degree up-front. Many companies do an OK job of capturing static data, but longitudinal data over time can be very informative as well.
  • Consistency analysis – although not as cut and dried, the argument is along the following lines: if a company chooses a particular strategy (e.g., being a feature leader versus a low-cost provider), then some tactics are inconsistent or less consistent with that strategy (e.g., being a feature leader while running below-average R&D investment rates).
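
Since the bottleneck item is the most mechanical of the four, here is a small, hypothetical sketch of the underlying arithmetic. The step names, service times, staffing, and demand rate are all invented for illustration; a real engagement would rest on measured wait-time and queue data rather than assumed service times. The point is that this kind of analysis needs only the client's own operational data, no external benchmark.

```python
# Hypothetical sketch of bottleneck analysis on a serial process: given a
# demand rate and each step's service time and staffing, compute utilization
# and flag the most constrained step. All numbers are made up.

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    service_time_min: float  # minutes of work per unit at this step
    servers: int             # people or machines working the step in parallel

def utilization(step: Step, demand_per_hour: float) -> float:
    """Fraction of the step's hourly capacity consumed by demand (near 1.0 means constrained)."""
    capacity_per_hour = step.servers * 60.0 / step.service_time_min
    return demand_per_hour / capacity_per_hour

steps = [
    Step("intake", 2.0, 1),
    Step("processing", 12.0, 3),
    Step("inspection", 5.0, 1),
    Step("packing", 3.0, 1),
]

demand = 11.0  # units per hour
for s in steps:
    print(f"{s.name:<11} utilization {utilization(s, demand):.0%}")

bottleneck = max(steps, key=lambda s: utilization(s, demand))
print(f"Bottleneck candidate: {bottleneck.name}")  # 'inspection' at ~92% here
```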

So while I am all for benchmarks, if one finds oneself swearing at benchmarks, there may also be an opportunity to step back and apply some other principles that do not require as much benchmark information to be readily available. A tremendous amount of value can also be added using analysis methods like the ones mentioned above. There are many other methods – please feel free to share some of your favorites.

Bonus link: a good website covering balanced scorecard information as well as quality function deployment (QFD) principles (the latter of which I have seen applied more in product management scenarios).

Sharing Web Information On Structuring Business Strategy Projects

I am currently working on a business strategy project in the wireless space. In solving a client’s problem, hypothesis formulation plays a key role in putting structure around one’s work. I’ve mentioned the MECE (mutually exclusive, collectively exhaustive) framework before as a somewhat lower-level philosophy for structuring the hypothesis.

At a higher level, however, I think the following site does a good job of highlighting, from a project management and client engagement perspective, the importance of up-front hypothesis generation work. One snippet that is key is the following:

In an effort to bring an assignment in on budget, project managers often fail to structure the problem-solving process up front. Instead of framing hypotheses for the team to test, they launch the team straight into analysis. Junior consultants and researchers, armed with laptops and presentation software, industriously produce charts that display findings. And then, during the week before a presentation, the project manager struggles to pull together the communication, develop the messages, and order the data and insights into a compelling, coherent, fact-based argument that will move the client to action, or fully inform him of the progress to date.

Paradoxically, the manager often finds he has both too much and too little information. Too much in that large amounts of the data and analysis do not support the essential story line. Too little in that support for key arguments is missing.

Do folks have other good public sources of information on the Internet that they’ve seen with respect to hypothesis formulation, presentation, best practices, etc. that they’d like to share? If so, please feel free to do so here.

Accenture’s Blog “Podium” And Other Thoughts On Non-Blogging In Management Consulting

Bartłomiej Owczarek points me to Accenture Netherlands’ blogging space for employees. It is a very nice (and rare) discovery of a blog community that is informal yet connected to a well-known corporate brand. Bartłomiej also shares his hypotheses on why there aren’t more management consultant bloggers out there:

  • because of time constraints
  • because they want it to be perfect from the start
  • because they live in a world full of policies.

I hadn’t really thought about Bartłomiej’s second bullet point before, but I could see how that might play a role in areas of a consultancy.

Bartłomiej, thanks also for the mention. Best!

Update (11/30/05): Stephan starts a naked conversation and shares his thoughts on blogging under the Accenture umbrella.