
5 Questions to Ask Before You Present Analytics

No matter how experienced you are at analyzing data, communicating about your results can be a tremendous challenge.


So it’s not surprising that “Effectively Reporting Your Data Analysis” was one of the best-attended sessions at the inaugural Minitab Insights Conference last month.

The presenters, Benjamin Turcan and Jennifer Berner of First Niagara Bank, have a great deal of experience improving efficiency and enhancing revenue at their organization. They’ve also mentored many new data analysts and helped them learn how to present the findings from their analyses. In their presentation, they raised 5 questions that can help analysts communicate more effectively.

1. Who are you talking to?

The importance of knowing your audience was a central point of their presentation. When you deliver information, whether as a written document or as a presentation, it’s important to tailor it. Berner pointed out that we do that naturally when we talk to people.

For example, suppose you’re in an auto accident. The call to your family might go like this: “Hey, I was in an accident. Don’t worry, I’m fine, but I’m going to be late for the picnic.”

You also need to call your insurance company. The agent will want to know if anyone was harmed, but doesn’t need to know you’ll be late for the picnic. (Nor that you didn’t really want to go to the picnic anyway.) She will need specifics about the cause of the accident, the contact information for the other driver, the license plate of the other car, etc.

Your presentation is different for your family and for the insurance company. You tailor the information to suit each audience’s needs and concerns. The same should be true when you present the results of your analysis. Think about who your audience is, what information is most relevant to them, and how best to deliver that information so they will have the best chance of understanding it quickly.

To get further into this mindset, answer these specific questions about your audience.

2. How much do they know about what you’re analyzing?

One thing is for sure—your audience doesn’t know as much about your analysis as you do. In the session, Berner pointed out:

“You know that content inside and out, backwards and forwards because you’ve lived it for so long. But the people you are speaking to or writing to won’t know that content as well as you do.”

That means you need to look at the information with a fresh pair of eyes. How much of what you know do they need to know?

Imagine a friend calls to tell you he was in an accident. You know nothing about the accident until he tells you. You might assume that he was driving his usual car, but you don’t know that until he confirms it. Maybe he borrowed a friend’s car. Maybe he borrowed your car! (You knew you should never have lent him that key.) If so, he’ll be tailoring that information very, very carefully.

Similarly, if you’ve been analyzing a part-making process and you’re reporting your findings to the employees who actually make the parts every day, it’s probably safe to assume they’re going to intimately understand and care about every step of that process. You may need to share the detailed analysis you performed for each task. But your report to the C-level executives may require only a summary that provides the bottom line, quickly.

3. How much do they understand about analysis and statistics?

Stop me if you’ve heard this one. An insurance agent calls a customer and says, “Sorry sir, but your car was a total loss.” To which the customer responds testily, “No it wasn’t! I got several good years out of that car.” The customer didn’t understand that in the vernacular of the auto insurance industry, “total loss” means something very specific: that repairing the car would cost more than its value before the accident.

Similarly, when reporting statistical results, think about what the audience understands and what they do not. For example, if your audience is not familiar with capability analysis, reporting that Ppk was 0.80 probably won’t be clear. (“Is that good?”) Instead, you could say the data suggest there are about 11,000 defective parts for each million that you create. (That’s a rounded value for the expected “overall” PPM [10969.28], as shown below.) Clearly, that’s not good.

[Figure: Capability analysis]
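
To see where a number like that comes from, here is a minimal sketch in Python, assuming normally distributed data. The spec limits and sample statistics are illustrative stand-ins, not the values behind the chart above.

```python
# Hedged sketch: convert capability statistics into expected defects per
# million (PPM), assuming a normal distribution. All values are illustrative.
from scipy.stats import norm

lsl, usl = 10.0, 14.0           # hypothetical lower/upper spec limits
mean, sd = 12.2, 0.75           # hypothetical overall mean and std. deviation

ppu = (usl - mean) / (3 * sd)   # capability against the upper spec
ppl = (mean - lsl) / (3 * sd)   # capability against the lower spec
ppk = min(ppu, ppl)

# Expected out-of-spec fraction on both sides, scaled to parts per million
ppm = (norm.cdf(lsl, mean, sd) + norm.sf(usl, mean, sd)) * 1e6

print(f"Ppk = {ppk:.2f}, expected overall PPM = {ppm:,.0f}")
```

With these made-up values, Ppk works out to 0.80 and the expected overall PPM lands near 9,900, the same order of magnitude as the figure above.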

Keep in mind also that, just like other industries, statistics has its own vernacular that includes specialized meanings for very common words. Will your audience understand statistical jargon?

4. How will your audience react?


Suppose an ice cream truck strikes your vehicle. Your insurance agent explains that, because of the “Frozen Confections” clause in your policy, you’re not covered. You’re not going to be too happy. If the agent is smart, he’ll be prepared to quote the specific part of the policy that lets them off the hook. (If you’re smart, you’ll start shopping for a different insurance company.)

Similarly, if you’re reporting results that you know some people may not like, you should anticipate push-back and be ready to answer questions like, “Are you sure these data are valid?” or “Why should I believe you?” You may also want to look for ways to frame potentially negative information as an opportunity rather than a problem.

5. Why should your audience care?

How will your audience use the information that you deliver? I once lost a car to a collision myself. When informed, my son used this information to lobby hard for his preferred replacement vehicle. I explained that while “Bugatti Veyron” is a really cool name for a car, we were going to look at cars with boring names, because I cared about the price more than the name.

Similarly, if you want your analysis to spur an executive into action, you may want to work out some rough dollar figures showing how much money the company stands to save. So in addition to reporting the number of defective parts, as discussed above, you might also calculate that improving the process could save the company an estimated $250,000 annually in rework and scrapped parts.

Summary

You spent a lot of time and effort on your project and your data analysis, and sharing what you learned should be a rewarding culmination of those efforts. Taking some time to think about your audience and answer questions about what they need to know about your analysis will help ensure a better experience and better outcomes. Benjamin Turcan and Jennifer Berner’s presentation at the Minitab Insights Conference provided a great reminder of how important these questions are!


Six Sigma Fun: For Want of an FMEA, the Empire Fell

by Matthew Barsalou, guest blogger

For want of a nail the shoe was lost,
For want of a shoe the horse was lost,
For want of a horse the rider was lost,
For want of a rider the battle was lost,
For want of a battle the kingdom was lost,
And all for the want of a horseshoe nail. (Lowe, 1980, 50)

According to the old nursery rhyme, “For Want of a Nail,” an entire kingdom was lost because of the lack of one nail for a horseshoe. The same could be said for the Galactic Empire in Star Wars. The Empire would not have fallen if the technicians who created the first Death Star had done a proper Failure Mode and Effects Analysis (FMEA).

A group of rebels in Star Wars, Episode IV: A New Hope stole the plans to the Death Star and found a critical weakness that led to the destruction of the entire station. A simple thermal exhaust port was connected to a reactor in a way that permitted an explosion in the exhaust port to start a chain reaction that blew up the entire station. This weakness was known, but it was considered insignificant because it could only be exploited by small space fighters, and the exhaust port was protected by turbolasers and TIE fighters. It was thought that nothing could penetrate the defenses; however, a group of Rebel X-Wing fighters proved that this weakness could be exploited. One proton torpedo fired into the thermal exhaust port started a chain reaction that reached the station’s reactors and destroyed the entire battle station (Lucas, 1976).

Why the Death Star Needed an FMEA

The Death Star was designed by the engineer Bevil Lemelisk under the command of Grand Moff Wilhuff Tarkin, whose doctrine called for a heavily armed mobile battle station carrying more than 1,000,000 Imperial personnel as well as over 7,000 TIE fighters and 11,000 land vehicles (Smith, 1991). It was constructed in orbit around the penal planet Despayre in the Horuz system of the Outer Rim Territories and was intended to be a key element of the Tarkin Doctrine for controlling the Empire. The current estimate for the cost of building a Death Star is $850,000,000,000,000,000 (Rayfield, 2013).

Such an expensive, resource-consuming project should never be attempted without a design FMEA. The loss of the Death Star could have been prevented with just one properly filled-out FMEA during the design phase:

[Figure: Example FMEA form]

The Galactic Empire’s engineers frequently built redundancy into the systems on the Empire’s capital ships and space stations; unfortunately, the Death Star’s systems were all connected to the main reactor to ensure that power would always be available for each individual system. This interconnectedness resulted in thermal exhaust ports that were directly connected to the main reactor.

The designers knew that an explosion in a thermal exhaust port could reach the main reactor and destroy the entire station, but they were overconfident and believed that limited prevention measures (such as turbolaser towers, shielding that could not stop small space fighters, and wings of TIE fighters) could protect the thermal exhaust ports (Smith, 1991). Such thinking is little different from discovering a design flaw that could lead to injury or death, but deciding to depend upon inspection to prevent anything bad from happening. Bevil Lemelisk could not have ignored this design flaw if he had created an FMEA.

Assigning Risk Priority Numbers to an FMEA

An FMEA can be done with a pencil and paper, although Minitab’s Companion software for executing and reporting on process improvement has a built-in FMEA form that automates calculations, and shares data with process maps and other forms you’ll probably need for your project.

An FMEA uses a Risk Priority Number (RPN) to determine when corrective actions must be taken. RPNs range from 1 to 1,000, and lower numbers are better. The RPN is determined by multiplying severity (S) by occurrence (O) and detection (D):

RPN = S x O x D

Severity, occurrence and detection are each evaluated and assigned a number between 1 and 10, with lower numbers being better.
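
As a minimal sketch, the calculation and a simple decision rule might look like this in Python. The action threshold of 100 and the severity override are illustrative conventions, not universal rules.

```python
# Hedged sketch of the RPN calculation; thresholds are illustrative.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: S x O x D, each rated 1-10 (lower is better)."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("each rating must be between 1 and 10")
    return severity * occurrence * detection

def needs_action(severity: int, occurrence: int, detection: int) -> bool:
    # Act on a high RPN, or on a high severity rating alone.
    return rpn(severity, occurrence, detection) >= 100 or severity >= 9

# Ratings from the Death Star example worked through below:
print(rpn(10, 3, 4), needs_action(10, 3, 4))  # 120, True  (original design)
print(rpn(5, 3, 2), needs_action(5, 3, 2))    # 30, False  (after corrective actions)
```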

Failure Mode and Effects Analysis Example: Death Star Thermal Exhaust Ports

In the case of the Death Star’s thermal exhaust ports, the failure mode would be an explosion in the exhaust port and the resulting effect would be a chain reaction that reaches the reactors. The severity would be rated as 10 because an explosion of the reactors would lead to the loss of the station as well as the loss of all the personnel on board. A 10 for severity is sufficient reason to look into a redesign so that a failure, no matter how improbable, does not result in injury or loss of life.

[Figure: FMEA failure mode and severity ratings]

The potential cause of failure on the Death Star would be attack or sabotage; the designers did not consider this likely to happen, so occurrence is rated a 3. The main control measure was shielding that would be effective only against attack by large ships; detection was rated a 4 because the Empire believed these measures to be effective.

[Figure: FMEA potential causes and current controls]

The resulting RPN would be S x O x D = 10 x 3 x 4 = 120. An RPN of 120 should be sufficient reason to take action, but even a lower RPN would require corrective action due to the high rating for severity. The Death Star’s RPN may even be too low, given the Empire’s overconfidence in the current controls. Corrective actions are definitely needed.

[Figure: FMEA risk priority number]

Corrective actions are easier and cheaper to implement early in the design phase, particularly if the problem is detected before assembly starts. The original Death Star plans could have been modified with little effort before construction started. The shielding could have been improved to prevent any penetration, and more importantly, the interlinks between the systems could have been removed so that a failure of one system, such as an explosion in the thermal exhaust port, would not destroy the entire Death Star. The RPN must be reevaluated after corrective actions are implemented and verified; the new Death Star RPN would be 5 x 3 x 2 = 30.

[Figure: FMEA revised metrics after corrective actions]

Of course, doing the FMEA would have had more important impacts than just achieving a low number on a piece of paper. Had this step been taken, the Empire could have continued to implement the Tarkin Doctrine, and the Universe would be a much different place today.

Do You Need to Do an FMEA?

A simple truth is demonstrated by the missing nail and the kingdom, as well as the lack of an FMEA and the Death Star: when designing a new product, whether it is an oil rig, a kitchen appliance, or a Death Star, you’ll avoid many future problems by performing an FMEA early in the design phase.

About the Guest Blogger: 
Matthew Barsalou is an engineering quality expert in BorgWarner Turbo Systems Engineering GmbH’s Global Engineering Excellence department. He has previously worked as a quality manager at an automotive component supplier and as a contract quality engineer at Ford in Germany and Belgium. He holds a bachelor of science in industrial sciences, a master of liberal studies, and a master of science in business administration and engineering from the Wilhelm Büchner Hochschule in Darmstadt, Germany.

Would you like to publish a guest post on the Minitab Blog? Contact publicrelations@minitab.com

References

Lucas, George. Star Wars, Episode IV: A New Hope. New York: Del Rey, 1976. http://www.amazon.com/Star-Wars-Episode-IV-Hope/dp/0345341465

Opie, Iona, and Peter Opie, eds. The Oxford Dictionary of Nursery Rhymes. Oxford: Oxford University Press, 1951, 324. Quoted in Lowe, E.J. “For Want of a Nail.” Analysis 40 (January 1980): 50–52. http://www.jstor.org/stable/3327327

Rayfield, Jillian. “White House Rejects ‘Death Star’ Petition.” Salon, January 13, 2013. Accessed January 14, 2013. http://www.salon.com/2013/01/13/white_house_rejects_death_star_petition/

Smith, Bill, ed. Star Wars: Death Star Technical Companion. Honesdale, PA: West End Games, 1991. http://www.amazon.com/Star-Wars-Death-Technical-Companion/dp/0874311209

Developing Key Performance Indicators

Key performance indicators (KPIs) are critical to ensuring a project team has the performance data it needs to sustain improvements. With KPIs, a team can evaluate the success of a project against its established goals.

Types of Metrics

There are two types of metrics to consider when selecting KPIs for a project: outcome metrics and process metrics.

Outcome metrics provide insight into the output, or end result, of a process. Outcome metrics typically have an associated data lag, because time must pass before the outcome of a process is known. Project teams typically identify the primary outcome metric early in their project work. For most projects, this metric can be found by answering the question, “What are you trying to accomplish?”

Process metrics provide feedback on the performance of elements of the process as it happens. It is common for process metrics to focus on the identified drivers of process performance. Process metrics can provide a preview of process performance for project teams and allow them to work proactively to address performance concerns.

Example of Selected KPIs

Consider an example of KPIs for a healthcare-focused improvement project:

  • Project: optimizing hospital patient length of stay
  • Outcome metric: hospital patient length of stay (days)
  • Process metrics: discharge time of day (hh:mm); time discharge orders signed (hh:mm); time patient education completed (hh:mm); discussion of patient at daily discharge huddle (percentage of patients)

In the example above, the project has one primary outcome metric and four process metrics that together compose the KPIs the team is monitoring. Well-crafted improvement project KPIs include both outcome metrics and process metrics. Having a mix of both provides the balance of information the team needs to successfully monitor performance and progress toward goals.

Teams should develop no more than three to six KPIs for a project. Tracking more than six metrics can dilute the team’s focus and make it more challenging to communicate a project’s progress effectively.
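
As a lightweight illustration, the healthcare KPIs above could be captured in a structure that makes the outcome/process mix and the three-to-six rule easy to check. This is a hypothetical sketch; the class and field names are invented.

```python
# Hedged sketch: a simple KPI record for an improvement project.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    kind: str   # "outcome" or "process"
    unit: str

kpis = [
    KPI("Hospital patient length of stay", "outcome", "days"),
    KPI("Discharge time of day", "process", "hh:mm"),
    KPI("Time discharge orders signed", "process", "hh:mm"),
    KPI("Time patient education completed", "process", "hh:mm"),
    KPI("Patients discussed at daily discharge huddle", "process", "% of patients"),
]

# Check the guidance discussed above: a mix of metric types, three to six total.
kinds = {k.kind for k in kpis}
assert {"outcome", "process"} <= kinds, "include both outcome and process metrics"
assert 3 <= len(kpis) <= 6, "aim for three to six KPIs per project"
```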

Questions to Help Select KPIs

Common questions coaches can use with teams to generate conversation about potential KPIs include:

  • What does success look like?
  • How will it be known if performance is trending away from goals?
  • What data would the stakeholders and sponsors be most interested in?
  • What data is available to the team?

The 3Ms: Meaningful, Measurable and Manageable

Coaches should keep the three Ms of crafting KPIs in mind when working with teams.

  1. Meaningful: KPIs should be meaningful to project stakeholders. Developing metrics that those closest to the project team find useful, without getting feedback from a broader group of stakeholders, can be a recipe for stakeholder disengagement. The KPIs a team selects need to resonate with the stakeholders closest to the process and the problem. The team will know it is on the right track when stakeholders want to know the current status of the KPIs and are discussing progress toward the project goals with their colleagues. Meaningful KPIs make excellent additions to departmental data walls for use in daily huddles, and they support leader rounding, where leaders get out on the floor and speak directly with employees.
  2. Measurable: KPIs should be easily measurable. Sometimes teams can get stuck trying to identify the “perfect” metric for measuring progress toward their project goals. In this pursuit, the team may lose sight of metric options that are already available or automatically reported. Sustainable KPIs should be relatively easy to obtain updates for. If a metric requires time-consuming auditing, or is not readily available to the project team, groups should think twice before selecting it as a KPI. Data that is challenging or time-consuming to obtain is not likely to be regularly updated and reported to stakeholders. Providing timely and accurate updates on KPI performance is an excellent way to support the sustainability of improvements and spark conversations about additional opportunities to enhance processes and reach the team’s goals.
  3. Manageable: KPIs should include metrics that are within the project team’s sphere of management control and influence. If the team selects metrics that measure process elements the team has no control over, then it will not be measuring what matters. Teams should select KPIs that are within the scope of their project, are reflective of a successful outcome, and are performance drivers for their work. Sometimes nice-to-have or might-be-interesting metrics can sneak onto the KPI list for project teams. These additional metrics are not needed; the team should focus on the metrics that will provide accurate feedback on its performance.

Summary

Remember that successful KPIs:

  • Include a balance of outcome metrics and process metrics.
  • Total three to six metrics.
  • Are developed with the 3Ms in mind.

Crafting KPIs is an important step to guide teams through a continuous improvement process. A coach needs to keep the team focused on what success looks like and how best to measure it.

3 Ways to Gain Buy-In for Continuous Improvement

Research out of the Juran Institute, which specializes in training, certification, and consulting on quality management globally, reveals that only 30 percent of improvement initiatives succeed.

And why do these initiatives fail so frequently? This research concludes that a lack of management support is the No. 1 reason quality improvement initiatives fail. But this problem is certainly not isolated to continuous improvement; other types of strategic initiatives across the organization face similar challenges. Surveys of C-level executives by the Economist Intelligence Unit concur, finding that a lack of leadership buy-in and support can derail many strategic initiatives.

Why Else Do Quality Initiatives Fail? 

Evidence shows that company leaders just don’t have good access to the kind of information they need about their quality improvement initiatives. 

Even for organizations that are working hard to assess the impact of quality, communicating impacts effectively to C-level executives is a huge challenge. The 2013 ASQ Global State of Quality report revealed that the higher people rise in an organization’s leadership, the less often they receive reports about quality metrics. Only 2% of senior executives get daily quality reports, compared to 33% of front-line staff members.  

So why do so many leaders get so few reports about their quality programs? Scattered, inaccessible project data makes it difficult to piece together the full picture of quality initiatives and their impact on a company. Because an array of applications is often used to create charts, process maps, value stream maps, and other documents, it can be very time-consuming to keep track of multiple versions of a document and keep the official project records current and accessible to all key stakeholders.

On top of the difficulty of piecing together data from multiple applications, inconsistent metrics across projects can make it impossible to evaluate results in an equivalent manner. And even when organizations try quality tracking methods, such as homegrown project databases or even full-featured PPM systems, these systems become a burden to maintain or end up not effectively supporting the needs of continuous quality improvement methods like Lean and Six Sigma.

Overcoming Limited Visibility 

Are there ways to overcome the limited visibility stakeholders have into their company’s quality initiatives? Research on successful strategic initiatives identifies planning and good communication as key drivers of success, and the same drivers apply to continuous improvement projects.

1. Ensure efficiency. Utilize a complete platform for managing your continuous improvement program to reduce inefficiencies. Using one platform to track milestones, KPIs, and documents eliminates the redundancy of gathering key metrics from various sources when reporting on projects, saving teams hours of valuable time. Looking past the current project at hand, one platform can also make it easy to quickly reuse roadmaps and templates that were useful in previous quality initiatives.

2. Aim for consistency. Centralize your storage by making all relevant documents accessible to all team members and stakeholders. As teams grow and projects become more complex, keeping all team members aligned prevents confusion and reduces back-and-forth emails.

3. Provide real-time visibility for all. Visibility into the progress of your quality project facilitates the day-to-day management of tracking results and addressing any challenges. Utilize dashboards to provide a quick “snapshot” of your project’s progress. Cloud-based capabilities take your dashboard to the next level by instantly communicating real-time results.

Drive for Excellence 

For quality professionals and leaders, the challenge is to make sure that reporting on results becomes a critical step in each project and that all projects use consistent, easily accessible metrics. Teams that can do this will find reporting on their results a manageable task, giving all key stakeholders the visibility necessary for leadership buy-in.

5 Tips to Make Process Improvements Stick!

For a process improvement practitioner, finishing the Control Phase of the DMAIC process is your ticket to move on to your next project. You’ve done an excellent job leading the project team: it identified root causes, developed and implemented solutions to resolve those root causes, put a control plan in place, and transitioned the process back to the Process Owner. Soon, however, you learn that the process has reverted to its original state.

I’ve often heard project leaders lament, “We worked so hard to identify and implement these solutions—why won’t they stick?”

So let’s talk about fishing for a moment, because it offers some great lessons for making process change. Remember the quote, “Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime”? Seems simple enough, right? But what is involved, and how long does it take to teach people to fish so they can eat for a lifetime?

The same is true for process improvements. It seems simple enough to make a change and expect it to stick. So why is it so hard?


The fishing analogy hits home with me. I love to go fishing and have been an avid angler since I was young. And though it’s been a while since I taught my kids how to fish, I do remember it was a complicated process. There is a lot to learn about fishing, such as what type of equipment to use, how to rig the rod, bait the hook, decide where to fish, and cast the line.

One of the most important fishing tips I can offer a beginner is that it’s better to go fishing five times in a few weeks as opposed to five times in an entire year. Skills improve quickly with a focused effort and frequent feedback. People who spread those introductory fishing experiences out over a year wind up always starting over, and that can be frustrating. While there are people who are naturally good at fishing and catch on (pun intended) right away, they are rare. My kids needed repeated demonstrations and lots of practice, feedback and positive reinforcement before they were able to fish successfully. Once they started catching fish, their enthusiasm for fishing went through the roof!

Tips for Making Process Improvements Stick

Working with teams to implement process change is similar. Most workers require repeated demonstrations, lots of practice, written instructions, feedback and positive reinforcement before the new process changes take hold.

Here are several tips you can use to help team members be successful and implement process change more quickly. Take the time to design your solution implementation strategy and control plan with these tips in mind. Also, Companion by Minitab® contains several forms that can make implementing these tips easy.

Tip #1: Pilot the Solution in the Field

A pilot is a test of a proposed solution, usually performed on a small scale. It’s like learning to fish from the shore before you go out on a boat in the ocean with a 4-foot swell. A pilot is used to evaluate both the solution and its implementation, so that the full-scale rollout is more effective. It provides data about expected results and exposes issues with the implementation plan. The pilot should test whether the process meets both your specifications and your customers’ expectations. First impressions can make or break your process improvement solution, so test it with a small group to work out any kinks. A smooth implementation will help workers accept the solution at the formal rollout. Use a form like the Pilot Scale-Up Form (Figure 1) to capture issues that need resolution prior to full implementation.

Figure 1. Pilot Scale-Up Form

Tip #2: Implement Standard Work

Standard work is one of the most powerful but least used lean tools to maintain improved process performance. By documenting the current best practice, standardized work forms the baseline for further continuous improvement. As the standard is improved, the new standard becomes the baseline for further improvements, and so on.

Use a Standard Work Combination Chart (Figure 2) to show the manual, machine, and walking time associated with each work element. The output graphically displays the cumulative time as manual (operator controlled) time, machine time, and walk time. Looking at the combined data helps to identify the waste of excess motion and the waste of waiting.

Figure 2. Standard Work Combination Chart

Tip #3: Update the Procedures

A Standard Operating Procedure (SOP) is a set of instructions detailing the tasks or activities that need to take place each time the action is performed. Following the procedure ensures the task is done the same way each time. The SOP details activities so that a person new to the position will perform the task the same way as someone who has been on the job for a longer time.

When a process has changed, don’t just tell someone about the change: legitimize it by updating the process documentation. Make sure to update any memory-jogger posters hanging on the walls, and the cheat sheets in people’s desk drawers, too. Including a document revision form such as Figure 3 in your control plan will ensure you capture a list of procedures that require updating.

Figure 3. Document Revision Form

Tip #4: Feedback on New Behaviors Ensures Adoption

New processes involve new behaviors on the part of the workers. Without regular feedback and positive reinforcement, new process behaviors will fade away or revert to the older, more familiar ways of doing the work. Providing periodic feedback and positive reinforcement to those using the new process is a sure-fire way to keep employees doing things right. Unfortunately, it’s easy for managers to forget to provide this feedback. Using a Process Behavior Feedback Schedule like Figure 4 below increases the chance of success for both providing the feedback and maintaining the gains.

Figure 4. Process Behavior Feedback Schedule

Tip #5: Display Metrics to Reinforce the Process Improvements

Metrics play a critical role in process improvement efforts by providing signs of the effectiveness and efficiency of the improvement itself. Posting “before and after” metrics in the work area to highlight improvements can be very motivating to the team. Workers see their hard work paying off, as in Figure 5. It is important to keep the metrics current, because they will be among the first indicators if your process starts reverting.

Figure 5. Before and After Analysis

When it comes to fishing, and actually catching fish, practice, effective feedback, and positive reinforcement make perfect.

The same goes for implementing process change. If you want to get past the learning curve quickly, use these tips and enjoy the benefits of an excellent process!

To access these and other continuous improvement forms, download the 30-day free trial of Companion from the Minitab website at http://www.minitab.com/products/companion/.

Understanding Qualitative, Quantitative, Attribute, Discrete, and Continuous Data Types

“Data! Data! Data! I can’t make bricks without clay.”
— Sherlock Holmes, in Arthur Conan Doyle’s The Adventure of the Copper Beeches

Whether you’re the world’s greatest detective trying to crack a case or a person trying to solve a problem at work, you’re going to need information. Facts. Data, as Sherlock Holmes says.


But not all data is created equal, especially if you plan to analyze it as part of a quality improvement project.

If you’re using Minitab Statistical Software, you can access the Assistant to guide you through your analysis step-by-step, and help identify the type of data you have.

But it’s still important to have at least a basic understanding of the different types of data, and the kinds of questions you can use them to answer.

In this post, I’ll provide a basic overview of the types of data you’re likely to encounter, and we’ll use a box of my favorite candy—Jujubes—to illustrate how we can gather these different kinds of data, and what types of analysis we might use it for.

The Two Main Flavors of Data: Qualitative and Quantitative

At the highest level, two kinds of data exist: quantitative and qualitative.

Quantitative data deals with numbers and things you can measure objectively: dimensions such as height, width, and length. Temperature and humidity. Prices. Area and volume.

Qualitative data deals with characteristics and descriptors that can’t be easily measured, but can be observed subjectively—such as smells, tastes, textures, attractiveness, and color.

Broadly speaking, when you measure something and give it a number value, you create quantitative data. When you classify or judge something, you create qualitative data. So far, so good. But this is just the highest level of data: there are also different types of quantitative and qualitative data.

Quantitative Flavors: Continuous Data and Discrete Data

There are two types of quantitative data, which is also referred to as numeric data: continuous and discrete. As a general rule, counts are discrete and measurements are continuous.

Discrete data is a count that can’t be made more precise. Typically it involves integers. For instance, the number of children (or adults, or pets) in your family is discrete data, because you are counting whole, indivisible entities: you can’t have 2.5 kids, or 1.3 pets.

Continuous data, on the other hand, could be divided and reduced to finer and finer levels. For example, you can measure the height of your kids at progressively more precise scales—meters, centimeters, millimeters, and beyond—so height is continuous data.

If I tally the number of individual Jujubes in a box, that number is a piece of discrete data.


If I use a scale to measure the weight of each Jujube, or the weight of the entire box, that’s continuous data.

Continuous data can be used in many different kinds of hypothesis tests. For example, to assess the accuracy of the weight printed on the Jujubes box, we could measure 30 boxes and perform a 1-sample t-test.
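
Outside Minitab, the same test is easy to sketch in Python. The box weights below are simulated, and the 155-gram label weight is a made-up example.

```python
# Hedged sketch of a 1-sample t-test on simulated box weights.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
box_weights = rng.normal(loc=154.2, scale=2.0, size=30)  # 30 measured boxes (g)

t_stat, p_value = ttest_1samp(box_weights, popmean=155.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the mean weight differs from the printed value.
```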

Some analyses use continuous and discrete quantitative data at the same time. For instance, we could perform a regression analysis to see if the weight of Jujube boxes (continuous data) is correlated with the number of Jujubes inside (discrete data).
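
Here’s a quick sketch of that regression, again with simulated data standing in for real measurements.

```python
# Hedged sketch: regress box weight (continuous) on piece count (discrete).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
counts = rng.integers(48, 56, size=30)                 # Jujubes per box
weights = 2.9 * counts + rng.normal(0, 1.5, size=30)   # box weight in grams

result = linregress(counts, weights)
print(f"slope = {result.slope:.2f} g/piece, R^2 = {result.rvalue**2:.2f}")
```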

Qualitative Flavors: Binomial Data, Nominal Data, and Ordinal Data

When you classify or categorize something, you create qualitative, or attribute, data. There are three main kinds of qualitative data.

Binary (or binomial) data place things in one of two mutually exclusive categories: right/wrong, true/false, or accept/reject.

Occasionally, I’ll get a box of Jujubes that contains a couple of individual pieces that are either too hard or too dry. If I went through the box and classified each piece as “Good” or “Bad,” that would be binary data. I could use this kind of data to develop a statistical model to predict how frequently I can expect to get a bad Jujube.
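
One simple way to sketch that model is a binomial proportion with a confidence interval; the counts below are invented for illustration.

```python
# Hedged sketch: estimate the chance of a bad Jujube from good/bad counts.
from scipy.stats import binomtest

bad, total = 7, 300                  # 7 bad pieces out of 300 inspected
result = binomtest(bad, total)
ci = result.proportion_ci(confidence_level=0.95)
print(f"bad rate = {bad/total:.1%}, 95% CI = ({ci.low:.1%}, {ci.high:.1%})")
```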

When collecting unordered or nominal data, we assign individual items to named categories that do not have an implicit or natural value or rank. If I went through a box of Jujubes and recorded the color of each in my worksheet, that would be nominal data.

This kind of data can be used in many different ways—for instance, I could use chi-square analysis to see if there are statistically significant differences in the amounts of each color in a box.
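
A minimal version of that chi-square test, using illustrative color counts against equal expected proportions:

```python
# Hedged sketch: do the color counts differ from equal proportions?
from scipy.stats import chisquare

color_counts = [12, 9, 14, 8, 11]        # e.g., red, green, yellow, orange, black
stat, p_value = chisquare(color_counts)  # expected counts are equal by default
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
```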

We also can have ordered or ordinal data, in which items are assigned to categories that do have some kind of implicit or natural order, such as “Short, Medium, or Tall.” Another example is a survey question that asks us to rate an item on a 1 to 10 scale, with 10 being the best. This implies that 10 is better than 9, which is better than 8, and so on.

The use of ordinal data is a matter of some debate among statisticians. Everyone agrees it’s appropriate for creating bar charts, but beyond that, the answer to the question “What should I do with my ordinal data?” is “It depends.” Here’s a post from another blog that offers an excellent summary of the considerations involved.

Additional Resources about Data and Distributions

For more fun statistics you can do with candy, check out this article (PDF format): Statistical Concepts: What M&M’s Can Teach Us.

For a deeper exploration of the probability distributions that apply to different types of data, check out my colleague Jim Frost’s posts about understanding and using discrete distributions and how to identify the distribution of your data.

Improving Cash Flow and Cutting Costs at Bank Branch Offices

Every day, thousands of people withdraw extra cash for daily expenses. Each transaction may be small, but the total amount of cash dispensed over hundreds or thousands of daily transactions can be very high. Yet every bank branch has a fixed supply of cash, which must be set without knowing what each customer will need on a given day. This creates a challenge for financial entities. Customers expect their local bank office to have adequate cash on hand, so how can a bank confidently ensure each branch has enough funds to handle transactions without keeping too much in reserve?

A quality project team led by Jean Carlos Zamora and Francisco Aguilar tackled that problem at Grupo Mutual, a financial entity in Costa Rica.

When the project began, each of Grupo Mutual’s 55 branches kept additional cash in a vault to avoid having insufficient funds. But without a clear understanding of daily needs, some branches often ran out of cash anyway, while others had significant unused reserves.

When a branch ran short, it created high costs for the company and gave customers three undesirable options: receive the funds as an electronic transfer, wait 1–3 days for consignment, or travel to the main branch to withdraw their cash. Having the right amount of cash in each branch vault would reduce costs and maintain customer satisfaction.

Using Minitab Statistical Software and Lean Six Sigma methods, the team set out to determine the optimal amount of currency to store at each branch to avoid both a negative cash flow and idle funds. The team followed the five-phase DMAIC (Define, Measure, Analyze, Improve, and Control) method. In the Define phase, they set the goal: creating an efficient process that transferred cash from idle vaults to branches that needed it most.

In the Measure phase, the team analyzed two years’ worth of cash-flow data from the 55 branches. “Managing the databases and analyzing about 2,000 data points from each of the 55 branches was our biggest challenge,” says Jean-Carlos Zamora Mora, project leader and improvement specialist at Grupo Mutual. “Minitab played a very important part in addressing this issue. It reduced the analysis time by helping us identify where to focus our efforts to improve our process.”

The Analyze phase began with an analysis of variance (ANOVA) to explore how the branches’ cash flow varied by month. The team used Minitab to identify which months differed from one another, and grouped similar months together to streamline the analysis.
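
The one-way ANOVA idea is easy to sketch outside Minitab as well; the monthly figures below are simulated stand-ins, not Grupo Mutual’s data.

```python
# Hedged sketch: does mean daily cash flow differ by month?
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
jan = rng.normal(100, 15, size=60)   # daily cash flow, arbitrary units
feb = rng.normal(104, 15, size=60)
dec = rng.normal(130, 15, size=60)   # e.g., holiday-season demand

f_stat, p_value = f_oneway(jan, feb, dec)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one month differs; similar months can
# then be grouped to streamline the rest of the analysis.
```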

The team next used control charts to graph the data over time and assess whether or not the process was stable, in preparation for conducting capability analysis. To choose the right control chart and create comprehensive summaries of the results, the team used the Minitab Assistant.

[Figure: Grupo Mutual I-MR chart]
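
For readers curious about the mechanics, the individuals-chart limits on an I-MR chart come from the average moving range: the center line plus or minus 2.66 times the mean moving range (2.66 is the standard constant for a moving range of span 2). A minimal sketch with simulated data:

```python
# Hedged sketch of I-chart control limits from an I-MR chart.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(100, 5, size=50)      # e.g., daily vault balance (illustrative)

mr_bar = np.abs(np.diff(x)).mean()   # average moving range of consecutive points
center = x.mean()
ucl = center + 2.66 * mr_bar         # upper control limit
lcl = center - 2.66 * mr_bar         # lower control limit

signals = (x > ucl) | (x < lcl)
print(f"limits: [{lcl:.1f}, {ucl:.1f}], out-of-control points: {signals.sum()}")
```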

The team then performed a capability analysis of each group’s current cash flow to determine whether customer transactions matched the services provided, and establish the percentage of cash used at each branch.

[Figure: Grupo Mutual capability analysis]

The analysis revealed that, in total, the vaults contained more funds than the branches needed to operate effectively, but the way cash circulated caused some branches to overdraw their vaults while others stored cash that went unused.

“We found a positive cash balance at 95% of the branches,” says Zamora Mora. “The analysis showed the cash on hand to meet customer needs exceeded the requirements by over 200%, so we suddenly had lots of money to invest.”

The analysis gave the team the confidence to move forward with the Improve phase: implementing real-time control charts that enabled management to check each branch’s cash balance throughout the day. Managers could now quickly move cash from branches with excess cash to those needing additional funds, and make more strategic cash flow decisions.

The team found that being able to answer objections with data helped secure buy-in from skeptical stakeholders. “Throughout this project, we encountered questions and situations that could have jeopardized our team’s credibility and our likelihood of success,” recalls Zamora Mora. “But the accuracy and reliability of our data analysis with Minitab was overpowering.”

The changes made during the project increased cash usage by 40% and slashed remittance costs by 60%. The new process also cut insurance costs and shrank the risks associated with storing and transporting cash. Overall, the project increased revenue by $1.1 million.

To read a more detailed account of this project, click here.
