Knowledge Space


Robotic Process Automation – It is all about delivery!

Since 2018, the appeal of digitizing the workplace has grown dramatically.  According to Google Trends, interest in technologies like Robotic Process Automation (RPA), Machine Learning (ML) and Artificial Intelligence (AI) has grown at an average weekly rate of 23% since the beginning of 2018.  It stands to reason that interest has climbed so sharply: one minute of work from an RPA program translates into roughly 15 minutes of human activity, which means employees can be released from the “prison of the mundane” to work on higher-priority tasks.

Today it’s not just theory.  Companies are seeing real “hard dollar” cost savings by leveraging the technology.  According to Leslie Willcocks, professor of technology, work, and globalization at the London School of Economics’ Department of Management, “The major benefit we found in the 16 case studies we undertook is a return on investment that varies between 30 and as much as 200 percent in the first year!”  She has also found incredible benefits for employees, too.  “In every case we looked at, people welcomed the technology because they hated the tasks that the machines now do, and it relieved them of the rising pressure of work.”

The clear advantages of big data, artificial intelligence and machine learning are likely to change the nature of work across a wide range of industries and occupations.  According to a recent Oxford University study, The Future of Employment, 47 percent of total US employment will likely be automated or digitized over this decade.

But not everything is coming up smelling like roses.  Despite widespread global interest in and adoption of RPA, a recently conducted study by Ernst & Young has revealed that 30% to 50% of initial RPA projects fail!  While critics blame the underlying technology, this is seldom the case.  Usually, the root cause lies in inattention to risk and internal control considerations in the design and deployment of the bot technology.  And that is what this article is about: how to mitigate the risks involved in RPA deployment and generate greater yields of bot-deployment success.

It’s All About Delivery

At TPMG Global© we added digital technology, like RPA, to our Lean Management and Six Sigma service offerings in 2018.  We found the technology to be a natural extension of our value proposition of delivering better, faster, and less costly value streams for our clients.  For those unfamiliar: lean management is all about clinically analyzing internal processes to find and eliminate waste.  Six Sigma, on the other hand, is all about defect reduction and standardizing the ruthless pursuit of perfection.  The natural outcome of both methods is improved productivity (output per unit of input) and lower cost.

Without the technology, a well-deployed lean six sigma system helps companies improve their operating margins by 25 to 30%.  With the technology, companies also experience tremendous speed and consistently shorter cycle times.  Shorter cycle times and fewer defects in core value streams help companies eliminate order-to-cash backlogs, deliver rapidly to their customers and increase their recognized revenue per quarter by more than 47%.

As mentioned above, this article is about how to mitigate the risks involved in RPA deployment and generate greater yields of bot-deployment success.  Below, we outline the four simple steps our obsessive and compulsive lean six sigma black belts use in the deployment of Robotic Process Automation.

Step 1 – Be Clinical

Our lean six sigma black belts think of themselves as doctors and client organizations as patients.  They view the internal operations of a company without bias or emotion, like the internal systems of the human body – inextricably linked and interdependent.  Before thinking of deploying RPA, they obsessively and compulsively analyze internal value streams from end to end.  They examine each step, assess data flows, evaluate the roles of people and technology, and reconcile everything to current methods and procedures.  Like super sleuths, they not only search for waste and defects but also seek out the agents responsible for creating both.  This diagnosis serves as the basis for corrective action and for mitigating risks such as data-security and compliance issues.

Step 2 – Treat the Patient

Once our black belts have examined the patient, they create and standardize future-state solutions that cure the patient of waste and defects.  Only then do they pinpoint and examine the requirements of the job functions that are candidates for automation.

Step 3 – Test the Technology for Repeatability and Reproducibility

No one knows better than a lean six sigma black belt that achieving perfection is impossible.  Despite accepting this reality, TPMG black belts take confidence in the fact that only by pursuing perfection can they catch excellence.  We take this fanatical attitude with us into the development and testing of bot technology.  TPMG uses a methodology called Design for Six Sigma (DFSS) to ensure functional requirements are translated into technical requirements, which are then programmed and rigorously tested.  As the programming goes through user acceptance testing (UAT), TPMG black belts ruthlessly take developers, employees, and testers through cycles of improvement to maximize the RPA bot’s ability to repeat and reproduce the defect-free work for which it is designed.  All jobs have their exceptions; these routine cycles of repeatability and reproducibility minimize the impact of the risks described above.

Step 4 – Hypercare

Once the bots are developed and ruthlessly tested, TPMG deploys them into production and puts them through a process called “hypercare.”  Hypercare is an obsessive form of bot operations monitoring in which bot functions are watched for unintended consequences.

Is your organization interested in learning more about Robotic Process Automation?

In which one of these areas are you personally convinced there is room for improvement in your company: scaling for growth, productivity improvement, cost effectiveness, or cycle time reduction? If you are curious, TPMG Process Automation can not only help you answer this question but can also shepherd you through a no-risk, no-cost discovery process. We can partner with you to identify a job function and set up a complimentary proof-of-concept RPA bot. As an outcome of the discovery process, you can: (1) benefit from a free cost/benefit analysis, (2) demonstrate the value of RPA for your operation, and (3) discover if RPA is a good fit for your organization.

Contact TPMG Process Automation

ABOUT THE AUTHOR

Gerald Taylor is TPMG Global© Managing Director and is a Certified Lean Six Sigma Master Black Belt


Edward Jones Adds Robotic Process Automation with Lean Six Sigma

By Brooke Holmes

Automation 1.0

Robotic process automation (RPA), commonly referred to as “bots,” is a type of software that can mimic human interactions across multiple systems to bridge gaps in processes that previously had to be handled manually. RPA software applications can be integrated with other advanced technologies such as machine learning or artificial intelligence. But at the most basic level, they act like super-macros following a detailed script to complete standardized tasks that do not require the application of judgment.
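
At that level, a bot run can be pictured as a fixed script of actions replayed against target systems. The sketch below is purely illustrative – the action names and targets are invented for this example, not any vendor's API:

```python
# Purely illustrative: an RPA bot at the "super-macro" level is a fixed
# script of actions replayed step by step, with no judgment applied.
SCRIPT = [
    ("open_application", "order_system"),
    ("read_field", "customer_id"),
    ("paste_value", "billing_system.customer_id"),
    ("click_button", "submit"),
]

def run_bot(script):
    """Replay each scripted step in order; a real bot would drive the UI here."""
    log = []
    for action, target in script:
        log.append(f"{action}: {target}")
    return log

for entry in run_bot(SCRIPT):
    print(entry)
```

The point of the sketch is that the bot never decides anything: it executes the same standardized steps every run, which is exactly why judgment-free tasks are the best candidates.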

Why Combine RPA and Lean Six Sigma?

Replacing manual work with bots removes the possibility of human error, reduces rework and quality checks, and increases accuracy. Bots can work much faster than humans and at any hour of the day, so long as the underlying systems are operational. The potential to reduce overhead costs and shorten process cycle times is vast. Bots also provide enhanced controls for risk avoidance.

Bots can serve as a foot in the door to gain traction for a quality program. Senior level executives get excited by the potential of this relatively affordable technology. By incorporating a thoughtful Lean Six Sigma (LSS) process review into a company’s bot deployment strategy, quality programs will gain additional visibility and leadership support.

Effective Bot Deployment at Edward Jones

Edward Jones is a financial services firm serving more than 7 million clients in the US and Canada. Their operations division began exploring RPA in 2017 and subsequently implemented their first bot into production in November 2018. Since then, they’ve deployed 17 additional bots, yielding 15 full-time employees in capacity savings, which in turn generated more than a million dollars in cost avoidance. While still at an early stage in this journey, the operations division has developed a structured approach using LSS tools to assess process readiness for automation, minimize or remove non-value-added work steps prior to development (abandonment), and redesign the process to fully leverage the benefits of RPA.

LSS Process Review

Using a questionnaire to begin their intake process, business areas submit critical data regarding process volumes, capacity needs, system utilization and risk level. This data feeds into a prioritization matrix that allows them to decide where to focus energy and time. Once a process is identified for RPA, a member of the quality team engages the business area for a LSS process review using familiar tools such as a project charter, stakeholder analysis, SIPOC (suppliers, input, process, outputs, customers) and process maps.
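
A prioritization matrix of this kind boils down to a weighted-score calculation over the intake data. The sketch below is a minimal illustration; the candidate names, rating scales and weights are assumptions for the example, not Edward Jones' actual criteria:

```python
# Illustrative weighted prioritization matrix for RPA candidates.
# Candidates are rated 1-10 on each intake dimension; weights are assumed.
CANDIDATES = [
    {"name": "QCD disbursements", "volume": 9, "capacity_need": 8,
     "system_stability": 7, "risk_level": 3},
    {"name": "Address updates", "volume": 4, "capacity_need": 3,
     "system_stability": 9, "risk_level": 2},
]

# Higher volume, capacity need, and system stability raise priority;
# higher risk lowers it (hence the negative weight).
WEIGHTS = {"volume": 0.35, "capacity_need": 0.30,
           "system_stability": 0.20, "risk_level": -0.15}

def priority_score(candidate):
    """Weighted sum of the candidate's ratings."""
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

ranked = sorted(CANDIDATES, key=priority_score, reverse=True)
for c in ranked:
    print(f"{c['name']}: {priority_score(c):.2f}")
```

The ranked output tells the team where to focus energy and time first; the weights would be tuned to the organization's own intake questionnaire.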

After thoroughly understanding the process’s current state, the practitioner and corresponding business area redesign the process for robotics. Next, they complete an FMEA (failure mode and effects analysis) and business continuity plan to ensure process risk is adequately controlled. After this LSS process review has concluded, a broad group of experts – including robotics developers, internal audit staff, risk leaders and senior leadership from all impacted business areas – is brought together to jointly review the robotics proposal and agree on a go/no-go decision.

A critical component of this process review is thorough documentation of every step along the way. Using an Excel playbook to organize all the tools in one place enables a smooth transition as the effort moves from the quality team to the robotics development team. Then, this comprehensive documentation is retained by the business area for ongoing maintenance. Specific elements of this documentation include a systems inventory, a record of all sign-off dates and approvals and a business continuity plan for disaster recovery. Having complete documentation enables the business areas to take a proactive approach when faced with upcoming system changes or unexpected work disruptions. It also equips business areas with any data points required for routine internal or external audits.

Deployment Pitfalls to Avoid

There are some specific areas of concern when it comes to RPA.

  • Communication: Provide clarity to business areas about what RPA can and cannot do, and which processes fit best with this technology. Without an accurate understanding of the capabilities of RPA, there will be an influx of unsuitable requests for the new technology and, as a result, many disappointed business areas and wasted effort spent putting together business cases. At Edward Jones, the most common misunderstanding concerned the lack of reading ability in the specific RPA vendor being used. While the bots can recognize characters in static fields, they are not able to interpret characters in an unstructured context. This ruled out many initial RPA requests. Additionally, while comparing RPA to macros was initially an effective way to explain the technology to business leaders who were not familiar with technology development, the comparison created an unfortunate misconception that coding and implementing bots was as fast and easy as creating a macro. Business areas were not expecting development to take four to six months for what they perceived to be a simple request.
  • Change Management: Incorporate thoughtful change management throughout the deployment at all levels of the organization. Leveraging bots will take away manual tasks being completed by employees. Some employees may welcome the automation of monotonous tasks, but others may view this technology as a threat to job security. Supervisors will need to adapt and grow their skills to include oversight of the RPA technology. Strong people leaders often don’t have the same level of competency in the technical space, and they will need to quickly increase knowledge and skill to effectively manage their automated processes. Senior/C-suite leaders will need to consider the inherent risks associated with using RPA, the infrastructure and skills needed to support an RPA program, and how to obtain the needed resources and talent.
  • Human Resources: Bots may create job redundancy, creating the potential for job loss or reassignment. Engage human resources early to navigate these situations.
  • Governance: Balance senior leader involvement so they feel comfortable with automation without extra levels of required approvals that slow the development process down.
  • Don’t Force a Problem to Fit the Solution: RPA is not the right solution for every bad process. In the early phase of bot deployment, it is easy to let excitement about the new technology lead to poor choices around when to apply RPA. This leads to disappointing results that could undermine the entire bot deployment. Identify clear criteria regarding when bots are an appropriate solution and use a disciplined approach to evaluate each new process improvement opportunity. Consider non-bot solutions before a final decision is reached.
  • Vendor Approvals: Any third-party vendors must permit bots to interface with their systems. Review vendor contracts or have new contracts signed to ensure bots are legally allowed to interact with vendor systems and web sites before beginning development.
  • Resource Constraints: Set clear expectations with business areas about the work involved and resources needed to design and implement an RPA solution. The quality team and technical developers do not have the knowledge required about the specific processing steps to complete this work without a subject-matter expert from the business area being heavily involved throughout the project life cycle.
  • Results: Heavy focus on capacity savings only tells part of the story. Identify other meaningful methods of communicating value from RPA implementation, such as risk reduction, faster cycle time, improved client experience or increased accuracy.

Case Study: Automating Retirement Disbursements to Charities

An example of an RPA implementation at Edward Jones involves the process of receiving, validating and executing on client requests to send monetary donations from qualified retirement accounts to charitable organizations. Prior to implementing the bot, the Qualified Charitable Distribution (QCD) process required 11 hours of manpower each day to get through the volume of donations – and the number of requests had been doubling each month.

The process had five to 10 errors monthly due to the manual data entry required, which in turn took one to three hours of leader or senior processor time to resolve. A bot was designed and implemented that would validate the original request (quality check) and then enter the appropriate data into a computer screen to issue the check to the selected charity.

Stakeholder Analysis and SIPOC

After the project charter was created and agreed upon by the project Champion and project team, a stakeholder analysis was conducted to identify any additional individuals or business areas that were upstream or downstream of the process or might be affected by a change to the process. These parties were consulted or communicated with throughout the effort to ensure process impacts were understood and considered as the automation opportunity was identified and designed.

Next, a SIPOC matrix was created to understand all the process inputs, including systems, data files and end users. Together, the stakeholder analysis and SIPOC are essential in ensuring all critical components of the process upstream and downstream are identified early in the automation effort so no processing gaps are created during RPA development.

SIPOC Analysis: SIPOC for the QCD Automation Project

Supplier | Inputs | Process | Outputs | Customer
Client, branch team | Client instructions, intranet form message | Branch team sends form message with client instructions for QCD | Unexecuted client request in the retirement department queue | Retirement support team
Retirement support team | Form message, client account information, IRS rules, client request | Retirement associate reviews client request for QCD to confirm eligibility | Validated client request | Retirement support team
Retirement support team | Validated client request | Issue check | Executed request, issued check | Client, branch team
Retirement support team | Client request, issued check | Close client request on system | Completed client request for QCD | Client, branch team

Current- and Future-State Process Maps

The next step was to create detailed current- and future-state process maps. The current-state process map must include enough detail to highlight all the data sources required by the process, and where that data must be entered to move the process forward. The future-state map must incorporate all of those critical points, while also accounting for the limitations of RPA technology (inability to “read”) and the advantages of RPA (directly ingesting data files, speed and accuracy).

For the QCD process, the client verification step needed to be handled differently for RPA than in the original process. Previously, an employee was comparing client names between the original client request and the account registration referenced in the request to ensure a match. Names can be difficult for RPA to match because the technology doesn’t understand common nicknames that might be used interchangeably with legal names. For example, “Bill” and “William” would flag as a mismatch by the robotic technology, while a human processor would recognize those as referring to the same individual. To avoid large numbers of false positives from the bot flagging mismatches caused by nicknames, an alternative form of identification matching was used, in this case a social security number.
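
The switch from name matching to exact-identifier matching can be shown in a short sketch. The nickname table and helper names below are hypothetical, purely to illustrate why the bot's literal comparison fails where a human succeeds:

```python
# Hypothetical sketch: why literal name matching misfires for a bot,
# and why an exact identifier (e.g., SSN) is a safer match key.
NICKNAMES = {"bill": "william", "bob": "robert", "peggy": "margaret"}

def names_match_naively(request_name, registration_name):
    # What a bot without nickname knowledge does: literal comparison.
    return request_name.strip().lower() == registration_name.strip().lower()

def canonical(name):
    # Mapping nicknames to legal names is one mitigation, but the
    # table can never be complete.
    n = name.strip().lower()
    return NICKNAMES.get(n, n)

def ids_match(request_ssn, registration_ssn):
    # Exact identifier comparison sidesteps the nickname problem entirely.
    return request_ssn == registration_ssn

# "Bill" vs "William": a human sees a match; a literal comparison does not.
assert not names_match_naively("Bill", "William")
assert canonical("Bill") == canonical("William")
assert ids_match("123-45-6789", "123-45-6789")
```

This is why the redesigned process matched on the social security number rather than the client name: an exact key removes the false positives without requiring the bot to "understand" anything.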

In a typical Six Sigma effort, the goal is a more streamlined future-state process map with fewer processing steps and fewer decision points. One key difference in an RPA effort is that the future-state process map may contain more, not fewer, steps and decision points. This is normal and shows that the automation capability is being fully utilized to provide a higher level of accuracy. Since the bot processes much faster than a human can, these additional quality checks do not add to the overall process cycle time. Each decision point with RPA represents a quality assurance checkpoint, allowing the final output to achieve higher accuracy than the original process.

Figure 1: QCD Process – Before RPA

Figure 2: QCD Process – After RPA

Risk Assessment

Once the future automated state has been identified, conduct a risk assessment to understand the risks associated with the current process and how the process risks may be affected by RPA. The largest risk associated with the QCD process was the manual nature of the process and likelihood of human error. This risk was eliminated by using bots.

However, automation adds different types of risks, including system failures and coding errors. By identifying potential risks and using control reports to quickly identify and remediate issues, these risks can be effectively managed.

Business Continuity Plan

The final element of the process review is a business continuity plan, specifically focused on failure of RPA to successfully perform the programmed tasks. Consideration should be given to a failure of the bot itself but also any underlying systems that the bot needs to interact with to obtain data or execute requests. Planning should include how to perform the work if the automation is not operational for a particular timespan as well as how to identify and resolve errors made by the bot if the programming becomes corrupted.

Through this planning exercise, a critical aspect of the QCD process was identified that may have led to future bot failure had it not been remedied. Volumes for this highly seasonal process rise drastically at year end, and a single bot was unlikely to keep up with the work at this peak. Programmers were able to proactively solve this issue by diverting process volume onto three separate bots to stay on top of the surge of work during these high-volume time periods.
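
Splitting seasonal volume across several bots is essentially a work-partitioning problem. A round-robin split, sketched below with invented request names, keeps each bot's queue within one request of the others:

```python
# Illustrative round-robin partitioning of a daily request queue across bots.
def split_across_bots(requests, num_bots):
    """Assign request i to bot i % num_bots so peak volume is shared evenly."""
    queues = [[] for _ in range(num_bots)]
    for i, request in enumerate(requests):
        queues[i % num_bots].append(request)
    return queues

peak_day = [f"QCD-{n:04d}" for n in range(10)]  # hypothetical request IDs
queues = split_across_bots(peak_day, 3)
for bot_id, queue in enumerate(queues, start=1):
    print(f"bot {bot_id}: {len(queue)} requests")
```

How Edward Jones actually routed volume between its three bots is not described; the sketch simply shows one even-split strategy that achieves the goal stated above.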

Results

The QCD bot was implemented in September 2019 and immediately realized 11 hours of daily capacity savings with no errors. The total project cycle time – from the initial continuous improvement analysis through bot design, development, testing and implementation – was seven months. Since implementing RPA, 100 percent of the process has been automated with zero errors. Process risk was reduced by one point on a 10-point scale by eliminating human error from manual work steps.

During routine follow-up six months after bot implementation, the project team learned that the benefits received from the automation had grown significantly. The volume of client requests for charitable distributions had increased rapidly, so the bot was now performing work that would have taken 34 hours – or five employees – to complete each day.

Conclusion

Don’t shortcut the methodology when leveraging RPA and other new technologies. Technology only masks a bad process, so clean up the underlying work steps first to maximize the benefit of RPA.

What Are the 3 Biggest Challenges to Creating and Sustaining a Culture of Continuous Improvement and Operational Excellence?

Evan McLaughlin 04 February, 2020
Opex Challenges

Robotics Process Excellence

We have combined Lean Management, Process Re-engineering and Robotics Process Automation (RPA) into a powerful approach to eliminate waste, improve productivity, and reduce the cost of doing business.    Robotics Process Excellence (RPEx) services help organizations:

  • Ensure process performance exceeds business goals.
  • Measurably increase productivity by more than 25%.
  • Enhance the quality of customer care and ease of doing business.
  • Streamline processes and measurably reduce the cost of operating.
  • Eliminate slow, tedious, time consuming, wasteful tasks with Robotic Process Automation (RPA).

Lean management is a proven method for eliminating waste and the cost that comes with it.  RPA is an inexpensive software-based technology.  It sits on top of other applications, requires no special hardware, and works well in almost any IT environment.  That’s not all: you also get the highest level of enterprise-grade security.

 


Our Approach

Through a simple seven-step process, TPMG delivers a low-cost solution for process improvement along with a simple and inexpensive software-based technology. It sits on top of other applications, requires no special hardware, and works well in almost any IT environment.

RPA COE Process 4.0

 


Cafeteria of Process Excellence Consulting  Services

We view our process excellence services as the backbone of our business improvement practice.  Our consultants provide first-hand knowledge of best practices and a deep understanding of high-performance organizations.  We deliver top-quality services that guarantee your organization becomes more productive, cost-effective and customer-driven.  Those services include:

  • Lean Management
  • Activity Based Costing
  • Non-Value Added Analysis
  • Business Process Re-engineering
  • Operational Assessment and Redesign
  • Value Stream Mapping and Improvement
  • Rapid Improvement Events (Kaizen)
  • Business Transformation
  • KPIs and Metrics
  • Robotic Process Automation (RPA)

 


 

The Process Guy

Streamlining Processes

I am the Process Guy.  For more than 15 years, I have used best practices like lean, six sigma, and process re-engineering to streamline processes.  To date, I have saved regional, national and global companies an estimated $100 million.  The industries I have consulted in include financial services, healthcare, technology, supply chain, energy & utilities and telecom.

Cost Savings

In a recent consulting engagement, I used a combination of organizational re-design, process re-engineering and six sigma to generate an FTE savings of 57%.  This happened not only through the use of traditional streamlining methods, but also through a new technology available to the process professional – Robotics Process Automation (RPA).  Robots used in RPA interact with applications to perform many mundane tasks such as re-keying data, logging into applications, moving files and folders, and copying and pasting.  It is a simple and inexpensive software-based technology that sits on top of other applications.  It requires no special hardware and works well in almost any IT environment.  That’s not all: you also get the highest level of enterprise-grade security.

The Pitch

This may sound like an advertisement, and to an extent, it is.  But it is more than an advertisement: it is a standing post for companies seeking a consultant who guarantees measurable improvements.

I have combined Lean Management, Process Re-engineering and Robotics Process Automation (RPA) into a powerful approach to eliminate waste, improve productivity, and reduce the cost of doing business.    The services I provide are guaranteed to ensure:

  • Measurable improvements in productivity by more than 25%.
  • Streamlined processes and measurable cost reductions of more than 27%.
  • A significant reduction or elimination of slow, tedious, time-consuming, and wasteful tasks.

If you are a Chief Financial Officer, VP of Operations, General Manager or merely a responsible leader who wants to improve your company’s return on capital invested – contact me today!

Like I said, my services are guaranteed.  Ask me about that!

You can reach me at:  The Process Guy Email

I look forward to hearing from you!

 

 

Establishing and Sustaining a High Performance Culture: Article #2, The Mechanics of How High Performance Cultures Work

In our last article, Establishing and Sustaining a High Performance Culture, we defined what a high performance culture is and described its four key organizational functions. This article is intended to be more pragmatic in that it will outline the mechanics of cultivating a winning culture.

Before we get into the mechanics, let’s talk about the real benefits senior leaders realize by focusing on culture. We will look at these benefits through the lens of a real-world example – ANZ Bank. The Australia and New Zealand Banking Group Limited, commonly called ANZ, is the third largest bank by market capitalization in Australia and the largest bank in New Zealand. In 2008, for the second year in a row, ANZ was named the most sustainable bank globally by the Dow Jones Sustainability Index. The bank attributes its success over the last decade to its focus on culture. In 2003, it implemented an initiative described as a “unique plan of eschewing traditional growth strategies and recasting the culture of the bank to lift efficiency and earnings.” The results were significant:

  • In two years, the share of employees having the sense that ANZ “lived its values” went from 20 to 80 percent
  • The share seeing “productivity in meetings” went from 61 to 91 percent
  • Revenue per employee increased 89 percent
  • The bank overtook its peers in total returns to shareholders and customer satisfaction

Ten years later, ANZ has sustained its results. Its profit after tax has grown at a cumulative average growth rate of 15 percent, putting it well ahead of its industry. It announced a statutory profit after tax for the half year ended 31 March 2018 of $3.32 billion, up 14 percent, and a cash profit on a continuing basis of $3.49 billion, up 4 percent on the prior comparable period.

ANZ Chief Executive Officer Shayne Elliott attributes most, if not all, of the bank’s success to its corporate culture. In the 2018 half-year results he reported, “We are now benefiting from a more focused organization with sector-leading capital and improving returns. The progress of our multi-year transformation demonstrates we have the right team in place to manage difficult conditions and deliver for our customers and our shareholders.”

Organizational Culture and Leadership

Culture and Leadership

The model illustrated above is the traditional way of viewing how organizational cultures influence the day-to-day environment in which managers and senior leaders operate. It depicts how the collective values, beliefs, norms and customs of the organization determine how decisions are made. Decisions lead to actions. Actions lead to results. The results reinforce the values, beliefs, norms and customs of the culture. In this article, I am going to propose a counterargument regarding organizational cultures that departs from this model. Not only will it challenge the efficacy of the model depicted above, but it may also challenge any predisposition about business cultures you have been educated to adopt. However, after considering its potential value, you may find the proposition at least interesting enough to ponder or test.

We all know how it goes: at the beginning of the fiscal year, senior leaders convene off-site meetings. Along with producing a corporate strategy and a complement of initiatives, senior leaders often create a set of corporate beliefs and values. They invest a significant amount of thought and time to define and communicate the meaning of these value statements. The hope is that the values will become a code of conduct by which the workforce can operate whenever they are faced with a unique circumstance or an absence of a defined policy. If the workforce operates by the values, they can consider their conduct to be in the company’s interest, thereby making them good corporate citizens. This, my friends and colleagues, is a fallacy! Moreover, the posters, speeches and business workshops built around this model not only create an illusion of corporate culture, but also explain why the expression “Culture eats strategy for lunch” is valid.

Organizational Culture: Which comes first the chicken or the egg?

In reality, organizational culture resides in how leaders and managers make decisions and take actions. Whether their decision models are used to address corporate politics, problem-solving or improving performance, organizational culture resides in the nature of the decisions leadership makes and the actions they take. For example, if leadership values a culture of collaboration but decisions are made (1) without seeking the pooled knowledge and creativity of a team, or (2) by kicking them upstairs to a select group of senior leaders, then consensus and collaboration are killed in their infancy. If, on the other hand, senior leaders value managing-by-fact and use a series of metrics and scorecards to analyze shifts, trends and changes in key performance indicators, then a culture of data-based decision making can thrive.

In conclusion: proposed beliefs, values, norms and customs don’t feed decisions and actions. Quite the contrary. The way we make decisions, the decisions we make, the actions we take and the results we achieve produce and sustain our business cultures.

When TPMG first began training and coaching senior leaders on high performance cultures ten years ago, this theory was met with some skepticism. But through exhaustive experimentation and analysis, we have found the theory to hold true. We welcome any and all comments.

What’s Next?

Next in this series is The Role of Senior Leadership in Establishing a High Performance Culture.

If you would like the series delivered directly to you, feel free to contact us by clicking here!

Gerald Taylor is the Managing Director of TPMG’s Strategy and Operations Advisory Practice. His expertise includes coaching and advising senior leaders on strategy and performance improvement.

Clayton Christensen Lecture: Disruptive Innovation, Saïd Business School, University of Oxford

In the first of his lectures for Saïd Business School, Clayton Christensen explains his theory of disruption, drawing on examples of innovations occurring in the steel industry and from leading companies such as Toyota, Sony, Walmart and Indian refrigerator manufacturer, Godrej. Christensen explores how the theory can explain why the economies of America, England and Japan have stagnated. He also uses the theory to analyse how economies in Asia have achieved prosperity and to examine why countries such as Mexico are not experiencing economic growth.

Developing Key Performance Indicators

Key performance indicators (KPIs) are critical to ensuring a project team has the performance data it needs to sustain improvements. With KPIs, a team can evaluate the success of a project against its established goals.

Types of Metrics

There are two types of metrics to consider when selecting KPIs for a project: outcome metrics and process metrics.

Outcome metrics provide insight into the output, or end result, of a process. Outcome metrics typically have an associated data lag, because time must pass before the outcome of a process is known. The primary outcome metric for a project is typically identified by project teams early in their project work. For most projects, this metric can be found by answering the question, “What are you trying to accomplish?”

Process metrics provide feedback on the performance of elements of the process as it happens. It is common for process metrics to focus on the identified drivers of process performance. Process metrics can provide a preview of process performance for project teams and allow them to work proactively to address performance concerns.

Example of Selected KPIs

Consider an example of KPIs for a healthcare-focused improvement project:

  • Project: optimizing hospital patient length of stay
  • Outcome metric: hospital patient length of stay (days)
  • Process metrics: discharge time of day (hh:mm); time discharge orders signed (hh:mm); time patient education completed (hh:mm); discussion of patient at daily discharge huddle (percentage of patients)

In the example above, the project has one primary outcome metric and four process metrics that compose the KPIs the team is monitoring. Well-crafted improvement project KPIs will include both outcome metrics and process metrics. Having a mix of both provides the balance of information the team needs to successfully monitor performance and progress toward its goals.

Teams should develop no more than three to six KPIs for a project. Moving beyond six metrics can dilute the effects of the data and make it more challenging to effectively communicate the progress of a project.
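These guidelines can be expressed as a simple sanity check. The sketch below is illustrative only: the dictionary structure and validation rules are assumptions for this example, not a standard tool, though the metric names come from the length-of-stay project above.

```python
# Illustrative sketch: represent a project's KPI set and check the
# guidelines above (a mix of outcome and process metrics, 3-6 total).
# The data structure and rules here are assumptions for illustration.

def check_kpis(kpis):
    """Return a list of warnings for a KPI set.

    `kpis` maps metric name -> "outcome" or "process".
    """
    warnings = []
    total = len(kpis)
    if not 3 <= total <= 6:
        warnings.append(f"{total} KPIs; aim for three to six")
    kinds = set(kpis.values())
    if "outcome" not in kinds:
        warnings.append("no outcome metric; add one that answers "
                        "'What are you trying to accomplish?'")
    if "process" not in kinds:
        warnings.append("no process metric; add drivers of performance")
    return warnings

# KPIs from the length-of-stay example above
los_kpis = {
    "hospital patient length of stay (days)": "outcome",
    "discharge time of day (hh:mm)": "process",
    "time discharge orders signed (hh:mm)": "process",
    "time patient education completed (hh:mm)": "process",
    "patients discussed at daily discharge huddle (%)": "process",
}

print(check_kpis(los_kpis))  # an empty list means the set passes both checks
```

A check like this is only a starting point for the coaching conversation; whether each metric is meaningful, measurable and manageable still requires the team's judgment.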

Questions to Help Select KPIs

Common questions coaches can use with teams to generate conversation about potential KPIs include:

  • What does success look like?
  • How will it be known if performance is trending away from goals?
  • What data would the stakeholders and sponsors be most interested in?
  • What data is available to the team?

The 3Ms: Meaningful, Measurable and Manageable

Coaches should keep the three Ms of crafting KPIs in mind when working with teams.

  1. Meaningful: KPIs should be meaningful to project stakeholders. Developing metrics that those closest to the project team find useful without getting feedback from a broader group of stakeholders can be a recipe for stakeholder disengagement. The KPIs a team selects need to resonate with the stakeholders closest to the process and the problem. The team will know it is on the right track when it has KPIs that stakeholders want to know the current status of and are discussing progress toward the project goals with their colleagues. Meaningful KPIs make excellent additions to departmental data walls for use in daily huddles and to support leader rounding, where leaders get out on the floor and speak directly with employees.
  2. Measurable: KPIs should be easily measurable. Sometimes teams can get stuck trying to identify the “perfect” metric for measuring progress toward their project goals. In this pursuit, the team may lose sight of metric options that are already available or automatically reported. Sustainable KPIs should be relatively easy to obtain updates for. If a metric requires time-consuming auditing, or is not readily available to the project team, groups should think twice before selecting it as a KPI. Data that is challenging or time-consuming to obtain is not likely to be regularly updated and reported to stakeholders. Providing timely and accurate updates on KPI performance is an excellent way to support the sustainability of improvements and spark conversations about additional opportunities to enhance processes and reach the team’s goals.
  3. Manageable: KPIs should include metrics that are within the sphere of management control and influence for the project team. If the team selects metrics that measure process elements the team has no control over, then it will not be measuring what matters. Teams should select KPIs that are within the scope of their project, are reflective of a successful outcome and are performance drivers for their work. Sometimes nice-to-have or might-be-interesting metrics can sneak onto the KPI list for project teams. These additional metrics are not needed; the team should focus on the metrics that will provide accurate feedback on its performance.

Summary

Remember that successful KPIs:

  • Include a balance of outcome metrics and process metrics.
  • Total three to six metrics.
  • Are developed with the 3Ms in mind.

Crafting KPIs is an important step to guide teams through a continuous improvement process. A coach needs to keep the team focused on what success looks like and how best to measure it.

5 Tips to Make Process Improvements Stick!

For a process improvement practitioner, finishing the Control Phase of the DMAIC process is your ticket to move on to your next project. You’ve done an excellent job leading the project team because they identified root causes, developed and implemented solutions to resolve those root causes, put a control plan in place and transitioned the process back to the Process Owner. Soon, however, you learn that the process has reverted to its original state.

I’ve often heard project leaders lament, “We worked so hard to identify and implement these solutions—why won’t they stick?”

So let’s talk about fishing for a moment, because it offers some great lessons for making process change. Remember the quote, “Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime”? Seems simple enough, right? But what is involved, and how long does it take, to teach people to fish so they can eat for a lifetime?

The same is true for process improvements. Seems simple enough to make a change and expect it to stick. So why is it so hard?


The fishing analogy hits home with me. I love to go fishing and have been an avid angler since I was young. And though it’s been a while since I taught my kids how to fish, I do remember it was a complicated process. There is a lot to learn about fishing—such as what type of equipment to use, rigging the rod, baiting the hook, deciding where to fish, and learning how to cast the line.

One of the most important fishing tips I can offer a beginner is that it’s better to go fishing five times in a few weeks as opposed to five times in an entire year. Skills improve quickly with a focused effort and frequent feedback. People who spread those introductory fishing experiences out over a year wind up always starting over, and that can be frustrating. While there are people who are naturally good at fishing and catch on (pun intended) right away, they are rare. My kids needed repeated demonstrations and lots of practice, feedback and positive reinforcement before they were able to fish successfully. Once they started catching fish, their enthusiasm for fishing went through the roof!

Tips for Making Process Improvements Stick

Working with teams to implement process change is similar. Most workers require repeated demonstrations, lots of practice, written instructions, feedback and positive reinforcement before the new process changes take hold.

Here are several tips you can use to help team members be successful and implement process change more quickly. Take the time to design your solution implementation strategy and control plan with these tips in mind. Also, Companion by Minitab® contains several forms that can make implementing these tips easy.

Tip #1: Pilot the Solution in the Field

A pilot is a test of a proposed solution, usually performed on a small scale. It’s like learning to fish from the shore before you go out on a boat in the ocean with a 4-foot swell. A pilot is used to evaluate both the solution and its implementation, so the full-scale rollout is more effective. It provides data about expected results and exposes issues with the implementation plan. The pilot should test whether the process meets both your specifications and the customer’s expectations. First impressions can make or break your process improvement solution, so test the solution with a small group to work out any kinks. A smooth implementation will help workers accept the solution at the formal rollout. Use a form like the Pilot Scale-Up Form (Figure 1) to capture issues that need resolution prior to full implementation.

Pilot
Figure 1. Pilot Scale-Up Form

Tip #2: Implement Standard Work

Standard work is one of the most powerful but least used lean tools to maintain improved process performance. By documenting the current best practice, standardized work forms the baseline for further continuous improvement. As the standard is improved, the new standard becomes the baseline for further improvements, and so on.

Use a Standard Work Combination Chart (Figure 2) to show the manual, machine, and walking time associated with each work element. The output graphically displays the cumulative time as manual (operator controlled) time, machine time, and walk time. Looking at the combined data helps to identify the waste of excess motion and the waste of waiting.

Standard Work
Figure 2. Standard Work Combination Chart
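The arithmetic behind such a chart can be sketched in a few lines. The work elements and times below are invented for illustration; a real chart would be built from observed cycle times on the floor.

```python
# Illustrative sketch of the arithmetic behind a Standard Work
# Combination Chart: per-element manual, machine, and walk times,
# plus cumulative operator time. Element names and times are invented.

# (element, manual_s, machine_s, walk_s)
elements = [
    ("load part",     10,  0, 0),
    ("start machine",  2, 45, 0),
    ("walk to bench",  0,  0, 6),
    ("deburr part",   20,  0, 0),
    ("walk back",      0,  0, 6),
]

cumulative = 0
for name, manual, machine, walk in elements:
    # Operator time advances by manual + walk; machine time runs
    # unattended, so it is tracked separately to expose waiting.
    cumulative += manual + walk
    print(f"{name:15s} manual={manual:3d}s machine={machine:3d}s "
          f"walk={walk:3d}s operator cumulative={cumulative:3d}s")

total_manual = sum(e[1] for e in elements)
total_walk = sum(e[3] for e in elements)
print(f"operator-controlled time: {total_manual}s, walking: {total_walk}s")
```

Laying the numbers out this way makes the two wastes the chart targets visible: walking time is excess motion, and any gap between machine time and operator time is waiting.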

Tip #3: Update the Procedures

A Standard Operating Procedure (SOP) is a set of instructions detailing the tasks or activities that need to take place each time an action is performed. Following the procedure ensures the task is done the same way each time. The SOP details activities so that a person new to the position will perform the task the same way as someone who has been on the job longer.

When a process has changed, don’t just tell someone of the change: legitimize the change by updating the process documentation. Make sure to update any memory-jogger posters hanging on the walls, and the cheat sheets in people’s desk drawers, too. Including a document revision form such as Figure 3 in your control plan will ensure you capture a list of procedures that require updating.

Document Revision
Figure 3. Document Revision Form

Tip #4: Feedback on New Behaviors Ensures Adoption

New processes involve new behaviors on the part of the workers. Without regular feedback and positive reinforcement, new process behaviors will fade away or revert to the older, more familiar ways of doing the work. Providing periodic feedback and positive reinforcement to those using the new process is a sure-fire way to keep employees doing things right. Unfortunately, it’s easy for managers to forget to provide this feedback. Using a Process Behavior Feedback Schedule like Figure 4 below increases the chance of success for both providing the feedback and maintaining the gains.

Process BehaviorFigure 4. Process Behavior Feedback Schedule

Tip #5: Display Metrics to Reinforce the Process Improvements

Metrics play an integral and critical role in process improvement efforts by providing signs of the effectiveness and the efficiency of the process improvement itself. Posting “before and after” metrics in the work area to highlight improvements can be very motivating to the team. Workers see their hard work paying off, as in Figure 5. It is important to keep the metric current because it will be one of the first indicators if your process starts reverting.

Before After ChartFigure 5. Before and After Analysis
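The “before and after” comparison behind a chart like Figure 5 is simple arithmetic. The weekly defect counts below are hypothetical numbers chosen for illustration.

```python
# Illustrative before/after comparison for a posted metric.
# The weekly defect counts below are hypothetical.

before = [12, 15, 11, 14, 13]   # weekly defects before the change
after  = [6, 7, 5, 8, 6]        # weekly defects after the change

avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)
improvement = (avg_before - avg_after) / avg_before * 100

print(f"before: {avg_before:.1f}/week, after: {avg_after:.1f}/week, "
      f"improvement: {improvement:.0f}%")
```

Recomputing and reposting the “after” figure each week keeps the metric current, which is exactly what makes it an early-warning signal if the process starts to revert.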

When it comes to fishing and actually catching fish, practice, effective feedback and positive reinforcement make perfect.

The same goes for implementing process change. If you want to get past the learning curve quickly, use these tips and enjoy the benefits of an excellent process!

To access these and other continuous improvement forms, download the 30-day free trial of Companion from the Minitab website at http://www.minitab.com/products/companion/.

Harvard Professor – Clayton Christensen The Process of Strategy Formulation and Implementation

In the second of his lectures for Saïd Business School, Clayton Christensen gives an insight into the ‘panda’s thumbs’ of management thinking: dated practices that hinder management decision-making and the profitability of companies. He gives the examples of managers focusing too heavily on gross margins rather than net profit, and refusing to reduce their production costs as a way of avoiding disruption by smaller companies. He then gives an insight into his Jobs to be Done theory.
