Knowledge Space

Category Archives: Best Practices

Ensuring Successful New Products and Services with QFD

By Hans Hjort

Figure: The House of Quality from Quality Function Deployment

When was the last time you reached into the refrigerator for a King Cola, 7Up Gold or Pepsi Blue? Remember pizza at McDonald’s or Heinz chocolate-flavored French fries? From the Ford Edsel to the XFL, history is littered with costly product and service failures. While failure can occur for any number of reasons (e.g., poor quality, excessive cost, late to market, superior competition, etc.), many times it results from an inability to provide better value for the customer. Organizations can lessen their chance of failure by focusing on several key areas where mistakes are likely to occur.

Start at the Beginning

An organization focused on increasing revenue may immediately jump into developing a new product or seek a new market for an existing product. The development is rushed into motion as teams are assembled, market data is collected, designs are chosen and prototypes are built. The result can be a poorly planned product that does not align with the business strategy, differentiate itself from competitors or deliver value to the customer. Instead, the effort can waste money and resources.

The initial steps to develop a new product should include determining if an opportunity exists to provide better value than is currently available from an existing product. The new product should also fit with the organization’s business plan. Next, the organization should gain a thorough understanding of the market and its characteristics. This will help determine the expected profitability of the new product before expenses associated with engineering, production and marketing are incurred.

Unfortunately, some organizations shortcut these critical steps, precluding them from effectively capturing and understanding the voice of the customer, prioritizing customer requirements, determining trade-offs when requirements conflict (e.g., lightweight yet sturdy) and translating vague requirements (e.g., easy to use) into specific targets.

Utilize Best Practices

A relatively simple and inexpensive method has emerged to bring structure, organization, weights and measures to the decision-making process. Quality function deployment (QFD) is employed throughout a growing number of product development and service industries to guide the planning process. QFD is largely credited as a key force behind the radical transformation of the Japanese automotive industry in the 1980s.

The QFD chart organizes and assigns weights to desired performance parameters, allowing organizations to clearly see the trade-offs and compromises that often take place when deciding which features to include in a new product.

Once the performance parameters are defined, the organization is poised to set specific targets. This process includes consideration of many factors such as product strategy, technical competitive assessments, development costs, and investment risk. At the end of this activity, the organization can generate new concepts that best meet key customer and business requirements.

To produce successful products, it is essential that the entire organization share and effectively communicate the role of the customer. The product team should collectively own the strategy for addressing customer needs, applying technical know-how and resources, and using a shared understanding to evaluate and select the best solutions. QFD serves this purpose and is most effective when applied to three types of activities: planning, evaluation and deployment.

The Planning Matrix: The starting point for a planning matrix is a crisp definition of the customer segment. The objective is not simply to develop performance parameters and targets, but to enable the organization to form a strategy for approaching the customer. The combination of this information drives the determination of significance for each performance parameter and identifies which parameters are critical for product success. The critical few parameters form the content of the design scorecard to monitor success.

The Evaluation Matrix: Before each product solution is accepted, it must pass a filter of set requirements such as industry regulations and basic functionality. This process identifies solutions that provide the desired competitive advantage without violating any expectations. The evaluation matrix includes a description of the expected technology needs for the new product. Organizations should carefully consider the impact of the technology, as it may require a development schedule that misses the window of opportunity to meet customer requirements. If too much time passes between collecting the voice of the customer and implementation, the customer's requirements may have changed.

The Deployment Matrix: The deployment matrix identifies which subsystems are involved in delivering specific targets and to what degree they are involved. The matrix provides visibility to the connectivity of key deliverables derived from customer needs.

Conclusion: A Tool for Success

The benefits of QFD are numerous. Employing the QFD process aligns team members and management by providing visibility and buy-in at each step before moving forward with the project. It enhances management support by tying project decisions to strategic direction and prevents teams from operating in a vacuum since their activities are tied to the enterprise planning effort. In addition, QFD enhances the effectiveness of Six Sigma by providing clear visibility to critical parameters and maintaining a connection with the initial market strategy at all levels of the development effort.

Traditional product planning starts with analyzing the performance of an existing product and improving its features. The QFD tool can play a key role in transforming products to meet continually changing customer needs.

Quality Function Deployment for Competitive Advantage

By Charene Ross Clowney

In today’s business environment, companies cannot just assume they know what customers want – they must know for sure. And once they know what customers want, businesses must then provide products and services to meet and exceed customers’ desires. Business leaders have struggled for years to meet this challenge. Having the ability to truly listen to the voice of the customer (VOC) and respond to it appropriately is one good definition of a successful business, a business with a competitive advantage.

Companies that use Six Sigma employ the voice of the customer – internal and external customers – as a key element in implementing their business strategies. So important is VOC data that no Six Sigma project should proceed without first ensuring it is real, factual, relevant and correlated with the goals of the business. There is a useful and structured tool that helps to translate both spoken and unspoken customer requirements into key business deliverables. This tool is quality function deployment (QFD).

Focusing on ‘Positive Quality’

Many quality tools focus on “negative quality” – the things that disappoint the customer. One of the key distinctions about QFD is it focuses on “positive quality” – things that delight a customer. It looks at the items that please the customer and expands upon them. QFD is useful for cross-functional teams which have to agree on what is important.

QFD is useful in a number of different scenarios. Some examples are when:

  • A business knows the customers’ requirements but does not have adequate internal measurements relative to the requirements.
  • The internal processes and practices of a business cannot meet the customers’ requirements.
  • A large investment is required for a new product or service.
  • There is a lack of agreement within a business organization on how to proceed in delivering customer requirements.
  • There are competing alternatives for market segments.

QFD Around for Nearly 40 Years

QFD is not something new, but a tool that has been in existence for quite some time. Japanese professors Yoji Akao and Shigeru Mizuno developed it in the late 1960s. Their goal was to develop a tool that would design customer satisfaction into a product prior to being manufactured. Most other quality control methods of the time focused on fixing manufacturing problems after the fact.

QFD was first introduced to America and Europe in 1983. American automotive manufacturers Ford Motor Company and General Motors Corporation soon adopted it. Later, other American companies such as General Electric, IBM and AT&T started using this tool and reaping the benefits associated with it. QFD has been used successfully in all types of industries and business functions. For instance, it has been used in sales organizations to improve their top-line growth.

Figure 1: QFD Matrix (House of Quality)
Figure 2: The Four Houses of Quality

How does a company apply the methodology of QFD? The most important step in doing a QFD is to properly select the team. The size of the team is not as important as the quality of the team members. The team should be cross-functional and should consist of all of the necessary stakeholders crucial to the team's success. In addition, it is important to have the customer participate on the team. In doing so, the company will ensure that the customer's needs and wants are clearly understood and addressed. The QFD process tends to be dynamic in nature. Hence, it is wise to consider changing the team members as the company cascades through the four different houses of the QFD process.

Completing the QFD Process

The QFD process typically consists of four steps:

First House of Quality – House 1 is the customer house. In the customer house, the primary goal is to translate the voice of the customer into unambiguous and clear language. A business must understand what measurements the customer is using to determine if it has met their requirements. Next, the company must identify its internal metrics that determine if it has met the customer requirements.

Key elements that are critical to completing the first house are:

  1. Customers’ needs.
  2. Measurable characteristics of the customers’ needs.
  3. The relationship between items 1 and 2, rated as high, medium or low.
  4. An understanding of how the company compares to competitors (from the customers’ perspective).
  5. Competitive benchmarking.
  6. Preliminary measurement targets that will meet the customers’ requirements.

Once the company has identified the key elements above, it can perform a correlation between the measurable characteristics of the customers’ needs and their relative strengths. Finally, the company should analyze this first house to determine what improvements can be made.
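For readers who want to see the arithmetic behind the first house, here is a minimal sketch in Python. Everything in it is hypothetical: the needs, characteristics and importance weights are made up for illustration, and the 9-3-1 scoring of high/medium/low relationships is a common QFD convention rather than anything prescribed by this article.

    # Minimal House of Quality scoring sketch (hypothetical data).
    # Each customer need carries an importance weight; each technical
    # characteristic relates to a need as high, medium or low. A common
    # convention scores those relationships 9/3/1 and sums weight * score
    # to rank the technical characteristics.

    CUSTOMER_NEEDS = {        # need -> importance weight (1-5)
        "easy to use": 5,
        "lightweight": 3,
        "sturdy": 4,
    }

    RELATIONSHIPS = {         # characteristic -> {need: strength}
        "wall thickness (mm)": {"lightweight": "high", "sturdy": "high"},
        "number of controls":  {"easy to use": "high"},
        "assembly steps":      {"easy to use": "medium", "sturdy": "low"},
    }

    SCORE = {"high": 9, "medium": 3, "low": 1}

    def technical_importance(needs, relationships):
        """Rank technical characteristics by their weighted relationship to needs."""
        totals = {}
        for characteristic, rels in relationships.items():
            totals[characteristic] = sum(
                needs[need] * SCORE[strength] for need, strength in rels.items()
            )
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

    for name, score in technical_importance(CUSTOMER_NEEDS, RELATIONSHIPS):
        print(f"{name:22s} {score}")

The resulting ranking is one input to the preliminary measurement targets in item 6 above; the competitive comparisons and trade-off decisions still require judgment.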

Second House of Quality – House 2 is the company’s house. This house is typically constructed during the Measure and Analyze phases. The goal of completing the second house is to determine specific action items that the company can take to meet the requirements of the customer.

Third House of Quality – House 3 is the process house and is typically constructed during the Analyze phase. The goal of completing the third house is to determine which processes (that have data) can be used to meet the customers’ needs. It is possible that the process does not exist, so it may need to be developed.

Fourth House of Quality – House 4, the process control house, is typically constructed during the Control phase. The purpose of constructing this house is to identify the control variables that are being used to meet the customers' needs.

It is not necessary to construct all four houses every time that a QFD is performed. Judgment is needed to determine which houses are needed.

Conclusion: Helping Satisfy the Customer

So why should a company use QFD? It should be used because it is aimed at satisfying the customer throughout the whole business process from product/service development to delivery. It helps organizations reach agreement on measurement systems and performance specifications that will meet customer requirements. It is designed to improve a company’s strategic competitiveness. It also prioritizes the steps that a business must take in order to satisfy the spoken and unspoken requirements of the customer.

Improving Business Efficiency with Robotic Process Automation

By Gerald Taylor, MBA

What I hope to achieve in this article is to get you to consider one very important question: "Is your organization truly getting the most productive use of its employees' time and talent?" If I can get you to think in that direction or ponder this notion, then this article has achieved its objective.

This article explains how you can improve business efficiency and scale for growth with a relatively new technology called Robotic Process Automation (RPA). In it, I will describe what RPA is and provide an example so you can leave with a good understanding of how the tech works. I will also demonstrate how RPA is combined with lean management to eliminate waste and re-engineer a more cost-effective and productive future state. Finally, I will illustrate a method you can use to evaluate its potential in your organization.

What is Robotic Process Automation (RPA)?

Robotic Process Automation is an inexpensive, software-based technology. The programmed bots in RPA work on the desktop, interacting with your applications and technology platforms and performing human tasks at a rate 10 to 15 times faster than a person. RPA performs such tasks as re-keying data, logging into applications, moving files and folders, copying and pasting and much more. RPA bots are capable of performing most human-computer interactions to carry out an extraordinary number of error-free tasks. In fact, if you have employees serving as a quick fix to interoperability, meaning they are taking data from an old legacy system and inputting it into a CRM like Salesforce, or taking data from a mainframe and inputting it into another application, RPA is perfect for these activities, and it does so with zero errors or defects. Both public and private sector organizations find value in RPA as a solution for streamlining and automating repetitive, low-value-added work. It is also a very attractive alternative to lengthy system overhauls and transformations. And whereas a picture is worth a thousand words, a quick two-minute video example can explain what RPA is and its usefulness even better.

Growth in Process Automation

According to our research group, TPMG Analytics, growth in interest in RPA has been extraordinary. Since 2017, interest in RPA has grown at an average weekly rate of 27%. RPA is the first step into an emerging industry of artificial intelligence and machine learning and is expected to grow at a compound annual growth rate of 36% over the next three years.

Humans working side by side with robots is no longer the stuff of science fiction. Before we know it, RPA will be to routine administrative tasks what robots are now to high-tech manufacturing. Automation algorithms can now be designed to handle larger and larger ranges of tasks, and the consistent, unbiased decision making of machine learning represents a competitive advantage over human operators. Over the next 5 to 10 years productivity will be explosive, and people will be freed to work solely on higher-priority, value-generating tasks. According to a recent Oxford University study, The Future of Employment, about 47 percent of total US employment is at risk of being automated or digitized over this decade.

Is RPA a Good Fit for Your Organization? 

The cost of sub-optimized workers imprisoned in low-value-added tasks has been estimated at 30 percent of operating cost. It is a hidden cost. RPA is well suited for high-volume processes with the potential for high human error rates and where human beings are subject to the law of diminishing marginal returns. The fact that one minute of work from RPA translates into 15 minutes of human activity means employees can be released from the "prison of the mundane" to work on higher-priority tasks.

Tremendous success stories are common with the technology. RPA enhanced a bank's ATM dispute resolution process by reducing the turnaround time from 48 hours to 2 hours. An insurance company was able to reduce its document processing time from 16 minutes to 3 while improving overall processing productivity by 87.5%. A leading healthcare system was able to reduce its resource cost by 50% while improving its quick-verification response time by more than 70%. And a financial tech company was able to reduce its data inspection and verification time by more than 83% while redeploying 57% of its staff complement to open job requisitions and higher-priority tasks. The healthcare industry is showing incredible potential for automation. Let me reiterate: these types of success stories are common with the technology. Which leads us to our final question.

How do I know if RPA is a good fit for my organization? This is where the practice of lean management and process engineering comes into the picture. Your first step is to identify the process you believe to be a candidate for automation. Your next step is to conduct a waste walk: a direct observation of the work as it is done, along with a series of individual interviews, to construct a straw model of the process. After creating the straw model, you want to conduct a red flag analysis to identify inherent weaknesses that create a drag on productivity. You will find repeating quality and accuracy checks, manual tasks ripe for automation, duplication of effort, collection of irrelevant data, and blatant mismatches between job need and employee skill. Afterwards, you want to capture and record legitimate opportunities to automate in time and motion studies via Zoom or WebEx. It is a very simple activity in which you calculate minutes per task, cost per minute and cost per task, then multiply the result by the volume of work to generate a cost/benefit analysis. Reviewing the related financial models should give you a clear positive or negative picture of automation's potential impact.

Performing a Cost/Benefit Analysis
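As a rough illustration of the time-and-motion calculation described above, here is a minimal sketch in Python. Every figure is a hypothetical placeholder; substitute the minutes per task, labor rates, volumes and bot costs from your own study.

    # Hypothetical RPA cost/benefit sketch based on a time and motion study.
    minutes_per_task = 12        # observed average handling time per task
    hourly_labor_cost = 36.0     # fully loaded employee cost, $/hour (assumed)
    tasks_per_year = 25_000      # annual volume of the candidate process
    bot_annual_cost = 15_000.0   # assumed license + amortized development cost

    cost_per_minute = hourly_labor_cost / 60
    cost_per_task = minutes_per_task * cost_per_minute
    annual_manual_cost = cost_per_task * tasks_per_year

    net_annual_savings = annual_manual_cost - bot_annual_cost
    hours_released_per_year = minutes_per_task * tasks_per_year / 60

    print(f"Cost per task:          ${cost_per_task:,.2f}")
    print(f"Annual manual cost:     ${annual_manual_cost:,.2f}")
    print(f"Estimated net savings:  ${net_annual_savings:,.2f}")
    print(f"Capacity released (hr): {hours_released_per_year:,.0f}")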

What is your Improvement Priority?

Is your organization truly getting the most productive use of its employees' time and talent? In which one of these areas are you personally convinced there is room for improvement in your company: scaling for growth, productivity improvement, cost effectiveness, or velocity? If you are curious, TPMG OpEx can not only help you answer this question, but can also shepherd you through a no-risk/no-cost discovery process. We can partner with you to identify a job function and set up a complimentary proof-of-concept RPA bot. As an outcome of the discovery process you can: 1. benefit from a free cost/benefit analysis, 2. experience the value of RPA in your operation, and 3. discover if RPA is a good fit for your organization.

Contact us today! TPMG OpEx – Operational Excellence

Gerald Taylor, MBA is the Managing Director of TPMG Consulting and a Lean Six Sigma Master Black Belt

What is a good Net Promoter Score? And how does it vary across industries?

Jon Gitlin – SurveyMonkey

What a good Net Promoter Score looks like

According to our global benchmark data, which accounts for the NPS of more than 150,000 organizations, the average score is +32. 

SurveyMonkey Global Benchmark NPS and source information.

Here’s a closer look at the global benchmark numbers:

  • The lower quartile of organizations (or the bottom 25% of performers) have an NPS of 0 or lower.
  • The median NPS is +44. (Half of organizations have an NPS below this score, and the other half have a score that’s higher.)
  • The upper quartile of organizations (or the top 25% of performers) have an NPS of +72 or higher.

Comparing yourself to all of the other organizations isn’t always the best representation of how you’re doing, since the customer experience can vary (a lot!) by industry. For example, according to the American Customer Satisfaction Index, subscription television service providers offer a significantly worse customer experience than internet retail businesses.  

So what is a good Net Promoter Score for organizations in your space? Here’s a breakdown across 3 common categories: professional services (legal, financial, etc.), technology (telecommunications, computer manufacturers, etc.), and consumer goods and services (retailers, restaurants, etc.):

Industry                    | Average NPS | Median NPS | Top quartile    | Bottom quartile
Professional services       | +43         | +50        | +73 (or higher) | +19 (or lower)
Technology companies        | +35         | +40        | +64 (or higher) | +11 (or lower)
Consumer goods and services | +43         | +50        | +72 (or higher) |

As you can see, organizations categorized as professional services and consumer goods and services tend to deliver a similar customer experience—minus subtle differences in their top and bottom performers–but technology companies are slightly behind in every NPS calculation.

Whether you need to catch up to your industry’s average NPS or keep a leading position, there are several ways to raise your score.

3 ways to improve your Net Promoter Score

1. Develop a systematic process for tracking your NPS and reacting to it.

The customer experience is constantly evolving. If you can keep your finger on the pulse of your customer sentiment and take steps toward addressing their feedback quickly, you’ll be more likely to have loyal, happy customers. 

Learn how surveys can help you track—and act on—your NPS by reading our ultimate guide to running a customer feedback program.

2. Give the entire team a chance to engage with customers.  

Whether your colleagues know it or not, their work can influence the customer experience. The better they understand their impact, the more likely they are to tailor their work to best benefit customers—and your NPS. 

You can empower your team to learn from customers by adopting customer interaction reports. They involve asking employees to have a conversation with a customer (as short as 5 minutes) and then fill out a survey to summarize the conversation. Sharing these results on a platform any employee can access can inspire the team and give them insight into what customers care about.

3. Invest in your customer-facing employees.

Every customer interaction shapes the client’s perception of your organization. In fact, roughly a third of customers, on average, plan to switch to an alternative company after a single case of poor customer service.

Prevent your organization from losing customers by building a first-class customer-facing team. Invest in trainings and product/service-related resources they can refer to in order to answer customer questions as quickly and effectively as possible. 

NPS®, Net Promoter® & Net Promoter® Score are registered trademarks of Satmetrix Systems, Inc., Bain & Company and Fred Reichheld

Research by Adobe shows that 8% of repeat customers are responsible for 41% of total online revenue in the U.S., 26% of total online revenue in Europe and 16% of total online revenue in the U.K.

Customer retention, loyalty, or willingness to recommend.  In which one of these areas are you personally convinced there is room for improvement?

Contact us today and arrange a complimentary voice of the customer business case analysis.

TPMG CX – Translating the Voice of the Customer into Revenue Growth!

What is the Net Promoter Score?

SurveyMonkey

The Net Promoter Score is the world’s leading metric for measuring customer satisfaction and loyalty. It goes beyond measuring how satisfied a customer is with a company; the Net Promoter Score system is designed to gauge their willingness to recommend it to others.

Now that you know what the Net Promoter Score (NPS) is, let’s review how to calculate it.

The Net Promoter Score scale

The score comes from the NPS question, which is:

“On a scale of 0 to 10, how likely is it that you would recommend our organization to a friend or colleague?”

Based on the number a customer chooses, they’re classified into one of the following categories: “Detractors,” “Passives,” and “Promoters.”

Score breakdowns:

  • 0-6: Detractors
  • 7-8: Passives
  • 9-10: Promoters

You can think of the NPS system as similar to a four-star system on an online review, but the NPS scale gives you a broader way (and a more accurate method) to measure customers' opinions.

How to calculate your company’s Net Promoter Score

Let’s say you’ve sent out an online poll with the NPS question and the 0-10 scale, and you’ve received 100 responses from customers. What do you do with the results? Is it as simple as averaging the responses? Well, not quite. But it’s almost that easy.

The NPS system gives you a percentage, based on the classification that respondents fall into—from Detractors to Promoters. So to calculate the percentage, follow these steps:

  • Enter all of the survey responses into an Excel spreadsheet
  • Now, break down the responses by Detractors, Passives, and Promoters
  • Add up the total responses from each group
  • To get the percentage, take the group total and divide it by the total number of survey responses
  • Now, subtract the percentage total of Detractors from the percentage total of Promoters—this is your NPS score

Let’s break it down:

NPS = (Number of Promoters - Number of Detractors) / (Number of Respondents) x 100

Example: If you received 100 responses to your survey:

  • 10 responses were in the 0–6 range (Detractors)
  • 20 responses were in the 7–8 range (Passives)
  • 70 responses were in the 9–10 range (Promoters)

When you calculate the percentages for each group, you get 10%, 20%, and 70% respectively.

To finish up, subtract 10% (Detractors) from 70% (Promoters), which equals 60%. Since a Net Promoter Score is always shown as an integer and not a percentage, your NPS is simply 60. (And yes, you can have a negative NPS, as your score can range from -100 to +100.)
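If you would rather script the arithmetic than do it by hand, here is a minimal Python sketch that reproduces the worked example above; the list of 100 responses is hypothetical.

    # Net Promoter Score from raw 0-10 survey responses.
    def net_promoter_score(responses):
        """% promoters (9-10) minus % detractors (0-6), rounded to an integer."""
        total = len(responses)
        promoters = sum(1 for r in responses if r >= 9)
        detractors = sum(1 for r in responses if r <= 6)
        return round((promoters - detractors) / total * 100)

    # 10 detractors, 20 passives, 70 promoters -> NPS of 60, as in the example.
    responses = [3] * 10 + [8] * 20 + [10] * 70
    print(net_promoter_score(responses))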

Performing these calculations might seem overwhelming, but it's well worth the effort. Numerous research studies suggest that the NPS system correlates with business growth. In fact, studies by the Harvard Business Review and Satmetrix have found that companies across industries earn a higher income when they improve their Net Promoter Scores.

So, if you’re looking for a more scientific way to understand your brand’s strength, the NPS is a straightforward system to use. And if you’re looking to contextualize your score, you can benchmark it against others in your industry.

  1. Looking to run a survey that uses the NPS question?
  2. Have you run a survey and looking for ways to identify the drivers of your company’s Net Promoter Score?

Contact us and we will show you how: TPMG CX – Translating the Voice of the Customer into Revenue Growth!

6 Steps to Building a Better Workplace for Black Employees

30 Sep 2019 | by Dina Gerdeman

To support black employees, business leaders must challenge biases and help employees be themselves, according to a new book co-edited by Anthony J. Mayo, Laura Morgan Roberts, and David A. Thomas.

When Barack Obama was elected president in 2008, some saw it as proof that the color of one’s skin could no longer hold people back from achieving important leadership roles in the United States.

Not true, says Harvard Business School senior lecturer Anthony J. Mayo. “Obama’s election created this false illusion of a post-racial society, where many people thought we had transcended issues of race,” he says. “But that was not the case at all.”

It certainly wasn’t the experience for many of the black business executives included in the book Race, Work, and Leadership: New Perspectives on the Black Experience, co-edited by Mayo, University of Virginia Professor Laura Morgan Roberts, who is a visiting scholar at HBS, and David A. Thomas, president of Morehouse College and a former professor at HBS.

“These African American executives never reported feeling, even during the Obama years, that race was no longer relevant or that we had somehow collectively moved beyond race in the workplace,” Roberts says.

The picture that emerges from the essays in Race, Work, and Leadership echoes the same message: Race not only still matters in the American workplace, but it remains a powerful barrier that prevents African Americans from ascending to leadership roles.

The data is indeed bleak. While an increasing number of African Americans are earning bachelor's and graduate degrees, the number of black people in management and senior executive positions remains small and stagnant. Today, there are only three black CEOs of Fortune 500 companies, and not one of them is a woman.

What doesn’t help, the authors say, are recent incidents in the news, including the 2017 white supremacist march in Charlottesville, Virginia, and the 2018 arrest of two black men at a Philadelphia Starbucks after employees called the police to complain they were trespassing, even though they were just waiting for a business acquaintance.

“Given the racist rhetoric and vitriol in the air right now, racism is more prevalent today than we would have hoped,” says Mayo, the Thomas S. Murphy Senior Lecturer of Business Administration. “We’ve made some progress in the workplace, but we still have such a long way to go. It’s more important than ever to discuss what organizations can do about it.”

The book describes the experiences of African American workers and offers advice to black employees who seek to advance in their careers. It also provides these recommendations for companies that are intent on building diverse workplaces:

1. Encourage employees to talk about race

After two fatal police shootings of black men in 2016, Tim Ryan of PwC asked his staff to gather for a series of conversations about race. Two years later, when one of PwC’s own black employees was shot to death by an off-duty police officer, Ryan emailed his employees with a plea to keep talking.

Yet, the explicit discussion of race is considered taboo at many companies, and, more often than not, business leaders remain silent on the issue. That cloak of silence from the top tends to enfold all employees. Ellis Cose, an author of several books about race and public policy, writes that young black professionals who aspire to advance to senior leadership positions typically adopt the strategy of remaining silent about race and inequality to avoid being labeled “agitators.”

In a 2017 study by Sylvia Ann Hewlett and colleagues, 78 percent of black professionals said they have experienced discrimination or fear that they or their loved ones will, yet 38 percent felt it is never acceptable to speak about their experiences of bias at their companies.

 


All that hushing of the topic can make African American workers feel as if companies are not willing to address their concerns that their talent is being undervalued or squandered, which can leave them feeling less engaged with colleagues, less satisfied with their work, and less loyal to their companies, according to the book.

2. Help white colleagues contribute to the race conversation

Black leaders shouldn’t be the only ones talking about race, the authors say. It’s time for their white colleagues to stop pretending racial tensions don’t exist and start initiating conversations at work, even if they worry about feeling uncomfortable or saying the wrong thing.

“We can’t just rely on the small percentage of black executives who reach the top to wave the flag. That’s an unfair burden,” Mayo says. “If real systemic change is going to happen, it has to come from the white majority who often are in positions that give them greater leverage to change the environment. That being said, white employees may worry about their ability to effectively discuss race, but if they approach it with a sense of openness and learning, they can play an important role in advocating change.”

Managers must learn to create safe spaces at work to have these conversations and let employees know it’s OK to talk about incidents in the news, like police shootings of black people, by asking them, “How does that make you feel?”

“When black employees bring their full identities to work, they bring a set of stories and experiences that can be both painful and powerful, yet it can be hard for them to let their guard down and connect,” Mayo says. “So, creating the psychologically safe environment to have these conversations is important, with managers learning how to provide the proper support during these discussions.”

3. Tackle systemic inequality, starting with the corporate culture

Many organizations have created diversity and inclusion programs in an attempt to recruit and retain more minorities, but the initiatives often fall short, the authors say.

The problem: These programs tend to focus on helping black employees fit into the status-quo culture, rather than eliminating systemic inequality within their organizations. Companies should focus on managing injustice, rather than “managing blackness,” Courtney McCluney and Veronica Rabelo write in their chapter of the book.

Companies can start by using data analytics to assess whether employees feel included on their teams and are treated fairly within their larger organizations. “These surveys should be broken down by demographic categories, including race and gender, to identify certain populations that have a lower engagement or sense of commitment to the organization,” Roberts suggests.

4. Keep confronting racial bias in hiring

Companies should train managers to root out racial bias from their hiring and recruitment processes. They should also invest in retaining black professionals, in part by reinforcing the message that race will not be a barrier to advancement.

That’s especially important today, since inclusion programs have shifted in recent years toward recognizing more forms of diversity—based on gender and sexual orientation, for instance. Employers need to make sure that discussions about race aren’t getting lost as they work to make other groups feel like they belong.

“It’s good that we’re recognizing more forms of diversity,” Roberts says. “But, it seems like we’re talking more generally about belongingness now, and some of the most difficult conversations about creating racially diverse organizations are getting sidelined. We have to make sure we aren’t erasing race from the conversation.”

5. Support employees so that they can be themselves

Research shows that minorities at work feel pressure to create “facades of conformity,” suppressing some of their personal values, feeling unable to bring their whole selves to work, and believing they should nod in agreement with company values, according to the book.

Mayo says creating opportunities for people to bring their authentic selves to work boosts engagement and helps employees contribute more to the organization.

Creating a support network for workers can go a long way. Research shows that when professionals from diverse backgrounds have solid relationships with their managers and co-workers, they’re more satisfied and committed to their jobs. These relationships can grow through day-to-day work interactions, but also through informal get-togethers.

For instance, employees at one consulting company started a book club that focused on black writers and coordinated visits to African American museums and historical sites. And when American Express was looking to gain a better understanding of its African American customers, company officials tapped black employees for their insight, which helped signal that race is important, the authors say.

6. Be mindful of the “mini me” phenomenon

Managers should also check themselves when they evaluate their employees’ performance and advancement potential, taking a hard look at whether they’re choosing a “mini me” when they hand out a plum assignment or consider promotions, Roberts says.

“A lot of managers will say, ‘This guy has potential because he reminds me of myself when I was younger.’ Some people get a pass, and there’s a lower bar to being given an opportunity, while other people have a higher bar based on their identity,” she says. “So, it’s important to be race conscious when evaluating people’s potential to make sure these decisions aren’t biased.”

Once that potential is identified, managers should coach their workers, provide regular feedback, and champion them, showing them they have their backs as they learn and even make mistakes.

“With an underrepresented group, you need to have managers in your corner who are going to have some skin in the game, put themselves out there, and support you in your career, just as they would support your majority counterparts,” Mayo says. “They’re not just going to throw you into the deep end of the pool and expect you to survive on your own. Instead, they’ll stick with you to provide the support you need to succeed.”

About the Author

Dina Gerdeman is senior writer at Harvard Business School Working Knowledge.

Edward Jones Adds Robotic Process Automation with Lean Six Sigma

By Brooke Holmes

Automation 1.0

Robotic process automation (RPA), commonly referred to as “bots,” is a type of software that can mimic human interactions across multiple systems to bridge gaps in processes that previously had to be handled manually. RPA software applications can be integrated with other advanced technologies such as machine learning or artificial intelligence. But at the most basic level, they act like super-macros following a detailed script to complete standardized tasks that do not require the application of judgment.

Why Combine RPA and Lean Six Sigma?

Replacing manual work with bots removes the possibility of human error and reduces rework and quality checks while increasing accuracy. Bots can work much faster than humans and at any hour of the day so long as the underlying systems are operational. The potential to reduce overhead costs and process cycle time is vast. Bots also provide enhanced controls for risk avoidance.

Bots can serve as a foot in the door to gain traction for a quality program. Senior level executives get excited by the potential of this relatively affordable technology. By incorporating a thoughtful Lean Six Sigma (LSS) process review into a company’s bot deployment strategy, quality programs will gain additional visibility and leadership support.

Effective Bot Deployment at Edward Jones

Edward Jones is a financial services firm serving more than 7 million clients in the US and Canada. Their operations division began exploring RPA in 2017 and put their first bot into production in November 2018. Since then, they've deployed 17 additional bots, yielding capacity savings equivalent to 15 full-time employees, which in turn generated more than a million dollars in cost avoidance. While still at an early stage in this journey, the operations division has developed a structured approach using LSS tools to assess process readiness for automation, minimize or remove non-value-added work steps prior to development (abandonment), and redesign the process to fully leverage the benefits of RPA.

LSS Process Review

Using a questionnaire to begin their intake process, business areas submit critical data regarding process volumes, capacity needs, system utilization and risk level. This data feeds into a prioritization matrix that allows them to decide where to focus energy and time. Once a process is identified for RPA, a member of the quality team engages the business area for a LSS process review using familiar tools such as a project charter, stakeholder analysis, SIPOC (suppliers, input, process, outputs, customers) and process maps.
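The article does not publish the firm's actual matrix, but a simple weighted-scoring version might look like the sketch below; the criteria, weights and candidate scores are hypothetical placeholders for illustration only.

    # Hypothetical weighted prioritization matrix for RPA intake.
    WEIGHTS = {"volume": 0.35, "capacity_need": 0.30, "system_stability": 0.15, "risk": 0.20}

    candidates = {  # process -> 1-5 scores from the intake questionnaire (made up)
        "charitable distributions": {"volume": 5, "capacity_need": 4, "system_stability": 4, "risk": 3},
        "address changes":          {"volume": 3, "capacity_need": 2, "system_stability": 5, "risk": 2},
        "statement reprints":       {"volume": 2, "capacity_need": 1, "system_stability": 3, "risk": 1},
    }

    def priority_score(scores):
        """Weighted sum of criterion scores; higher means a stronger RPA candidate."""
        return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

    for process, scores in sorted(candidates.items(), key=lambda kv: priority_score(kv[1]), reverse=True):
        print(f"{process:26s} {priority_score(scores):.2f}")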

After thoroughly understanding the process's current state, the practitioner and corresponding business area redesign the process for robotics. Next, they complete an FMEA (failure mode and effects analysis) and business continuity plan to ensure process risk is being adequately controlled. After this LSS process review has concluded, a broad group of experts – including robotics developers, internal audit staff, risk leaders and senior leadership from all impacted business areas – are brought together to jointly review the robotics proposal and agree on a go/no-go decision.

A critical component of this process review is thorough documentation of every step along the way. Using an Excel playbook to organize all the tools in one place enables a smooth transition as the effort moves from the quality team to the robotics development team. Then, this comprehensive documentation is retained by the business area for ongoing maintenance. Specific elements of this documentation include a systems inventory, a record of all sign-off dates and approvals and a business continuity plan for disaster recovery. Having complete documentation enables the business areas to take a proactive approach when faced with upcoming system changes or unexpected work disruptions. It also equips business areas with any data points required for routine internal or external audits.

Deployment Pitfalls to Avoid

There are some specific areas of concern when it comes to RPA.

  • Communication: Provide clarity to business areas about what RPA can and cannot do, and what processes fit best with this technology. Without an accurate understanding of the capabilities of RPA, there will be an influx of unsuitable requests for this new technology and, as a result, many disappointed business areas and wasted effort spent putting together their business cases. At Edward Jones, the most common misunderstanding concerned the reading limitations of the specific RPA vendor being used. While the bots can recognize characters in static fields, they are not able to interpret characters in an unstructured context. This ruled out many initial RPA requests. Additionally, while comparing RPA to macros was initially an effective way to explain the technology to business leaders who were not knowledgeable about technology development, this comparison created an unfortunate misconception that coding and implementing bots was as fast and easy as creating a macro. Business areas were not expecting development time to take four to six months for what they perceived to be a simple request.
  • Change Management: Incorporate thoughtful change management throughout the deployment at all levels of the organization. Leveraging bots will take away manual tasks being completed by employees. Some employees may welcome the automation of monotonous tasks, but others may view this technology as a threat to job security. Supervisors will need to adapt and grow their skills to include oversight of the RPA technology. Strong people leaders often don’t have the same level of competency in the technical space, and they will need to quickly increase knowledge and skill to effectively manage their automated processes. Senior/C-suite leaders will need to consider the inherent risks associated with using RPA, the infrastructure and skills needed to support an RPA program, and how to obtain the needed resources and talent.
  • Human Resources: Bots may create job redundancy, creating the potential for job loss or reassignment. Engage human resources early to navigate these situations.
  • Governance: Balance senior leader involvement so they feel comfortable with automation without extra levels of required approvals that slow the development process down.
  • Don’t Force a Problem to Fit the Solution: RPA is not the right solution for every bad process. In the early phase of bot deployment, it is easy to let excitement about the new technology lead to poor choices around when to apply RPA. This leads to disappointing results that could undermine the entire bot deployment. Identify clear criteria regarding when bots are an appropriate solution and use a disciplined approach to evaluate each new process improvement opportunity. Consider non-bot solutions before a final decision is reached.
  • Vendor Approvals: Any third-party vendors must permit bots to interface with their systems. Review vendor contracts or have new contracts signed to ensure bots are legally allowed to interact with vendor systems and web sites before beginning development.
  • Resource Constraints: Set clear expectations with business areas about the work involved and resources needed to design and implement an RPA solution. The quality team and technical developers do not have the knowledge required about the specific processing steps to complete this work without a subject-matter expert from the business area being heavily involved throughout the project life cycle.
  • Results: Heavy focus on capacity savings only tells part of the story. Identify other meaningful methods of communicating value from RPA implementation, such as risk reduction, faster cycle time, improved client experience or increased accuracy.

Case Study: Automating Retirement Disbursements to Charities

An example of an RPA implementation at Edward Jones involves the process of receiving, validating and executing on client requests to send monetary donations from qualified retirement accounts to charitable organizations. Prior to implementing the bot, the Qualified Charitable Distribution (QCD) process required 11 hours of manpower each day to get through the volume of donations – and the number of requests had been doubling each month.

The process had five to 10 errors monthly due to the manual data entry required, which in turn took one to three hours of leader or senior processor time to resolve. A bot was designed and implemented that would validate the original request (quality check) and then enter the appropriate data into a computer screen to issue the check to the selected charity.

Stakeholder Analysis and SIPOC

After the project charter was created and agreed upon by the project Champion and project team, a stakeholder analysis was conducted to identify any additional individuals or business areas that were upstream or downstream of the process or might be affected by a change to the process. These parties were consulted or communicated with throughout the effort to ensure process impacts were understood and considered as the automation opportunity was identified and designed.

Next, a SIPOC matrix was created to understand all the process inputs, including systems, data files and end users. Together, the stakeholder analysis and SIPOC are essential in ensuring all critical components of the process upstream and downstream are identified early in the automation effort so no processing gaps are created during RPA development.

SIPOC Analysis: SIPOC for the QCD Automation Project

Supplier | Inputs | Process | Outputs | Customer
Client, branch team | Client instructions, intranet form message | Branch team sends form message with client instructions for QCD | Unexecuted client request in the retirement department queue | Retirement support team
Retirement support team | Form message, client account information, IRS rules, client request | Retirement associate reviews client request for QCD to confirm eligibility | Validated client request | Retirement support team
Retirement support team | Validated client request | Issue check | Executed request, issued check | Client, branch team
Retirement support team | Client request, issued check | Close client request on system | Completed client request for QCD | Client, branch team

Current- and Future-State Process Maps

The next step was to create detailed current- and future-state process maps. The current-state process map must include enough detail to highlight all the data sources required by the process, and where that data must be entered to move the process forward. The future-state map must incorporate all of those critical points, while also accounting for the limitations of RPA technology (inability to “read”) and the advantages of RPA (directly ingesting data files, speed and accuracy).

For the QCD process, the client verification step needed to be handled differently for RPA than in the original process. Previously, an employee was comparing client names between the original client request and the account registration referenced in the request to ensure a match. Names can be difficult for RPA to match because the technology doesn’t understand common nicknames that might be used interchangeably with legal names. For example, “Bill” and “William” would flag as a mismatch by the robotic technology, while a human processor would recognize those as referring to the same individual. To avoid large numbers of false positives from the bot flagging mismatches caused by nicknames, an alternative form of identification matching was used, in this case a social security number.
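To make the design change concrete, here is an illustrative sketch, not Edward Jones' actual code, contrasting a naive name comparison with matching on a normalized identifier; the field names and formats are assumptions.

    # Why exact name matching produces false mismatches, and the alternative.
    def names_match(request_name: str, registration_name: str) -> bool:
        # Naive rule a bot might follow: exact comparison after trimming and casing.
        # Flags "Bill" vs. "William" as a mismatch even though both refer to one person.
        return request_name.strip().lower() == registration_name.strip().lower()

    def ids_match(request_ssn: str, registration_ssn: str) -> bool:
        # Compare digits only so formatting differences don't trigger false flags.
        digits = lambda s: "".join(ch for ch in s if ch.isdigit())
        return digits(request_ssn) == digits(registration_ssn)

    print(names_match("Bill Smith", "William Smith"))   # False -> false positive
    print(ids_match("123-45-6789", "123 45 6789"))      # True  -> verified match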

In a typical Six Sigma effort, the goal is to achieve a more streamlined future-state process map with fewer processing steps and fewer decision points. One key difference between process maps for an RPA effort and those for a more typical Six Sigma improvement effort is that the future-state process maps may contain more, not fewer, steps and decision points. This is normal and shows that the automation capability is being fully utilized to provide a higher level of accuracy. Since the bot processes at a speed much faster than a human can achieve, these additional quality checks do not add to the overall process cycle time. Each decision point with RPA represents a quality assurance checkpoint, allowing the final output to have higher accuracy than the original process achieved.

Figure 1: QCD Process – Before RPA

Figure 2: QCD Process – After RPA

Risk Assessment

Once the future automated state has been identified, conduct a risk assessment to understand the risks associated with the current process and how the process risks may be affected by RPA. The largest risk associated with the QCD process was the manual nature of the process and likelihood of human error. This risk was eliminated by using bots.

However, automation adds different types of risks, including system failures and coding errors. By identifying potential risks and using control reports to quickly identify and remediate issues, these risks can be effectively managed.

Business Continuity Plan

The final element of the process review is a business continuity plan, specifically focused on failure of RPA to successfully perform the programmed tasks. Consideration should be given to a failure of the bot itself but also any underlying systems that the bot needs to interact with to obtain data or execute requests. Planning should include how to perform the work if the automation is not operational for a particular timespan as well as how to identify and resolve errors made by the bot if the programming becomes corrupted.

Through this planning exercise, a critical aspect of the QCD process was identified that may have led to future bot failure had it not been remedied. Volumes for this highly seasonal process rise drastically at year end, and a single bot was unlikely to keep up with the work at this peak. Programmers were able to proactively solve this issue by diverting process volume onto three separate bots to stay on top of the surge of work during these high-volume time periods.

Results

The QCD bot was implemented in September 2019 and immediately realized 11 hours of capacity savings with no errors. The total project cycle time from the initial continuous improvement analysis, through the bot design, development, testing and implementation took seven months. Since implementing RPA on this process, 100 percent of the process has been automated with zero errors. Process risk was reduced by one point on a 10-point scale by eliminating human error from manual work steps.

During routine follow-up six months after bot implementation, the project team learned that the benefits received from the automation had grown significantly. The volume of client requests for charitable distributions had increased rapidly, so the bot was now performing work that would have taken 34 hours – or five employees – to complete each day.

Conclusion

Don't shortcut the methodology when leveraging RPA and other new technologies. Technology only masks a bad process, so clean up the underlying work steps first to maximize the benefit of RPA.

Regression Analysis Tutorial and Examples

I've written a number of blog posts about regression analysis, and I've collected them here to create a regression tutorial. I'll supplement my own posts with some from my colleagues.

This tutorial covers many aspects of regression analysis including: choosing the type of regression analysis to use, specifying the model, interpreting the results, determining how well the model fits, making predictions, and checking the assumptions. At the end, I include examples of different types of regression analyses.

If you’re learning regression analysis right now, you might want to bookmark this tutorial!

Why Choose Regression and the Hallmarks of a Good Regression Analysis

Before we begin the regression analysis tutorial, there are several important questions to answer.

Why should we choose regression at all? What are the common mistakes that even experts make when it comes to regression analysis? And, how do you distinguish a good regression analysis from a less rigorous regression analysis? Read these posts to find out:

Tutorial: How to Choose the Correct Type of Regression Analysis

Minitab statistical software provides a number of different types of regression analysis. Choosing the correct type depends on the characteristics of your data, as the following posts explain.

Tutorial: How to Specify Your Regression Model

Choosing the correct type of regression analysis is just the first step in this regression tutorial. Next, you need to specify the model. Model specification consists of determining which predictor variables to include in the model and whether you need to model curvature and interactions between predictor variables.

Specifying a regression model is an iterative process. The interpretation and assumption verification sections of this regression tutorial show you how to confirm that you’ve specified the model correctly and how to adjust your model based on the results.

  • How to Choose the Best Regression Model: I review some common statistical methods, complications you may face, and provide some practical advice.
  • Stepwise and Best Subsets Regression: Minitab provides two automatic tools that help identify useful predictors during the exploratory stages of model building.
  • Curve Fitting with Linear and Nonlinear Regression: Sometimes your data just don’t follow a straight line and you need to fit a curved relationship.
  • Interaction effects: Michelle Paret explains interactions using Ketchup and Soy Sauce.
  • Proxy variables: Important variables can be difficult or impossible to measure but omitting them from the regression model can produce invalid results. A proxy variable is an easily measurable variable that is used in place of a difficult variable.
  • Overfitting the model: Overly complex models can produce misleading results. Learn about overfit models and how to detect and avoid them.
  • Hierarchical models: I review reasons to fit, or not fit, a hierarchical model. A hierarchical model contains all lower-order terms that comprise the higher-order terms that also appear in the model.
  • Standardizing the variables: In certain cases, standardizing the variables in your regression model can reveal statistically significant findings that you might otherwise miss.
  • Five reasons why your R-squared can be too high: If you specify the wrong regression model, or use the wrong model fitting process, the R-squared can be too high.
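The posts above work through Minitab's menus. As a rough outside-of-Minitab illustration of the same idea, the sketch below uses Python with statsmodels and simulated data to compare a straight-line model against one that adds a curvature term; it is a toy example, not part of the original tutorial.

    # Compare a linear fit with a quadratic fit on simulated, curved data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 100)
    y = 2 + 1.5 * x - 0.12 * x**2 + rng.normal(scale=1.0, size=100)  # curved truth

    linear    = sm.OLS(y, sm.add_constant(np.column_stack([x]))).fit()
    quadratic = sm.OLS(y, sm.add_constant(np.column_stack([x, x**2]))).fit()

    print(f"Linear    R-sq {linear.rsquared:.3f}  AIC {linear.aic:.1f}")
    print(f"Quadratic R-sq {quadratic.rsquared:.3f}  AIC {quadratic.aic:.1f}")
    print(quadratic.pvalues)   # p-value on the squared term tests the curvature

If the squared term is statistically significant and the fit statistics improve, the curvature belongs in the model; the same logic extends to interaction terms.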

Tutorial: How to Interpret your Regression Results

So, you’ve chosen the correct type of regression and specified the model. Now, you want to interpret the results. The following topics in the regression tutorial show you how to interpret the results and effectively present them:

Tutorial: How to Use Regression to Make Predictions

In addition to determining how the response variable changes when you change the values of the predictor variables, the other key benefit of regression is the ability to make predictions. In this part of the regression tutorial, I cover how to do just that; a short code sketch follows the list of posts below.

  • How to Predict with Minitab: A prediction guide that uses BMI to predict body fat percentage.
  • Predicted R-squared: This statistic indicates how well a regression model predicts responses for new observations rather than just the original data set.
  • Prediction intervals: See how presenting prediction intervals is better than presenting only the regression equation and predicted values.
  • Prediction intervals versus other intervals: I compare prediction intervals to confidence and tolerance intervals so you’ll know when to use each type of interval.
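As a minimal sketch of that workflow (simulated data echoing the BMI and body-fat example above, with Python's statsmodels standing in for Minitab), a fitted model can return both a confidence interval for the mean response and a wider prediction interval for a single new observation:

```python
# Minimal sketch: point predictions plus prediction intervals.
# Python/statsmodels stand-in for Minitab; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({"bmi": rng.uniform(18, 35, 150)})
df["bodyfat"] = -20 + 1.7 * df.bmi + rng.normal(0, 4, 150)

fit = smf.ols("bodyfat ~ bmi", data=df).fit()

new = pd.DataFrame({"bmi": [22.0, 28.0]})
pred = fit.get_prediction(new).summary_frame(alpha=0.05)

# 'mean' is the predicted value; 'obs_ci_*' bound where a single new
# observation is likely to fall (the prediction interval), which is wider
# than the confidence interval for the mean ('mean_ci_*').
print(pred[["mean", "mean_ci_lower", "mean_ci_upper",
            "obs_ci_lower", "obs_ci_upper"]])
```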

Tutorial: How to Check the Regression Assumptions and Fix Problems

Like any statistical test, regression analysis has assumptions that you should satisfy, or the results can be invalid. In regression analysis, the main way to check the assumptions is to assess the residual plots. The following posts in the tutorial show you how to do this and offer suggestions for fixing problems; a short code sketch follows the list.

  • Residual plots: What they should look like and reasons why they might not!
  • How important are normal residuals: If you have a large enough sample, nonnormal residuals may not be a problem.
  • Multicollinearity: Highly correlated predictors can be a problem, but not always!
  • Heteroscedasticity: You want the residuals to have a constant variance (homoscedasticity), but what if they don’t?
  • Box-Cox transformation: If you can’t resolve the underlying problem, Cody Steele shows how easy it can be to transform the problem away!
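As a minimal sketch of these checks (simulated data, with Python standing in for Minitab's residual plots and Box-Cox option), the residuals-versus-fits plot, the normal probability plot, and a Box-Cox transformation of a skewed response look like this:

```python
# Minimal sketch: residual plots and a Box-Cox transformation.
# Python stand-in for Minitab; data are simulated with a skewed response.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(4)
df = pd.DataFrame({"x": rng.uniform(1, 10, 200)})
df["y"] = np.exp(0.3 * df.x + rng.normal(0, 0.3, 200))  # skewed, positive response

fit = smf.ols("y ~ x", data=df).fit()

# Residuals vs. fitted values: look for a random scatter with constant spread.
plt.scatter(fit.fittedvalues, fit.resid)
plt.axhline(0, color="gray")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()

# Normal probability (Q-Q) plot of the residuals.
sm.qqplot(fit.resid, line="s")
plt.show()

# If nonconstant variance or nonnormality persists, a Box-Cox transformation
# of the (positive) response is one remedy; scipy estimates lambda.
y_transformed, lam = stats.boxcox(df["y"])
print("Estimated Box-Cox lambda:", lam)
```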

Examples of Different Types of Regression Analyses

The final part of the regression tutorial contains examples of the different types of regression analysis that Minitab can perform. Many of these regression examples include the data sets so you can try them yourself!

 

How to Be a Digital Platform Leader

by Martha Lagace

All of the most valuable firms in the world today are platforms, starting with Apple, Microsoft, Google and Amazon. But platforms do not evolve in predictable ways, and there is a lot that managers and entrepreneurs can learn about past, present, and future platform strategies.

The Business of Platforms: Strategy in the Age of Digital Competition, Innovation, and Power, a new book by Harvard Business School Professor David B. Yoffie and coauthors Michael A. Cusumano and Annabelle Gawer, sheds light on the challenges and opportunities posed by digital platforms.

Across seven chapters the authors explain the fundamentals of platforms, different strategies and business models, common errors, and platform battlegrounds of the future that involve competing technologies and implications for organizations. There is advice for traditional firms looking to build or join platforms, as well as for entrepreneurs and startups. The authors discuss issues of power and of managing privacy, fairness, and public trust.

Martha Lagace: What trends are you seeing around platforms?

David Yoffie: The first question one must ask is: Are platforms the dominant business model of the twenty-first century? Today, the world’s largest taxi company (Uber) owns no cars; the world’s largest provider of accommodations (Airbnb) owns no real estate; and the world’s largest retailer (Alibaba) owns no inventory.

Modern platform thinking has been evolving for the past 30 years. Academic and practitioner interest in the subject was initially stimulated by the explosive growth of the Microsoft Windows operating system. The real value of Windows, we learned, was not about the product, per se, but the applications written by independent software vendors. As a result, most of the early research on platforms was focused on what we call today innovation platforms.

With the emergence of Amazon, eBay, and other firms in the late 1990s and early 2000s, it was clear that a very different kind of technology platform was emerging, which we call transaction platforms. These platforms have their antecedents going back thousands of years to bazaars, but technology has enabled platforms to be globally scalable, which had never previously been possible.

While many people lump innovation and transaction platforms together, we argue that they are very different animals. Furthermore, in the last 10 years, we have also seen a new type of platform company emerge, which we call hybrids. A hybrid is a company that has both an innovation and a transaction platform operating simultaneously within the organization. In some cases they’re deeply linked. In other cases they are simply separate pieces of an organization that operate under a corporate umbrella. For example, Google Search is a classic transaction platform that connects end users to advertisers, and the Android operating system is a classic innovation platform that enables third parties to write new applications that create value for Android phones.

“While many people lump innovation and transaction platforms together, we argue that they are very different animals.”

We argue in The Business of Platforms that hybrids like this will become more important over time. Even if you’re running a pure transaction platform today, you may see the opportunity to create open interfaces—APIs, or application programming interfaces—that enable third parties to build on your platform. Even highly focused transaction platforms like Uber have opened up APIs to enable third parties to add value. This is an inevitable trend of digitization.

Lagace: In your research, you identified 209 platform failures between 1995 and 2015 and about 45 successful firms for the same period. What are drivers of success?

Yoffie: The most important driver of success is network effects. Network effects create the opportunity to scale at incredible speed and the potential for winner-take-all or winner-take-most businesses. But network effects are not enough. It is a common misconception that any business with strong network effects will produce a winner-take-all outcome. The evidence suggests that simply isn’t true. Network effects are a necessary but not sufficient condition.

Beyond network effects, successful platforms make it hard for their users to multihome. Multihoming means that users can participate on multiple platforms at the same time. In the old days, it was hard to use both a Windows PC and a Mac because of incompatibilities. Whenever there are switching costs, multihoming is hard, which makes it complicated for firms or individual users to move from one platform to another.

The next criterion we identify is relatively homogeneous products and a lack of identifiable niches. The more heterogeneous the market, the more fragmented it inevitably becomes, which lowers the likelihood of a winner-take-all outcome.

Lastly, the truly successful firms in a platform world tend to have meaningful supply-side scale economies. Markets with large economies of scale generally have higher barriers to entry, and they are more likely to tip.

Lagace: Since most platform launches fail, what mistakes should managers and entrepreneurs avoid?

Yoffie: We see four common problems across the data. The number one problem is how to price the product. The vast majority of platforms require subsidies on one or both sides of the platform for some period of time. Failure to get that pricing right inevitably leads to decline. You can see examples in businesses like ridesharing. The first player in the space was a company called Sidecar that never was able to figure out the right pricing to drive market demand or attract new drivers to the platform. Sidecar ended up a casualty of both Uber and Lyft’s aggressive pricing policies.

A second common mistake is the failure to build, establish, and maintain trust. Platforms by their very nature require parties who may not know each other, and have no reason to trust each other, to do business together. Without trust, many transactions will never materialize. A crucial feature of any platform is creating a trust environment that allows independent parties to feel comfortable that they’re not putting their business, operation, or personal wealth at risk.

A third mistake is mistiming the market. It’s possible to be too early, which is not often the big problem, but it is more problematic to be too late. This is because of the power of network effects and the power of platforms to scale very quickly. Even if a firm has a better product, if it is too slow in developing customers on each side of the platform, it can still lose out. A great example is Microsoft’s failed effort in smartphones. Microsoft built a very good operating system for smartphones, but it could not crack the market because its product came out five years after the iPhone and four years after Android. By then, there were already hundreds of thousands of applications written for the other operating systems. Even though Microsoft may have had the best of the three operating systems, it didn’t matter because it was simply too late.

The fourth common mistake is hubris. When firms are very successful in the early stages of a platform, they often think the market has tipped and that they don’t need to worry about competition and new technology. They lose their paranoia. The reality is that even in markets with strong network effects, it’s possible for competitors to overturn a leader’s advantage.

Lagace: What advice do you have for conventional firms looking to explore platforms?

Yoffie: There are three potential strategies. You can belong to an existing platform, buy a platform if time to market is critical, or build one if you want to control your ecosystem.

None of these strategies are without risk. Belonging to a platform is a way to quickly participate in a platform market. The challenge is to avoid the problem of “hold up” by the platform itself—that is, how do you prevent the platform from extracting most of the value? How do you prevent it from observing what you do and then simply copying it? Nonetheless, belonging to a platform is a way to engage in a platform market quickly at relatively low cost and learn the tools of the trade. In the book, we explore a number of “belong” strategies which have delivered excellent results.

Buying a platform is a higher-risk, higher-cost strategy, but it accelerates engagement in platform businesses. For some companies, it may be the best way to get the talent and culture required to operate a platform. The example we use in the book is Walmart trying to compete with Amazon. Walmart largely failed in the platform retail business for 20 years. Its acquisition of Jet.com, however, let Walmart aggressively expand its platform revenue and bring in a team that understood platforms. Although Walmart remains far behind Amazon, the Jet.com acquisition finally turned it into a serious online competitor.

Lastly, the hardest problem for a conventional firm is trying to build its own platform. To be honest, very few firms have been successful, though a lot have tried. Under the right circumstances, it is an option that large, established firms need to put on the agenda. In the book, we discuss General Electric’s efforts to build its Predix platform. Predix was clearly going to be a multiyear, maybe multibillion-dollar investment, and one that was increasingly difficult for GE to afford. But it’s a great example of recognizing the potential that platforms can create even within a very traditional, industrial business. We think the basic design of Predix was heading in the right direction, but the challenges in execution have been severe.

Lagace: A central concern of The Business of Platforms is the double-edged-sword nature of platforms and their impact on business.

Yoffie: Platforms really are double-edged swords. They are some of the most valuable, efficient ways to organize commerce, and they are also a potential source of violence, disinformation, antitrust abuse, worker abuse, racism, misogyny, and the list goes on. In other words, the fundamental challenge we see in platforms is that they are vehicles for good as well as evil. And the vast majority of platforms in the last 10 years were only focused on the good and not on the potential for evil.

There are two theories about how to think about platforms today. We clearly are in one camp and not the other. One theory is that digital platforms are like the public square. Anybody should be allowed to do anything: it’s a world of free speech, and let the buyer beware.

The other theory is that these are environments that reflect the values and philosophy of the companies themselves. They are not a public square, and companies therefore have roles and responsibilities to their communities. Platforms have to maintain the efficiency on one hand and reduce the potential for evil on the other. This is a controversial position.

We write about a number of different elements of this problem. One, obviously, is the antitrust problem that has become a big issue in the current US elections. We argue that once these firms become dominant, many engage in bullying behavior. Some of these actions were perfectly legal when the companies were small, but the same actions became unnecessary (and in some cases illegal) once the big platforms became dominant. Nonetheless, leading platform firms have been slow to adjust. A lot of the antitrust problems could go away if these firms thought of themselves internally as dominant players, rather than continuing to operate as if they were entrepreneurs building the business from scratch.

“Twitter, YouTube, and Facebook … are going to have to find ways to curate, ways to say certain kinds of activities are unacceptable, and if users are not happy, they should just go to another platform.”

Probably the most controversial thing we discuss in the book is the question of whether platforms need to be curated. Curation would try to eliminate disinformation, fake news, promotion of violence, and so on. Is it up to the platform to prevent that activity from occurring? Should it be done by third parties, or by members of the platform community policing themselves? Facebook, for example, has about 30,000 people working on curation of the Facebook platform. The problem is that there are more than 2.5 billion users. Thirty thousand people, even with the help of AI and sophisticated big data algorithms, simply cannot keep up.

We argue that curation is going to need to become more important. There will be costs associated with it, but ultimately it’s about establishing and maintaining trust. If the platform loses the trust of one or both sides of the market, it will disappear. And ultimately it’s the platform’s responsibility to ensure that its users can trust the platform itself.

Yes, it is a philosophical change in approach. These are companies that were built as the town square, with the philosophy of “we don’t need to worry about this misuse because the ecosystem will take care of itself.” Potentially restricting free speech is anathema to many of the users as well as many people inside the company.

It is a wrenching problem. If you look at Twitter, YouTube, and Facebook, the three biggest platforms where this has become a serious problem, all three of them are struggling to address this challenge. Our argument is that they are just postponing the inevitable. They are going to have to find ways to curate, ways to say certain kinds of activities are unacceptable, and if users are not happy, they should just go to another platform. But that does mean giving up some potential revenue, which is also difficult for a public company to do.

About the Author

Martha Lagace is a writer based in the Boston area.

Corporate Innovation Increasingly Benefits from Government Research

by Michael Blanding

Nearly a third of US patents rely directly on government-funded research, says Dennis Yao. Is government too involved in supporting private sector innovation—or not enough?

Innovation has always relied, to some degree, on government support. But a recent study suggests that public funding might be even more influential than it seems.

“Nearly a third of US patents rely directly on US government funded research,” says Dennis A. Yao, Lawrence E. Fouraker Professor of Business Administration and co-head of the Strategy Unit at Harvard Business School.

Consider that between the 1950s and 1980s, Uncle Sam’s spending on research and development (R&D) rose fivefold from less than $20 billion to more than $100 billion a year, about equal to corporate R&D spending.

“If more inventions are building on federal grants, it suggests that support is becoming more important to research generally.”

Since then, corporate spending has continued to rise, while government funding has leveled off. By 2016, businesses accounted for 69 percent of all R&D spending, while the US government provided just 22.5 percent, according to the Congressional Research Service. Higher education, nonprofits, and nonfederal government entities contributed the remaining 10 percent.

“The question that naturally arises is, ‘What is the role of government in fostering innovation within this changing environment?’” Yao says.

Its role remains significant, according to new research by Yao and several colleagues. Despite spending relatively less, the government funds innovations that really matter to the American economy.

The research, published in the journal Science in June, was spearheaded by University of California, Berkeley, engineering and business professor Lee Fleming, and also included University of Connecticut law professor Hillary Greene, Guan-Cheng Li of Berkeley, and Boston University strategy professor Matt Marx.

Who’s funding patents?

To get a handle on how government funding fuels innovation, the researchers took advantage of new patent data from the US Patent and Trademark Office, which recently began including patent filers’ acknowledgments in its database. Those acknowledgments usually cite funding sources, which helped researchers identify patents sponsored by government agencies, such as the National Science Foundation (NSF) or the National Institutes of Health (NIH).

“Previously, it was possible to figure out what patents came directly out of government research labs,” Yao says. “Now we can see what patents came indirectly from government support as well.”

That includes patents that were supported by federal grants and patents that relied on previous patents or publications that the government funded, directly or indirectly. (Yao and his colleagues’ research was itself funded by an NSF grant.)

By examining a database of all patents filed since 1926, the team found that the percentage of patents involving government support has risen steadily. While government R&D spending as a percentage of gross domestic product has declined since the mid-1980s, the share of patents that received any government support rose from 12 percent in 1980 to a high of 30 percent in 2011, before dipping slightly to 28 percent in the years since.

“If more inventions are building on federal grants, it suggests that support is becoming more important to research generally,” Yao says.

Yao and his colleagues also examined patent citations of previous patents to find out how past inventions influenced future innovations. Among patents granted to companies in 2010, those that benefited, directly or indirectly, from federal largesse were cited 6.33 times, on average, in the next five years, compared to 4.42 citations for patents that didn’t receive government help.

The results held even when researchers compared patents that involved similar technology, were filed around the same time, or had a similar number of inventors. In those cases, government-funded inventions received 3.39 more citations, on average, than those without.

“This result suggests that government-funded patents are more important, reinforcing the idea of government-funded innovation as a driver for the economy,” Yao says.

Who benefits from government funding?

Companies, not lone inventors or academic institutions, benefited most from government money; they filed the vast majority of the patents that Yao and colleagues studied. Startups were particularly dependent, relying on federally supported research for some 35 percent of all the patents they filed.

“Startups were particularly dependent on government-funded research…”

While the paper doesn’t explicitly examine why government-funded patents are so important, Yao speculates that government institutions, relative to companies, tend to fund broader scientific initiatives that are more likely to lead to more novel discoveries.

Government funding is paying off

Taken as a whole, the paper provides a strong rationale for the government to continue—if not increase—its level of investment in scientific research.

“In the political environment, research funding is frequently a target for cuts,” says Yao, noting that voters are more likely to feel the immediate consequences of shrinking human services than the long-term benefits of research spending, whose outcomes might be years away. “Voters don’t get as angered about such cuts.”

The research can’t predict what would happen if the US government slashed funding significantly. But the study shows that, at least for now, government funding, dollar for dollar, fuels innovation more effectively than non-government spending.

“The data certainly suggest that the current level of government funding of research is paying off,” says Yao. “Maybe we could get even more of a benefit if we spent more.”

About the Author

Michael Blanding is a writer based in the Boston area.
