Many individuals who find themselves managing labor programs do not have formal training in engineering concepts that are incredibly helpful to ensuring their success and the success of their company. The Toolbox aims to cover one of these concepts each month, providing useful instruction, templates, and tools that you can put into practice.

This month’s tool download: Poka-yoke Decision Support Tool

In this month’s installment of The Toolbox, we will look at a tool that process improvement experts use during the Improve phase for an existing process – where the DMAIC approach (Define, Measure, Analyze, Improve, Control) is used – or the Design phase of a new process that is to be developed – where the DMADV (Define, Measure, Analyze, Design, Verify Design) approach is used.

Have you ever observed a process in your business produce a product that you didn’t believe was up to your company standards? Have you ever witnessed a customer not receive the service that your company wants to provide to every customer? Of course you have – humans and machines alike are not perfect, and mistakes occur for many reasons. But how do we reduce defects, errors and mistakes without adding additional, non-value-added inspection and quality control labor into our operations? This is where poka-yoke can assist.

So what is poka-yoke?

Poka-yoke is the Japanese term for 'mistake proofing,' and the idea is that quality control is built into the process, rather than added as an additional inspection step. Would you rather add labor to your operation to identify mistakes and assign rework, or would you rather improve the process and minimize the mistakes in the first place? Poka-yoke solutions are often the simplest available, and they provide benefits beyond eliminating mistakes: decreasing fixed activity time like setup, increasing safety, reducing the learning curve for new employees and improving employee attitudes, among others.

Examples of poka-yoke

So how do we deploy poka-yoke methodology into our operations? First, let’s make sure we understand a couple of simple examples of what poka-yoke is:

Manufacturing

  • Jigs and stops on machines that ensure when an operator inserts raw materials, the positioning dictates the measurements of the cut
  • Interlocking switches that prevent the machine from operating until a mechanical aspect is in a particular position. For example, a saw that will not operate until a bar is fully depressed, and the only way to fully depress the bar is to close it in such a way that ensures the operator's hands are not in the cutting area

Retail

  • Automatic change-dispensing mechanisms that calculate and dispense the coinage due to the customer based on the transaction cost and the tender presented to the cashier
  • Point of Sale systems that prevent restricted items, like alcohol or medication, from being purchased until a form of identification is scanned and a confirmation is entered that the customer matches the photo on the identification

Healthcare

  • Sponge-counter bags utilized during surgery that have single-sponge compartments for sponges that have been used and removed from the patient. These provide a quick visual count of the sponges removed from the patient to ensure none are left in the patient
  • Automatic wheelchair brakes that engage when the chair is unoccupied. These chairs will only move when a patient is seated or a hand lever is pulled, preventing the chair from moving when vulnerable patients attempt to enter the chair

In the examples above, the operator or employee cannot proceed with the process until those steps are completed, ensuring items like measurements, safety, customer service and regulatory compliance are built right into the system. Poka-yoke can apply to your personal life as well. My favorite personal example is a tactic I employed when I began driving. In my excitement to take the car out, I would often forget to grab my wallet (I had always been used to just storing my money in my pocket, and the wallet was new to me). To avoid this, I started storing my car keys in the open billfold of the wallet – it was impossible for me to leave in my car without touching my wallet.

Activity- and service-driven poka-yoke

So can poka-yoke be applied across industries and individual processes? Of course it can, but different industries and processes will apply different types of poka-yoke. Shigeo Shingo, the creator of the term and methodology, identified three types of poka-yoke when he developed the tool for activity-driven processes. They are as follows:

  1. Contact Method: Identifies product defects related to physical attributes (e.g., color, size, shape, etc.) through sensing devices (the first manufacturing example above is a contact method poka-yoke)
  2. Fixed-Value: Alerts the employee if a certain number of items, movements or processes are not made, usually in a process where the same activity is performed repeatedly (the first healthcare example above is a fixed-value poka-yoke)
  3. Motion-Step: Determines whether the prescribed steps of a process have been completed in sequence, usually in a process with several, distinct activities (the second retail example above is a motion-step poka-yoke – the employee must scan the item, then scan the ID, then press a button confirming they believe the ID matches the individual)

So we have three types of poka-yoke that apply to activity-driven processes, but what about service applications? The customer engagement aspect of any process is also prone to mistakes, and there are poka-yokes that we can apply here as well. A misconception here, though, is that only the employee (often referred to as the 'Server' in this situation) can make the mistake. That is incorrect, as the customer can (and often will, given they enter this process fewer times than the Server) make mistakes as well. Within the two classifications (Server and Customer), we also have three types of poka-yoke.

The Server poka-yokes are as follows:

  1. Task: Determines if a service task has been completed correctly (the first retail example above is a task poka-yoke)
  2. Tangible: Ensures the impression made on the customer (e.g., environment cleanliness, appearance, etc.) is in alignment with company standards (an example may be a full-length mirror placed by the timeclock in a retail environment with a poster displaying what the employee appearance should look like)
  3. Treatment: Encourages optimal social interaction between the Server and Customer (e.g., greeting, smile, questions, etc.) (an example may be a message on the point of sale screen that displays after the first item is scanned, notifying the employee to ask the customer if they found everything they were looking for)

The Customer poka-yokes are as follows:

  1. Resolution: Endeavors to remind customers their input is valuable to continually improving the business (an example may be providing ‘loved it’ and ‘didn’t love it’ waste baskets at a sampling station)
  2. Preparation: Attempts to fully prepare the Customer before they ever enter the service encounter (an example may be a sign communicating the wait time from that particular point in line for a single-queue style line)
  3. Encounter: Ensures that the Customer understands, remembers and/or pays attention to their roles in the service encounter or the nature of it (an example may be requiring the customer to insert a coin to utilize a shopping cart, which reminds them it is their role to return the cart to a proper location to get their coin back)

Build mistake-proofing into your processes

So now that we have identified the different poka-yokes available for activity and service based processes, how do we identify opportunities to build mistake-proofing into the process? The tool provided in this installment (download link provided at the top of this post) is a process flowchart, which is a type of Decision Support System that will help you identify the steps to follow based on a series of questions.

The first question to ask is rather obvious but often overlooked – are there recurring mistakes in this process that we must address? Remember, the entire point of poka-yoke is to simplify the process by building mistake proofing into the process. Are your customer-engagement scores consistently low? Is your shrink in a particular department higher relative to other departments? Do you routinely have noncompliant temperature readings on production food items? These are critical business processes where mistakes are occurring somewhere. Once you have identified the activity producing the undesirable mistake (perhaps through the use of a Fishbone Diagram?), you can deploy this installment’s tool to identify what type of poka-yoke is applicable.

Work through the flowchart, answering each question (e.g., Is the process activity- or service-based?) until it recommends a type of poka-yoke to deploy (e.g., contact method, fixed-value, treatment, etc.). Once you have determined the source of the mistake and the poka-yoke that may address it, go observe the process in question, ensuring that you understand your company's Standard Operating Procedure (SOP) so you know exactly what you should be seeing. Watch it once. Watch it twice. Watch it until the mistake is so apparent that you can't stand not addressing it. Put yourself in the shoes of the employee and ask yourself 'what would I wish I had to make this easier?' The issue may be that the employee is not following SOP. The issue may be that the cause of the mistake is the SOP itself. Regardless, poka-yoke can be applied.
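If it helps to see the branching logic in one place, here is a minimal sketch of the decision flow in Python. The question wording, branch order and function name are illustrative assumptions based on the poka-yoke types covered in this post, not the actual logic of the downloadable tool.

```python
# Illustrative sketch of the poka-yoke decision flow; not the downloadable tool.

def recommend_poka_yoke(process_type: str, subtype: str) -> list:
    """Suggest candidate poka-yoke types.

    process_type: 'activity' or 'service'
    subtype: for activity processes -- 'physical_defect', 'count' or 'sequence';
             for service processes -- 'server' or 'customer'
    """
    if process_type == "activity":
        return {
            "physical_defect": ["contact method"],  # defects tied to color, size, shape
            "count": ["fixed-value"],               # repeated items, movements, processes
            "sequence": ["motion-step"],            # prescribed steps completed in order
        }[subtype]
    if process_type == "service":
        return {
            "server": ["task", "tangible", "treatment"],
            "customer": ["resolution", "preparation", "encounter"],
        }[subtype]
    raise ValueError("process_type must be 'activity' or 'service'")


print(recommend_poka_yoke("activity", "sequence"))  # ['motion-step']
```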

Return to the tool from this installment, where numerous reference poka-yoke ideas are listed by type on the second page. If nothing else, these reference poka-yokes should serve as a source of inspiration for a poka-yoke that will address your mistake. Remember, the point of the poka-yoke is to build mistake proofing into the process. The idea should be rather simple, make the job easier, and, if not make it impossible, make it very difficult to complete the process if that mistake is still being made.

Many individuals who find themselves managing labor programs do not have formal training in engineering concepts that are incredibly helpful to ensuring their success and the success of their company. The Toolbox aims to cover one of these concepts each month, providing useful instruction, templates, and tools that you can put into practice.

This month’s tool download: Lean Six-Sigma Value Analysis Tool

In this month's installment of The Toolbox, we will look at another tool that process improvement experts use during the Analyze phase for either an existing process – where the DMAIC approach (Define, Measure, Analyze, Improve, Control) is used – or a new process that is to be developed – where the DMADV (Define, Measure, Analyze, Design, Verify Design) approach is used. This analysis tool is known as Value Analysis, and it aims to define each step of the process from the customer's perspective, according to whether they would perceive that step as adding value to the product. Another way to think about this is asking the question "would the customer be willing to pay for this step of the process to be performed?"

Value Analysis has long been used in manufacturing operations, but it is increasingly relevant in the retail space as well and will continue to grow in importance. Retailers are adding more experiential aspects within their four walls – moving production tasks into the store, adding animation and performing demonstrative production tasks – in order to draw customers into visiting their locations. While some of these additions to stores may be written off as loss-leaders, they do not necessarily have to be. This is where Value Analysis can help.

The first step in Value Analysis is to select a process to analyze and break it down into the logical steps necessary to complete it. As for process selection, you should aim to select a process that falls within the group of processes that make up the top 80 percent of your hours according to a Pareto analysis. Once you have selected the process and broken it down into its logical steps, the next task is to define each step as either Value-Added, Value-Enabling or Non-Value-Added.

Value-Added steps must meet each of the following three criteria:

  1. The step transforms the product in a way that moves it closer to the final state
  2. The step is unique and does not represent rework to correct previous steps performed incorrectly
  3. The customer cares that the step is performed and indeed would pay for it

If the step does not meet every one of those criteria, then it is not considered a Value-Added step, and will be classified as one of the following:

  • Value-Enabling: These are steps that most likely fail criterion three above (customer willingness to pay for the step), but must be completed for various reasons (e.g., complying with regulation, meeting a business requirement, etc.)
  • Non-Value-Added: These are steps that fail one or both of the first two criteria above, representing what is typically considered waste in a process (e.g., inspection, rework, travel, etc.)

Determining whether a step is Value-Enabling or Non-Value-Added can be challenging, but you need to adhere to strict definitions of the seven wastes. Even if the product must be moved from the production area to where customers will purchase it, that step is still Non-Value-Added, as the step represents transportation.
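To make the three-question test concrete, here is a minimal sketch of the classification logic in Python. The function and argument names are illustrative assumptions, not part of this installment's tool.

```python
# Illustrative sketch of the Value Analysis classification described above.

def classify_step(transforms_product: bool,
                  is_unique_not_rework: bool,
                  customer_would_pay: bool) -> str:
    """Classify a process step as Value-Added, Value-Enabling or Non-Value-Added."""
    if transforms_product and is_unique_not_rework and customer_would_pay:
        return "Value-Added"
    if transforms_product and is_unique_not_rework:
        # Fails only the customer-willingness-to-pay criterion but must still
        # be completed (regulation, business requirement): Value-Enabling.
        return "Value-Enabling"
    # Fails one or both of the first two criteria: waste (transport, rework, etc.).
    return "Non-Value-Added"


# Moving finished product to the sales floor fails criterion one (transportation).
print(classify_step(False, True, False))  # Non-Value-Added
```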

So now that we have classified each step of the process, how do we use this information? The first step is to determine if the Non-Value-Added steps can be eliminated altogether. If they cannot, then you should look for ways to reduce the time it takes to complete them. While this doesn't eliminate the Non-Value-Added steps, it reduces their overall share of the end-to-end process. The same time-reduction analysis is then performed with the Value-Enabling steps, and even the Value-Added steps. But, where do we start in terms of designing the potential future state?

Depending on the Non-Value-Added tasks you are aiming to remove, this can require additional statistical analysis and process redesign. For example, if you are looking to remove an inspection step, you need to determine what the current defect rate is. If the defect rate is unacceptable, then you must collect data and analyze why the defect rate is outside of acceptable limits. If you find that the defect rate is within acceptable limits, then and only then can you consider removing that step.

If you cannot eliminate the steps, the next best thing to do is to look for ways to reduce the time that each step takes. If you already have engineered labor standards in place, you should know exactly how much time each step of your process takes. If you do not, then you will have to build an engineered labor standard through the application of either time study or predetermined time and motion analysis. Once you have this information, target the steps that fall in the Non-Value-Added and Value-Enabling buckets first, zeroing in on the steps that require the most time. After that, analyze even the Value-Added steps to determine if efficiency opportunity exists.

The tool provided in this installment (download link provided at the top of this post) will allow you to list your steps, perform your Value Analysis, and document the current time allotted to complete each step. Once you have done this, we can begin to perform analysis on the process to determine if and how we can remove waste. What follows is a very high-level example of how you may use the tool to go about this process (example document available to download here).


Figure 1 – An example of a widget production process that has been broken down into steps. Value Analysis has been completed and process times have been measured to determine how impactful each step is

In the example above, I have broken down the process into steps and performed the Value Analysis according to the directions above. I then entered the process duration for each step in the provided column (this is all conducted on the 'Analysis Sheet' worksheet of this month's tool). Importantly, the entries for Process Duration can be in any unit of time, but they must be consistent. Also, the time must represent what it takes to perform the step for the same volume of product as all other steps. For example, Process Step 0007 – Transport widget to shipping may actually take 1,600 seconds, but 10 widgets are transported at a time. Therefore, I have divided that time by 10 so that the Process Duration reflects a single widget.

Once the data have been entered, the tool will identify whether the step falls within the top 80 percent of the work duration, which is called out in the Falls in Pareto? column. From here, I can start to complete the Future Plan column, utilizing both the Value Analysis designation and the overall time that the step takes to prioritize which steps of the process I will work on.
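For readers who want to see the mechanics, here is a minimal sketch of a "Falls in Pareto?" calculation in Python. The step names, durations and the convention of counting the step that crosses the 80 percent line are illustrative assumptions, not the worksheet's actual formulas.

```python
# Illustrative sketch of flagging the steps that make up the top 80% of process time.
steps = [
    ("0001 Cut widget blank",  "Value-Added",     120.0),
    ("0007 Transport to ship", "Non-Value-Added", 160.0),  # 1,600 s / 10 widgets
    ("0008 Inspect widget",    "Non-Value-Added", 300.0),
    ("0009 Rework defects",    "Non-Value-Added", 240.0),
    ("0010 Repackage widget",  "Non-Value-Added", 160.0),
]

total = sum(duration for _, _, duration in steps)
cumulative = 0.0
# Rank steps by duration; a step falls in the Pareto group if it starts
# before the cumulative 80% line (so the step crossing the line is included).
for name, category, duration in sorted(steps, key=lambda s: -s[2]):
    in_pareto = cumulative < 0.8 * total
    cumulative += duration
    print(f"{name:24s} {category:16s} {duration:6.1f}s  Falls in Pareto? {in_pareto}")
```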

At this point, I can use a variety of process improvement tools depending on the type of step to either eliminate or reduce the time it takes. In the example above, I would focus on steps 0008, 0009 and 0010 first, followed by step 0001 most likely. Again, this depends on the specific process. I would look at these steps and determine if I can eliminate them, and if not, how I can reduce the time it takes to complete each step.

Once I have used my process improvement techniques to analyze each step, I would model the new process steps and again utilize time study or predetermined time and motion analysis to determine the potential new time to perform each step. This may require a pilot of the new process in a location to truly test both the feasibility of the new process design and its impact on the product. When I have collected the new times for each step, I would enter them in the final column, which will provide insights on the additional worksheets in the tool:


Figure 2 – The Metrics – Opportunity Profile worksheet displays the time currently required to complete each step in the total bar, and highlights in green the opportunity time that exists for each step


Figures 3 and 4 – The Metrics – VA Analysis Pie Chart worksheet displays two pie charts, showing the breakdown of the steps across the three Value Analysis categories and their current contribution to the overall time it takes to complete the process, both for the current and potential future state

Value Analysis is just one of many improvement strategies within Lean Six Sigma. Performing this analysis can help you not only identify wastes within the process, but also improve the quality of the products you are presenting to your customers. In an environment that has not undergone significant process improvement efforts, it is typical that truly Value-Added work represents less than 5 percent of the overall work effort.

If performing Value Analysis increases that figure even by a few percentage points, it can greatly reduce costs, increase profitability, improve quality and deliver many other benefits to your operation. As companies invest more and more in labor to ensure customers visit their stores, this type of analysis may be more important than ever.

Many individuals who find themselves managing labor programs do not have formal training in engineering concepts that are incredibly helpful to ensuring their success and the success of their company. The Toolbox aims to cover one of these concepts each month, providing useful instruction, templates, and tools that you can put into practice.

This month’s tool download: Fishbone Diagram and Analysis Frameworks

In this month's installment of The Toolbox, we are going to look at a common problem that teams encounter while managing operations – we have a process that is producing an unintended or undesirable result, but we do not know what the cause is. Whether the process is a food program in a grocer, a customer engagement program in a retail environment, a fabrication step in a manufacturing process or a material handling operation in a distribution environment, unintended or undesirable results such as increased shrink through food waste, decreased customer satisfaction scores, excessive defects or increased pick times are all too common occurrences. If your organization has clearly defined standards and regularly reports on performance against these standards, you will at least be able to identify the problem quickly. However, where do you and your team start in terms of identifying the root cause?

There are a number of industrial engineering and statistical approaches that we can utilize to perform process improvement – Lean, Six Sigma, Business Process Reengineering – but in most examples a critical early step is brainstorming a number of causes that you and your team will need to identify in order to perform proper testing and analysis to identify the root cause or causes. Enter the Fishbone Diagram.

A Fishbone Diagram (also known as an Ishikawa diagram, named after Kaoru Ishikawa, who is considered one of the founding fathers of modern management) is nothing more than a simple, structured approach to brainstorming cause and effect. Again, this is not the tool that will help you identify exactly what the cause of your unintended or undesired result is – it is a starting point for you and your team that will simplify how you begin your improvement process. All too often, a team will meet to discuss the problem at hand and, depending on pre-conceived notions, biased opinions or individuals' areas of expertise, the discussion will turn into a debate at the conclusion of which there is no clear direction or consensus on what the next steps will be.

Using a Fishbone Diagram, we can introduce a uniform process in which everyone knows they will get a turn to list a potential cause. The output of this tool and exercise is merely to list out and categorize all the potential causes. The diagram gets its name from the shape of the drawing. You first list the unintended or undesirable result to the right of the page in a box (the “head”), drawing a long line from the box across the page (the “backbone”). You then draw lines off the “backbone” listing categories of problems that may contribute to the result (the “bones”). Finally, you brainstorm and list potential causes on each “bone” that the team may want to explore.

Figure 1 – An example of a Fishbone Diagram examining the potential causes of a specific product’s low profitability, utilizing The 4 P’s framework on the bones to categorize potential causes
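If you prefer to capture the exercise digitally, a Fishbone Diagram is easy to represent as a simple data structure. The sketch below uses the 4 P's categories from Figure 1; the listed causes are illustrative assumptions, not the actual contents of that figure.

```python
# Illustrative sketch of a Fishbone Diagram captured as a data structure.
fishbone = {
    "head": "Low profitability of Product X",  # the unintended or undesirable result
    "bones": {
        "Product":   ["Inconsistent quality", "Packaging damage in transit"],
        "Price":     ["Undercut by competitor", "Markdowns taken too frequently"],
        "Placement": ["Poor shelf position", "Recurring out-of-stocks"],
        "Promotion": ["Ads not reaching the target customer"],
    },
}

# Print the brainstormed causes grouped by bone for the team to review.
print(f"Effect: {fishbone['head']}")
for bone, causes in fishbone["bones"].items():
    print(f"{bone}:")
    for cause in causes:
        print(f"  - {cause}")
```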

So, now that we understand how to draw our Fishbone Diagram, how do we structure our brainstorming session so that we can easily and simply categorize the potential causes – in other words, what goes on the bones? It most likely depends on the unintended or undesirable result that you and your team are trying to solve for. There are numerous questioning or problem-solving frameworks that you and your team can use to structure your critical thinking process. In Figure 1, we used The 4 P's – Product, Price, Placement and Promotion – to categorize our potential causes.

Using a framework should help the team both stay on topic and think of potential causes that they may or may not have influence over. Depending on the type of unintended or undesirable result that is being solved for, one framework may be better than the others. Popular frameworks include Porter's 5 Forces, The 4 C's and The 4 P's, but there are numerous potential examples. You and your team can even use generic categories that you feel are relevant to your business. The tool provided in this installment (download link provided at the top of this post) includes both a template to create your own Fishbone Diagrams as well as a list of references for available frameworks and when they are most appropriate.

Again, the Fishbone Diagram is just a visual root-cause analysis tool, but it is one that allows you and your team to take a focused approach to brainstorming causes for an unwanted problem. Challenge your team to identify all potential root causes. There is a tendency to focus only on those within your collective control. The root cause of your unwanted problem may very well be outside of the control of those involved in the exercise. When facilitating a Fishbone Diagram brainstorming session, it works well for the leader to fill in the categories ahead of time to ensure a representative who can speak to each category is included.

Remember, completion of the diagram is only one step, and a very preliminary one at that, in solving the problem. Depending on the process that you are using, whether it be Six Sigma’s DMAIC process – Define, Measure, Analyze (the step you would use the Fishbone Diagram), Improve, Control – or other process improvement strategies, the Fishbone Diagram will certainly be a great tool in your toolbox, especially to get the wheels turning on the path to identifying potential root causes and designing tests to address your problems and improve your results.

Many individuals who find themselves managing labor programs do not have formal training in engineering concepts that are incredibly helpful to ensuring their success and the success of their company. The Toolbox aims to cover one of these concepts each month, providing useful instruction, templates, and tools that you can put into practice.

This month’s tool download: The Sample Size Calculator

Welcome to the first installment of a monthly Logile offering, The Toolbox. Through my years working with individuals leading workforce management programs, I have come to realize that many of them have risen through their organizations to ascend into these roles, gaining deep experience about their business, industry and customers along the way. However, in many cases these individuals never received formal training in useful concepts, tools, and approaches as part of their development that can greatly assist them in different facets of their current role. The purpose of this regular offering is to provide you with training and tools that you can put into practice right away to achieve better results in your labor management programs.

So, before we dive into the theory or the tool itself this month, let's discuss a typical challenge posed to those working in labor management programs. Your company is considering a change to a standard operating procedure. Maybe it is introducing new technology to enhance the customer experience, a new marketing strategy, or just changing something for the sake of changing it (we have all been there). In an organization where the labor management team has been integrated into evaluating potential changes to operations (and if yours hasn't, it is time that the leader of your group speaks up), you may be tasked with evaluating the impact of making this type of change. The company has set up a pilot program in a location and you have traveled to observe the new process. And now what?

If you and your organization use a predetermined time and motion system such as MOST, the answer may be simple (and if you do not, please feel free to reach out to learn about the benefits). You observe the process, write your method descriptions, develop your sequence models, and calculate the time for the overall process, later performing some form of extrapolation across the organization to determine the impact of the potential change.

However, what do you do if you do not use a predetermined time and motion system? Furthermore, systems like MOST are only useful when there is motion to study. What if you are trying to understand the impact of a change not concerned with motion, such as a machine processing time or an interaction between a customer and an associate demonstrating a new product? The answer is that you need to perform a time study.

Assuming that you understand the proper approach to designing and conducting a time study, the question still remains – how many times must you observe and measure the process with a stopwatch? The correct answer is, as many times as necessary to achieve the acceptable statistical accuracy prescribed by your organization for such data. But what does that mean?

Time study, along with many other forms of data collection, is a sampling process. This means we can assume that our samples are normally distributed around the unknown population average (the population being all occurrences of this process), with unknown variance.[i] The number of samples that you will need to collect depends upon how large the variance, or difference, is between your samples. Without diving too deep into the statistics or theory, we can utilize statistical approaches related to sample populations to arrive at the following equation for calculating the variance based on your observations:

$$s^2 = \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}{n - 1}$$

where $x_i$ is each observed time, $\bar{x}$ is the sample mean and $n$ is the number of samples collected.

With time study we are almost always dealing with a very small initial sample (we recommend close to 30 initial samples to use in this exercise). Due to this, we must use a t-distribution to estimate confidence intervals (the statistical accuracy prescribed by your organization, mentioned above). This yields the following:

$$\bar{x} \pm t_{\alpha/2,\,n-1}\,\frac{s}{\sqrt{n}}$$

Finally, we can solve for n to determine the total number of samples required – anything beyond the initial collection is the additional number of measurements we need to take:

$$n = \left(\frac{t_{\alpha/2,\,n-1}\; s}{k\,\bar{x}}\right)^2$$

where $k$ is the acceptable error expressed as a fraction of the mean (e.g., 0.05 for within 5 percent).

So now that we’ve concluded our statistics lesson for today, how do we actually use this information?

The first thing that you must do is set up your time study utilizing the proper methodology (i.e., document the entire process, break it down into work elements, define start and end points for each element, etc.). Once you have done that, you must collect an initial sampling of times. We recommend collecting 30 initial time samples. Once we have this data, all that remains is to determine the desired accuracy and start using the provided tool.

This accuracy is expressed in the t-distribution table as Probability (P), which refers to the sum of the two tail areas (right and left) of our normal distribution. Basically, we are defining the odds that any sample falls in the main portion of our bell-shaped graph (between the tails). As we decrease P – increasing the odds that a sample falls between the tails rather than within them – we increase the accuracy of our measurement. However, we also increase the number of samples that we must potentially collect to achieve this accuracy. A general best practice, and what Logile recommends, is to require an accuracy of 95 percent, or P = 0.05.


Figure 1 – An example of a normal distribution (the bell shape) with the tails highlighted in yellow. The tails represent the portion of samples that will fall outside of our accepted accuracy. The lower the P value, the smaller the yellow areas and the higher the odds that a sample falls between them.

So now that we've discussed the statistics that this process is based on, collected our initial samples and determined our desired accuracy, let's explore how to use the tool provided in this installment (download link provided at the top of this post).

The instructions are listed in the document, but we will review them quickly here as well. First, take your samples (in seconds) and type them into the shaded cells in column B (starting in cell B4). Select the desired confidence interval in cell G13 (set by default to 95 percent). Once you have done these two things, any samples beyond acceptable control limits will be highlighted in red – delete these values. Once you have done this, your required sample size will be presented in cell G16, highlighted in green.


Figure 2 – A screenshot of this month's tool, the Sample Size Calculator

What this tool is doing is performing the equations presented above based on every sample that you enter. Practice using the tool by inputting fabricated values and changing the Confidence selection to see how the calculations and Required Samples values change as you increase or decrease the Confidence, as well as how they change as the variance between your values changes.
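If you would like to verify the tool's output independently, here is a minimal sketch of the same calculation in Python, assuming the standard time-study sample size formula from Niebel and Freivalds; the sample values are fabricated for illustration.

```python
# Illustrative sketch of the sample size calculation; not the tool's actual code.
import math
from statistics import mean, stdev

from scipy.stats import t  # critical values of the t-distribution

samples = [14.2, 13.8, 15.1, 14.6, 13.9, 14.4, 15.0, 14.1]  # seconds, fabricated

x_bar = mean(samples)
s = stdev(samples)            # sample standard deviation (n - 1 in the denominator)
n = len(samples)
k = 0.05                      # acceptable error: within +/-5% of the true mean
confidence = 0.95             # i.e., P = 0.05 split across the two tails
t_crit = t.ppf(1 - (1 - confidence) / 2, df=n - 1)

required = math.ceil((t_crit * s / (k * x_bar)) ** 2)
print(f"Required samples: {required}; additional needed: {max(0, required - n)}")
```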

As with many processes related to workforce management, like time study, there is a right way to conduct the exercise to ensure that you produce the most accurate results possible. The implication of not calculating the correct sample size is basing something like a labor standard on data that do not truly represent what is going on in your organization. For processes that occur in great volume, such as register transactions for a retailer, a poor standard based on inadequate measurement can result in either millions of dollars of additional, unnecessary annual labor costs, or not adequately staffing to handle your customer volume. Now you have one more tool in your toolbox to ensure that this is done correctly.

[i] Benjamin W. Niebel and Andris Freivalds, Methods, Standards, and Work Design (McGraw-Hill, 2003), 393.

There are several components involved in building a workforce management system capable of delivering competitive advantage to retailers. If you depict those components in simplified process order, you typically get something like this:

Simplified Workforce Management System Process

While this is a simplified depiction, it highlights the central role of forecasting as the linchpin in the WFM component chain. Unlocking the higher value proposition of all these components relies heavily on accurate forecasting to deliver the upstream and downstream potential of each of the other components.

In this post, we will discuss why that's the case and why new layers of benefits are exposed using artificial intelligence (AI), near real-time data exchange, machine learning (ML) algorithms, faster cloud-based enterprise computing, applied industrial engineering, and smart retailing. Leveraging the potential of emerging technology will unlock competitive advantage for those who invest in it.

What’s new in forecasting?

Faster computing platforms coupled with AI and ML algorithms have led to breakthroughs in forecast accuracy. The combination of better math and faster processing can out-gun older, static approaches to every step of the forecasting process. 

At a basic level, here’s the old process:


Old Forecasting System Process

While older systems allowed you to change the math, it was one-size-fits-all for the strategy in use for a given week: the same data selection and the same math applied to all metrics (sales, items, customers, cases, etc.). The math used for forecasting involves linear time-series algorithms, with averaging or trending, or both. For each event, the system looks back into history and tries to determine how these events change the forecast. If an event has a historic impact of lifting sales 4 percent, then your base forecast sales are increased by 4 percent.
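For illustration, here is a minimal sketch of that older approach in Python: one fixed look-back window, a simple average and a flat percentage lift per event. The window length, values and lift are illustrative assumptions.

```python
# Illustrative sketch of the older, static forecasting approach described above.
weekly_sales = [182_000, 175_500, 179_800, 181_200]  # fixed data set: last 4 weeks

base_forecast = sum(weekly_sales) / len(weekly_sales)  # same math for every metric

event_lifts = {"holiday": 0.04}  # event historically lifts sales 4 percent
forecast = base_forecast * (1 + event_lifts.get("holiday", 0.0))

print(f"Base: ${base_forecast:,.0f}  With event: ${forecast:,.0f}")
```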

When fine-tuned as best you could for an “average week,” this left you with far less accurate forecasts whenever holidays, events, seasonality, weather and promotions were significant. Even recurring events such as pay periods, EBT releases, etc. created misses due to the day-of-week repositioning of these events. It was a hard sell to get managers to “trust the system” when the system could be fairly accurate 60 percent of the time, but wildly inaccurate whenever these variables arose. Meanwhile, the volatility of promotional events, competitive activity and weather has never been more pervasive.

With a library of AI and ML algorithms, the process is more detailed and powerful. At an overview level, here’s the new process:


New Forecasting System Process

Without getting too detailed, the differences stand out. Whereas the old methodology, which is still typical of most WFM forecasting solutions, relies on a single data set (last 4 weeks, last 8 weeks, etc.), the data set for the AI and ML algorithms is both informed by the events and selected appropriately for each algorithm. The data sets are also unique by store and for each metric. Events are handled after the fact in the old process, while they are fully integrated into the data set selection and adjustment process in the new approach. This layered-in approach produces better results in handling events, including transposition for events that fall on different days than their historical occurrences.

Once you do the math and apply the event adjustments, the old process is done. In the new process, the result of each and every algorithm undergoes statistical analysis so that the most statistically reliable algorithm for that metric, for that store, for that type of week is selected. Even then, final macro analysis is applied to finalize the results. The difference is significant: it is akin to picking just one tool from your toolbox versus using and benefiting from the entire workshop of tools.
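Here is a minimal sketch of that selection step in Python: several candidate algorithms compete on held-out history, and the most statistically reliable one wins for a given store, metric and week type. The candidate set and the error measure (mean absolute percentage error) are illustrative assumptions.

```python
# Illustrative sketch of competing algorithms with statistical selection.
from statistics import mean

def backtest_error(forecast_fn, history):
    """Mean absolute percentage error over one-step-ahead backtests."""
    errors = []
    for i in range(4, len(history)):
        predicted = forecast_fn(history[:i])
        errors.append(abs(predicted - history[i]) / history[i])
    return mean(errors)

candidates = {
    "4-week average": lambda h: mean(h[-4:]),
    "8-week average": lambda h: mean(h[-8:]),
    "trend":          lambda h: h[-1] + (h[-1] - h[-4]) / 3,
}

history = [98, 101, 97, 104, 108, 103, 110, 107, 112, 109, 115, 111]  # fabricated
best = min(candidates, key=lambda name: backtest_error(candidates[name], history))
print("Selected algorithm:", best)
```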

Don’t forget that learning algorithms learn. That means your tools are constantly recalibrating, resharpening and reengineering themselves based on your ongoing experience. AI empowers smart systems to get smarter, and ML enables algorithms to learn. If your current system isn’t getting smarter by learning from your history, how do the outputs get better for your stores? Accordingly, early adopters will create a leading competitive advantage.

Improved accuracy is indisputable

Find a vendor who will perform the analysis for you, and your results will be compelling. The newer approach to forecasting consistently wins. Vendors with old functionality will have you focus on higher-level forecasting like weekly sales. They find safety in big numbers that can cancel out daily variances. But the new approach always delivers better results. What’s even more telling is looking closer at daily store sales, daily department-level sales, and interval sales throughout the day. The results become ever more compelling as you look closer and closer at the scheduling impacts.

Translating higher accuracy into hours saved downstream

What's the value of getting just $1,000 more accurate in forecasting per week? Simple store sales per hour (SPH) might initially lead you to calculate that value as 5 to 7 hours, depending on your current production rates. But consider that labor really gets planned at the daily level. So $3,000 over on one day and $2,000 under on another start looking more like 25 to 35 hours misappropriated than 5 to 7. And consider then that labor is most typically scheduled at the department level. The error from department to department can apply an even bigger multiplier to poorly positioned or ill-spent hours.
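To make the arithmetic explicit, here is a quick worked example assuming an illustrative rate of $150 in sales per labor hour; your own SPH will change the numbers but not the point.

```python
# Worked example of the weekly-versus-daily view, assuming $150 SPH (illustrative).
sph = 150.0

weekly_error = 1_000             # net weekly forecast miss, in dollars
print(weekly_error / sph)        # ~6.7 hours: the naive weekly view

daily_errors = [3_000, 2_000]    # over one day, under another: misses don't cancel
print(sum(daily_errors) / sph)   # ~33.3 hours misplaced at the daily level
```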

How many hours might be saved or better put to proper use varies with your current forecast inaccuracy. It’s up to you to determine the value of an hour saved and the value of an hour not spent where it is needed. Even with modest improvements, the numbers roll up into a lot of hours. Some of those hours are saved, some of those hours are better spent. Some of those hours are currently causing overtime. All hours come loaded with some level of benefit expense. Be reasonable in estimating, but brace yourself if you estimate the annual number across your enterprise. The numbers can be eye-opening, and the opportunity is compelling.

It’s an often-quoted statement that “you can’t get a good schedule out of a bad forecast.” It is also very true. Hours overscheduled in one department don’t justify service poorly delivered in another. Spending too many hours in the morning and too few hours in the evening creates inconsistent service expectations, jeopardizes sales and puts customer loyalty at risk. Of course, managers can adjust during the day or week in progress, but how easily does that happen with your current WFM tools, and at what cost? Accurate forecasting reduces variability for the week in progress and allows store personnel to execute your brand, your merchandising plans, and your service standards with far less waste.

The upstream value propositions for labor standards and labor modeling

If the downstream value is clear, how does forecasting improve the upstream process for labor standards and labor modeling? Many companies working to build a WFM platform for competitive advantage have work to do in systems, standards, data, standardized practices, and store-level execution. It is difficult, if not impossible, to address all of these elements at the same time. Long term, there's a need for a paradigm shift to store execution supported by smart system best practices, rather than each manager doing things their own way with unreliable system guidance.

Organizations that have focused on forecasting first can leverage basic approaches to calculating hours and be more deliberate about defining best practices, building engineered standards, and enhancing labor modeling. They'll open the door to modeling standards and drivers into engineered task time while they reduce wasted hours due to forecast inaccuracy.

For store managers, having a system you can rely on starts with an accurate forecast. It’s fundamental to the paradigm shift, and it’s a solid foundation for both upstream and downstream benefits to build upon. If you want competitive advantage through your WFM solution set, start by upgrading your forecast and build from there.

In a previous post, I discussed the impacts of higher labor costs in retail. This post will explore how retailers can address this significant challenge with the latest generation of workforce management (WFM) tools that leverage expanded use of artificial intelligence (AI), real-time data exchange and more robust enterprise computing platforms. These enhancements offer retailers an inventory of new opportunities for early adopters.

Forecasting has taken significant leaps forward

Everyone knows that forecast accuracy is critical, but understanding the nuances can be tricky. For example, what’s the value of making your store just $1,000 more accurate each week in the forecasting process? Simple math would indicate that for an average store, the quantifiable value means getting 5-8 hours closer to the “right” number of hours based on work content and engineered standards. But in reality, the value is almost always two or three times greater since accuracy at the daily department level can lead to misplacing many more hours through changes in product mix at the department level. I’ve never seen a case where an overspent hour in the meat department justifies an hour short of cashier labor on the front end. Getting closer to the “right” number saves hours and delivers better service with less waste. But how can you get there?

Machine learning algorithms have enabled significant improvements in forecast accuracy. With more businesses forecasting earlier to support worker-friendly scheduling timelines further in advance, these enhancements make an incredible difference. This is especially the case when a whole array of algorithms can compete to create the most accurate forecast possible.

The best systems also enable dynamic reforecasting once your forecast and schedule are published. Especially for those retailers scheduling further in advance, this means that you can consider vital inputs on promotional activities and weather for additional fine-tuning and adjustment. Coupled with task-based scheduling, this can give retailers an option to flexibly reallocate task assignments even after the schedule is published in order to best utilize the skills of the team scheduled. The older tools are mostly still wrapped around job-based scheduling and very basic formula-driven forecasting algorithms. Ultimately, the message is clear: stick with outdated functionality at your peril.

Staffing parameters can be better understood and managed

After your labor model calculates raw engineered time (standards applied to volume drivers), a number of processes place/spread hours or modify calculated time to create 15-minute staffing requirements to schedule. Typically, staffing parameters include open and close times, rounding, rounding links, min and max coverage, performance factors, queueing (for service areas), smoothing, etc. Too often, these have remained a black box when creating staffing requirements. New analysis tools make these parameters far easier to understand and manage, with full visibility of the additional hours created over and above that engineered work content.

With the right tools, retailers can expose and manage numerous hours for best-practice task placement and waste elimination. Visual mapping of fixed and variable tasks along with the layered impacts of staffing parameters makes the analysis quicker and easier. It also exposes new layers of optimization.
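As a concrete illustration, here is a minimal sketch of layering two common staffing parameters – rounding and min/max coverage – over raw engineered time to produce 15-minute requirements. The parameter values and rounding rule are illustrative assumptions.

```python
# Illustrative sketch: turning raw engineered time into 15-minute staffing requirements.
import math

raw_headcount = [0.4, 1.3, 2.2, 1.8, 0.9, 0.2]  # engineered minutes per interval / 15

MIN_COVERAGE = 1    # never staff below one associate while open
MAX_COVERAGE = 3    # physical limit, e.g., number of service stations
ROUND_UP_AT = 0.5   # round up to a whole associate at half coverage

requirements = []
for r in raw_headcount:
    staffed = math.floor(r) + (1 if r - math.floor(r) >= ROUND_UP_AT else 0)
    staffed = max(MIN_COVERAGE, min(MAX_COVERAGE, staffed))
    requirements.append(staffed)

print(requirements)  # [1, 1, 2, 2, 1, 1]
added_intervals = sum(requirements) - sum(raw_headcount)
print(f"Hours added over engineered content: {added_intervals * 0.25:.2f}")
```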

Automated scheduling is not just for the front end anymore

Years ago, scheduling systems were adopted mostly to automate the process of writing schedules for front end employees. Front end workload demands correlate directly with interval forecast volumes. This means that once standards are applied (along with rounding, smoothing and a few other staffing parameters), it becomes fairly easy to define scheduling requirements and match those requirements to the employee job and availability pool in order to write a schedule. More complex union or business rules tend to be handled by most systems, and even second- or third-tier systems can usually generate an automated schedule. Some may require more editing than others, but they all offer some convenience over trying to manually build schedules by hand.

For years, the promise has been to carry that functionality over to all departments across the store, or from "wall to wall." It's a promise that few systems have the functionality to fulfill. Why? Developing the requirements is far more complex, and placing those requirements enjoys nothing close to the almost-perfect correlation between activity time and scheduling time that the front end does. Some vendors will say the correct selection of associates is all store-level preference, and not a matter of getting the requirements right. Some will make it easy to copy prior weeks' schedules and argue that's all that is needed. Some just prefer to put their efforts into functionality that drives sales in other vertical markets but will say it's "in their future roadmap." Good luck waiting to get there.

But the fact is that automated, near edit-free schedules can be created across all store departments. Doing so can free managers’ time for higher-value activities, producing schedules that better reflect the week-specific forecasting needs while adhering to company policies, best practices and regulatory mandates. And when combined with task-based scheduling, most retailers find new opportunities to minimize waste through sensible cross-utilization of associates to address service peaks.

The tools don’t end with schedule publication

Most WFM systems' functionality ends with schedule publication and associated reports. Thankfully, this is not true of the latest systems. The timing could not be better, because while retailers have always needed to respond to late-breaking changes in weather, promotions and competitive offerings, dealing with these changes is now compounded by further-in-advance schedule writing to support predictive scheduling requirements. Some of these predictive scheduling requirements make it difficult to adjust schedules without penalties, so the ability to manage both further-out and late-breaking changes with sophistication and precision is more important than ever.

Here's where task-based scheduling offers the ability to reassign work to the crew as scheduled, making smart adjustments as they become needed. It's a given that some variation to plans will occur. However, knowing where your performance stands during the week in progress and having tools to modify those workplans allows your store team to react sooner and in a best-practice manner. Your best managers are likely already making at least some of these changes, but do all of your managers? And do they make strategic adjustments to preserve your brand and service priorities? The right week-in-progress tools can make all the difference.

Continuous improvement without the costly analyst overhead

"Measure what you manage" and "learn from your experience" are key mantras for any WFM implementation. With that in mind, isn't it remarkable that few systems capture and save system-generated forecasts and schedules to compare with the final edited and published versions? If you don't have a strategy or toolset that measures forecast accuracy and scheduling effectiveness along with labor performance, how can you make meaningful progress by leveraging your experience? Here again, AI automation can perform much of this analysis work without dedicating labor analysts to sift through the details and draw meaningful conclusions. The new tools are based on regular, system-based analytics, so you can spend less time producing analysis and more time coaching from it.

Concluding thoughts

While higher labor costs and new employee-friendly policies and restrictions have created new challenges and pressures for both retail companies and for WFM tools, the best of the new systems offer retailers significant opportunities to overcome these challenges. These powerful solutions enable retailers to redeploy managers’ time to sales floor activities, and they create new sources of optimization and competitive advantage for those who are best equipped. In the wake of these exciting technological advancements, retailers are advised to take inventory of their needs and capitalize on opportunities. Start by considering upgrades to the weakest tools in your WFM toolbox, and then let the benefits fund appropriate reinvestments.

So far, we have covered two of the four most common approaches to scheduling. The first was service-based scheduling, which is also known as interval-based scheduling. We discussed supermarket cashiers and baggers as examples of this, where the activity is closely tied to the specific day and time interval when the work occurs. You cannot do the work ahead, and excess service in the morning does not offset inadequate service in the afternoon.

The second approach was non-service or production scheduling. This is where data are captured at a daily or weekly level. Also, the timing of the work activity, as associated with the volume driver demand (e.g., cases stocked), must be defined rather than implicitly tied to historical data. Grocery stocking is a good example.

Our third scheduling approach is a composite of our first and second. Think about a supermarket deli operation, particularly during the evening hours. Much of the deli work is tied to servicing customers with the data (e.g., customers served, or pounds of product handled) tied to the exact timing of when those customers require service.

However, many know that deli is a very intensive department that requires multitasking. In between serving customers at the service counter, deli clerks perform fixed or variable production tasks. Some of those tasks are scheduled at specific times, while others are intentionally executed amid the flow of customers. That dynamic, the combination of interval-specific work along with non-interval specific production tasks, is what makes for this service and production scheduling approach.

I note this because many retail and specialty retail departments rely on this versatile blend of on-demand customer service tasks in combination with production, cleaning or preparation tasks in the background. Some systems handle service tasks well, others handle service tasks or production departments only, but few systems handle both within the same department – and it makes a big difference.

If your system doesn't do this exceptionally well, it may force you to break the department in two and complete separate planning for the "front of the house" (the service portion) and the "back of the house" (the production portion). Yet it will fall short of the optimization you can get with a system that is designed to support both scheduling types within the same department.

As one learns with the basic tools of carpentry, home maintenance or auto repair, you want to use the right tool for the right job or you will probably bash your thumb, bruise your knuckles, or wish you had bought the better toolbox when you had the chance. In our next post, we will introduce the fourth most common scheduling approach. Then, we can move into more detailed discussions on specific system features.

In his series, Scheduling Insights, Dan Bursik provides insights and strategies around effective retail labor scheduling, addressing a diverse array of challenges and topics. To read the previous edition, click here. To search for all editions of Scheduling Insights, click here.

If you are running the Labor Management program for a retail organization, and if you haven’t been disheartened by your workforce management (WFM) provider’s system limitations in its current desktop form, then it is time for you to champion mobile WFM capabilities to your organization.

You know the strengths and shortcomings of your existing WFM suite and how committed your operations team is to using the tools to drive service improvements and effective labor management in your organization. You may be at a basic level of deployment in some areas and a more advanced one in others. Perhaps you have integrated task and communications management, but know there is still more opportunity there. Perhaps you have solid dynamic scheduling (not copy-forward imitation scheduling or anything of that sort) in front end and more basic approaches in your other departments. Maybe your reporting is good, but maybe it seems to be delivered too late to make critical adjustments. Whatever your specifics are, there are some things that you should find very compelling in giving your WFM toolset an upgrade by making it all functional on a mobile platform.

And that includes giving your people tools that operate in real time, focused on the exception-based priorities requiring management attention. Here are the top 10 reasons why your Labor Director may want mobile WFM solutions:

  1. Simplicity Reigns: The best tools only work if they are simple to use in stores. Managers have limited time off the floor, and you need to keep them on the floor, in front of your customers, leading your employees, and making good things happen to fulfill service and brand expectations while meeting your financial goals. Mobile puts all the tools they have available for quick and easy reference on any device, in any place you want them. It keeps your people better connected to the tools that can help them while not being tied to a desktop computer in a backroom or office.
  2. Associates Must Be Engaged: Associate engagement is critical, and managers and associates don't expect to do business with tools that seem 15 years behind the capabilities of the apps they use to stay connected, communicate with friends, navigate to locations, shop online, view media, reference data, or otherwise manage their lives efficiently. And what employee wants to call in to find out when they got scheduled, or to hunt down a manager to request time off that they need for life reasons important to them? With mobile, you'll give your managers a sense of having the right tools to do their job and your associates a new connectivity to the business that you can't get in a non-mobile environment.
  3. Mobile Infrastructure: If your company hasn't made your stores accessible to mobile, your project should no longer need to be the one to pay for that infrastructure. Basic customer support and customer-facing apps have probably put that infrastructure in place or are ready to share that initial investment.
  4. Employee Self-Service is a Win-Win: Employee engagement is not only about the tools but about better, faster and closer communication and collaboration, including managing the more complex skills and availabilities of your associates while scheduling to the needs of your business and, ultimately, your customers. Employee self-service (ESS) functionality via mobile offers associates the ability to collaborate with managers to update availability, request time off, request shift swaps, bid on available open shifts, and even surrender shifts in advance to prevent the no-shows that really derail service. Mobile ESS will be a popular win with associates and managers, and it helps refocus associates on the basic opportunity of skill development and availability management as the basis of scheduled hours.
  5. Accessible Training and Guidance Materials: You are probably the champion of best practices in your organization: Standard Operating Procedures (SOPs) to support business processes, best methods for key work operations documented in Visual Method Sheets or training videos that form the basis of your engineered labor standards, new associate training, effective coaching and so on. But you know that much of this material, good as it is, goes unutilized. And in the absence of a clearly understood best method, your productivity erodes while safety, food safety, service and other quality compromises get made as people deviate from that method. Mobile makes all that material accessible for a new life, even on the sales floor, where it can make a real difference.
  6. Real-Time Business Reaction: It’s no secret that effective workforce management requires effective planning, forecasting and smart scheduling to business requirements. But you know better than anyone in your organization that your WFM tools aren’t finished with the job when you post the schedule. It’s about how your organization works that schedule, makes smart adjustments in real time, navigates changes in business volumes and in the workforce, and narrows the delta between plan and actual through continuous process improvement. All of this requires active use of the WFM toolbox beyond a once-a-week schedule-writing task. Real-time accessibility to the tools on any device, wherever they are needed, is a big step toward those tools being used in stores as intended.
  7. Effective Execution Management: Operational efficiency is about more than the quantity of work being done; you know it’s also about the quality of the work performed and the achievement of your brand’s standards of service and operational excellence. Compliance management is greatly enhanced by task management. But if your task management isn’t mobile, it can’t be nearly as effective as it needs to be: you lose the real-time rollup of compliance reporting, along with the ability to capture pictures, data or other supporting evidence that your task isn’t being pencil-whipped from a desktop.
  8. Streamlined Communications: Good decisions, especially real-time adjustments to labor plans, require good internal store communications. Mobile allows for new breakthroughs here. Multi-channel communication can put quick, text-based, thread-organized conversations to work between management team members, or between department associates as they transition from shift to shift or from a strong close tonight to a strong opening tomorrow morning.
  9. Collaboration: The importance of engaging and supporting District Managers and department specialists has always been clear to you. Without them, your labor guidance gets confused and diluted by separate direction from field merchandisers or from DMs who aren’t fully on board. You understand their need to stay connected to the business, to provide good guidance and to meet business goals. Mobile is the toolset to keep them engaged, support their impact across all their stores, keep best practices visible, and get everyone singing from the same hymn book.
  10. Staying Ahead of the Competition: You know from what you read and what you hear from your labor peers across the industry that mobile isn’t a question of “if”; it’s a question of “when.” You also know that new tools will build on the mobile capability, including automated assistants far more useful than Siri and Alexa. But you also know your organization needs to start walking before competition demands that you run.
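
To make reason 4 concrete, here is a minimal sketch of the kind of eligibility check a mobile ESS back end might run before surfacing an open shift or approving a swap request. Everything here is a hypothetical simplification for illustration: the record structures, field names and rules are assumptions, and a real WFM suite would also weigh labor law, overtime, minor-work restrictions and much more.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical, simplified records for illustration only.
@dataclass
class Shift:
    role: str            # role the shift requires
    start: datetime
    end: datetime

@dataclass
class Associate:
    name: str
    roles: set[str]      # roles the associate is trained and certified for
    availability: list[tuple[datetime, datetime]]  # windows they can work

def fits_availability(associate: Associate, shift: Shift) -> bool:
    """True if the shift falls entirely inside one availability window."""
    return any(window_start <= shift.start and shift.end <= window_end
               for window_start, window_end in associate.availability)

def can_accept(associate: Associate, shift: Shift) -> bool:
    """Basic check before offering an open shift or approving a swap."""
    return shift.role in associate.roles and fits_availability(associate, shift)
```

Because the same check can run for a swap request, an open-shift bid or a surrendered shift being re-offered, associates get an immediate answer on their device instead of waiting for a manager to dig through availability sheets.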

All in all, the reasons for mobile seem clear. Your WFM provider should be ready to guide your journey and help frame your project’s cost-benefit plan. If that company can’t, it’s time to find a strategic partner that can.

Having led labor management efforts in both large and small organizations informs my perspective working for Logile. We see ourselves as a strategic partner to our clients, and we are proud of our offerings, as I am sure my peers at other WFM companies are of theirs.

One of the things we owe our clients is the ability to look ahead, to focus on future opportunities, and to develop and guide the pathways to fulfilling those needs and expectations – in labor management, and more broadly in store planning and retail operations management support, a footprint we are expanding beyond WFM. Our view of the future is that we will see more change at an even faster pace. Competition will only increase in both online and brick-and-mortar environments, and promotions will become more targeted and more personally focused.

Success will entail building and retaining customers in a more competitive environment where more things become commodities for price comparison, and where the differentiation of your brand – in curated assortment, services, price/value, shopping experience (whether self-service or service-assisted) and shopping location (whether brick and mortar, online or both) – is the key to surviving and thriving. No matter how you define it, service will be a compelling differentiator, but with a broader overall meaning: friendly associates cannot make up for out-of-stock conditions or orders delivered too late.

It’s easy to see some businesses laying better groundwork for this future than others. Few companies want the pain of the bleeding edge, but a strategy of simply trying to catch up with the advancements of your competitors is not going to establish a base of strength. As with armies on a battlefield, retail competition is all about position in the market and the tools and tactics to advance and defend it.

At Logile, we believe you should think about mobile as game-changing technology for workforce management and store operations. It’s a fundamental shift in the platform, one that will enable better planning and deliver simpler, clearer guidance to your troops in the stores.

Mobile is revolutionary. It can deliver meaningful alerts and prioritized action items to your people instead of leaving valuable guidance sitting in an inbox. Exception-based alerts requiring management action must flow to your people wherever they are, and that guidance must be focused and directive, empowering them to act.

Such a capability makes it possible for your newer managers to be just as focused on those priority items as your best managers, who leverage their deeper operational experience. Just think about what that means for your brand’s effectiveness and for safeguarding customer service for all your customers: morning, afternoon or evening; weekday or weekend.
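
As a rough illustration of that exception-based flow, the sketch below filters a raw event stream down to the few alerts worth pushing to a manager’s device, most urgent first. The event types, severity scale and threshold are assumptions standing in for whatever rules your WFM platform actually exposes.

```python
from dataclasses import dataclass

# Hypothetical event records; names, metrics and severities are illustrative.
@dataclass
class StoreEvent:
    store_id: str
    metric: str     # e.g. "labor_vs_plan", "task_overdue"
    severity: int   # 1 (informational) through 5 (urgent)
    message: str

def alerts_requiring_action(events: list[StoreEvent],
                            min_severity: int = 4) -> list[StoreEvent]:
    """Keep only exceptions urgent enough to interrupt a manager,
    ordered with the most severe first."""
    urgent = [e for e in events if e.severity >= min_severity]
    return sorted(urgent, key=lambda e: e.severity, reverse=True)

# Example: only the staffing gap survives the filter.
feed = [
    StoreEvent("0042", "labor_vs_plan", 5,
               "Front end is two heads under plan for the next hour"),
    StoreEvent("0042", "task_overdue", 2,
               "Weekly price-audit task not yet started"),
]
for alert in alerts_requiring_action(feed):
    print(alert.message)  # in production, push to the manager's device
```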

Mobility for Store Planning and Execution can:

  • Make any data accessible on any device, wherever and whenever it is needed
  • Engage your people in new and exciting ways, from basic employee self-service interaction to better communication and automated artificial-intelligence recommendations
  • Connect your managers with better and best-practice guidance
  • Support compliance with your service and brand standards
  • Deliver training and coaching to manage the impact of turnover in a changing workforce
  • Meet customer expectations for immediate answers or actions to meet their needs
  • Meet associate expectations for the tools needed to serve your customers well

Look at how mobile convenience has moved into every aspect of modern living. Your children have a tough time imagining a world without the convenience and access of mobile technology. Is there any reason to think that this infrastructure, this platform and virtual language for getting things done in your personal life, will not bear fruit in your business growth? As a vehicle for associate empowerment, mobile is an incredible opportunity to revitalize your efforts to detail and differentiate your operational execution strategies.

The wave of change with mobile can also help organizations that have not yet put focused attention on workforce management, detailing store operational standards, and developing stronger capabilities for workforce planning and brand execution. With new connectivity to your associates, and the ability to take the tools onto the sales floor, mobile offers more than just a fresh face for your WFM toolbox. It’s an opportunity to engage, set new goals, clarify the vision, assess sacred cows, and sharpen tools that have already proven valuable. Not often do you get an upgrade that delivers a new layer of benefit through tools you have already paid for.

At Logile, we work to ensure that our clients are armed with more than buggy whips and a plan to just work last year harder and faster – again. We are committed to advancing operational planning and store execution to support the success of your brand and help deliver your financial objectives.

Mobile has already changed the game. We can help you change with it and enhance your speed, flexibility and service offerings on this innovative platform. You can wait to follow, or you can define your business as a leader on terms that offer new ways to differentiate for success.

We can help you take the next steps, but the timing is up to you. Let’s get the conversation started.