Cloud-Based Analytics for Supply Chain and Workforce Performance

Plex Systems, a developer of cloud ERP for manufacturing, has introduced two new analytic applications designed to give manufacturers insight into supply chain and workforce performance.
The new Supply Chain and Human Capital analytic applications build on the library of applications in the IntelliPlex Analytic Application Suite, a broad suite of cloud analytics for manufacturing organizations.

The Plex Manufacturing Cloud is designed to connect people, processes, systems, and products across manufacturing enterprises. The goal is not only to streamline and automate operations, but also to enable greater access to companywide data. The IntelliPlex suite of analytic applications aims to turn that data into configurable, role-based decision-support dashboards with deep drill-down and drill-across capabilities. The IntelliPlex Analytic Application Suite includes analytics for sales, order management, procurement, production, and finance professionals.

IntelliPlex Supply Chain Analytic Application
The new IntelliPlex Supply Chain Analytic application provides a dashboard for managing strategic programs such as enterprise supplier performance, inventory and materials management, and customer success. Metrics include the following (the first is sketched in code after the list):

  1. On-time delivery and return rates by supplier, part, material, etc.
  2. Production backlog by part group, product type, etc.
  3. Spend by supplier and type, including unapproved spend
  4. Inventory turns and aging based on type, location, etc.
  5. Materials management accuracy, adjustments and trends by type, location, etc.
  6. On-time fill rate, customer lead time, average days to ship, fulfillment by location
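
As a rough illustration of the first metric, the sketch below computes on-time delivery rate by supplier in Python. The record fields and sample data are invented for demonstration; they are not Plex’s actual schema or API.

```python
from collections import defaultdict

# Hypothetical delivery records; field names are illustrative,
# not Plex's actual schema.
deliveries = [
    {"supplier": "Acme", "promised": "2017-03-01", "received": "2017-03-01"},
    {"supplier": "Acme", "promised": "2017-03-05", "received": "2017-03-08"},
    {"supplier": "Baxter", "promised": "2017-03-02", "received": "2017-03-02"},
]

def on_time_rate_by_supplier(records):
    """Fraction of deliveries received on or before the promised date."""
    totals, on_time = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["supplier"]] += 1
        if r["received"] <= r["promised"]:  # ISO dates compare correctly as strings
            on_time[r["supplier"]] += 1
    return {s: on_time[s] / totals[s] for s in totals}

print(on_time_rate_by_supplier(deliveries))  # {'Acme': 0.5, 'Baxter': 1.0}
```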

Read more at Cloud-Based Analytics for Supply Chain and Workforce Performance

Share your opinions with us in the comment box, and subscribe to get updates in your inbox.



Former Microsoft CEO Launches New Tool For Finding Government Data

This Tax Day, former Microsoft CEO Steve Ballmer launched a new tool designed to make government spending and revenue more accessible to the average citizen.

The website — USAFacts.org — was slow and buggy for users on Tuesday, apparently due to heavy traffic. It offers interactive graphics showing data on revenue, spending, demographics, and program missions.

For example, the site prominently features an infographic created to break down revenue and spending in 2014. Revenue is broken down by origin; spending is broken down by what “mission” of government it serves, based on the functions laid out in the Constitution.

It’s a big-picture view of where U.S. tax dollars come from, and how they’re spent. But click on a subcategory and you’re taken to a more detailed, granular view of that spending.

Ballmer didn’t create the site because he was an expert on government data. Quite the opposite, according to The New York Times’ DealBook.

The Times says that Ballmer’s wife was pushing her newly retired husband to get more involved in philanthropy. Ballmer said — according to his own memory, as he described the conversation to the Times — “But come on, doesn’t the government take care of the poor, the sick, the old?”

Read more at Former Microsoft CEO Launches New Tool For Finding Government Data

Have thoughts about this article? Share them in the comment box or contact us for discussion. Be the first to get updates by subscribing.


IBM Datapalooza Takes Aim At Data Scientist Shortage

IBM announced in June that it had embarked on a quest to create a million new data scientists. It will be adding about 230 of them during its Datapalooza educational event this week in San Francisco, where prospective data scientists are building their first analytics apps.

Next year, it will take its show on the road to a dozen cities around the world, including Berlin, Prague, and Tokyo.

The prospects who signed up for the three-day Datapalooza convened Nov. 11 at Galvanize, the high-tech collaboration space in San Francisco’s South of Market neighborhood, to attend instructional sessions, listen to data startup entrepreneurs, and use workspaces with access to IBM’s newly launched Data Scientist Workbench and Bluemix cloud services. Bluemix gives them access to Spark, Hadoop, IBM Analytics, and IBM Streams.

Rob Thomas, vice president of product development, IBM Analytics, said the San Francisco event is a test drive for IBM’s 2016 Datapalooza events. “We’re trying to see what works and what doesn’t before going out on the road.”

Thomas said Datapalooza attendees were building out DNA analysis systems, public sentiment analysis systems, and other big data apps.

Read more at IBM Datapalooza Takes Aim At Data Scientist Shortage

Share your opinions in the comment box, and subscribe to get more updates in your inbox.


How can Lean Six Sigma help Machine Learning?

Note that this article was submitted to and accepted by KDnuggets, the most popular blog site for machine learning and knowledge discovery.

I have been using Lean Six Sigma (LSS) to improve business processes for the past 10+ years and am very satisfied with its benefits. Recently, I’ve been working with a consulting firm and a software vendor to implement a machine learning (ML) model to predict the remaining useful life (RUL) of service parts. What frustrates me most is the low accuracy of the resulting model. Measuring deviation as the absolute difference between actual part life and predicted part life, the model averages 127, 60, and 36 days of deviation for the three selected parts. I could not understand why the deviations from machine learning are so large.
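
To make that metric concrete, here is a minimal sketch of the deviation calculation, using invented numbers rather than the author’s actual dataset:

```python
def mean_absolute_deviation(actual_days, predicted_days):
    """Average |actual - predicted| across observations, in days."""
    return sum(abs(a - p) for a, p in zip(actual_days, predicted_days)) / len(actual_days)

# Illustrative part-life figures (days); not the article's data.
actual = [400, 520, 310, 615]      # observed remaining useful life
predicted = [290, 610, 450, 500]   # model predictions for the same parts

print(round(mean_absolute_deviation(actual, predicted)))  # 114
```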

After working with the consultants and data scientists, it appears they can improve the deviation by only 10%. This puzzles me. I thought machine learning was a great new tool for making forecasts simple and quick; I did not expect it to produce such large deviations. To me, such deviation, even after the 10% improvement, still renders the forecast useless to the business owners.

Read more at How can Lean Six Sigma help Machine Learning?

Leave your comments below and subscribe to get updates in your inbox.


The Analytics Supply Chain

Businesses across many industries spend millions of dollars employing advanced analytics to manage and improve their supply chains. Organizations look to analytics to help with sourcing raw materials more efficiently, improving manufacturing productivity, optimizing inventory, minimizing distribution cost, and other related objectives.

But the results can be less than satisfactory. It often takes too long to source the data, build the models, and deliver the analytics-based solutions to the multitude of decision makers in an organization. Sometimes key steps in the process are omitted completely. In other words, the solution for improving the supply chain, i.e., advanced analytics, suffers from the same problems it aims to solve. Therefore, reducing inefficiencies in the analytics supply chain should be a critical component of any analytics initiative in order to generate better outcomes. Because one of us (Zahir) spent twenty years optimizing supply chains with analytics at transportation companies, the concept was naturally appealing for us to take a closer look at.

More broadly speaking, the concept of the analytics supply chain is applicable outside of its namesake business domain. It is agnostic to business and analytic domains. Advanced analytics for marketing offers, credit decisions, pricing decisions, or a multitude of other areas could benefit from the analytics supply chain metaphor.

Read more at The Analytics Supply Chain

Please leave your opinions in the comment box below and subscribe to get more updates in your inbox.


Socialbakers bakes its data analytics down to a Social Health Index

Can social media analytics be compressed into an elevator pitch?

That was a question Lenovo asked its social analytics firm, Socialbakers. The result, launching today, is a Social Health Index that presents a few top-level indicators of a brand’s standing in social media vis-a-vis any competitors.

“When you’re with a VP, you have to [quickly] give them a very clear idea of where we stand,” Rod Strother, Lenovo’s director of the Digital and Social Center of Excellence, told us. Given that need, Lenovo provided input to Socialbakers for developing the Index.

It offers a single top-level number on a 100-point scale, as well as single numbers representing the client’s — or a competitor’s — social health on Facebook, Twitter, or YouTube. Other platforms will be added at some point, the social analytics firm said.

Additionally, an area graph visually depicts the four groups of data that go into the scores: participation, follower/fan/subscriber acquisition, retention, and shareability.
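
For illustration, a composite score like this could be assembled roughly as follows. The group names follow the article, but the weights and subscores are invented; Socialbakers has not published its actual formula.

```python
# Each subscore is assumed to be pre-normalized to a 0-100 scale.
# Values and weights are invented for demonstration only.
subscores = {
    "participation": 72,
    "acquisition": 55,
    "retention": 81,
    "shareability": 64,
}
weights = {group: 0.25 for group in subscores}  # assume equal weighting

health_index = sum(subscores[g] * weights[g] for g in subscores)
print(f"Social Health Index: {health_index:.0f}/100")  # 68/100
```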

“We find it’s difficult for clients to comprehend all” the statistics in ordinary social analytics reports, Socialbakers’ CEO and co-founder Jan Rezab told VentureBeat.

“It’s very, very complicated,” he said, noting that his firm tracks over 180 metrics for social media.

Read more at Socialbakers bakes its data analytics down to a Social Health Index

Share your opinions with us in the comment box. Subscribe to get updates in your inbox.


The Bank of England has a chart that shows whether a robot will take your job


The threat is real, as this chart of the historical rise and fall of various jobs shows. Agricultural workers were largely replaced by machinery decades ago. Telephonists have only recently been replaced by software programmes. This looks like good news for accountants and hairdressers: their unique skills are either enhanced by software (accountants) or not affected by it at all (hairdressers).

The BBC website contains a handy algorithm for calculating the probability of your job being robotised. For an accountant, the probability of vocational extinction is a whopping 95%. For a hairdresser, it is 33%. On these numbers, the accountant’s sun has truly set, but the hairdresser’s relentless ascent is set to continue. For economists, like me, the magic number is 15%.
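
The quoted probabilities lend themselves to a simple lookup. The sketch below uses only the figures from the article; the structure and the 50% threshold are illustrative, not the BBC’s actual algorithm.

```python
# Probabilities as quoted in the article; the code around them is invented.
automation_risk = {
    "accountant": 0.95,
    "hairdresser": 0.33,
    "economist": 0.15,
}

def verdict(job, threshold=0.50):
    p = automation_risk[job]
    outlook = "likely automated" if p >= threshold else "likely safe"
    return f"{job}: {p:.0%} risk of robotisation ({outlook})"

for job in automation_risk:
    print(verdict(job))
```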

This is another data analysis of jobs that will be phased out over time, and an interesting analysis of historical job data. However, after glancing through the bank report referenced in the article, I am not sure robots are the reason for the job replacement. Jobs could, for example, be moved to cheap labor in foreign countries. The bank report shows only the jobs likely to be phased out due to technological advancement, and people could simply become more productive. So, do not take robots too seriously!

Read more at The Bank of England has a chart that shows whether a robot will take your job

What do you think about this article? Share your opinions in the comment box and subscribe to get updates.


Big data analytics technology: disruptive and important?

Of all the disruptive technologies we track, big data analytics is the biggest. It’s also among the haziest in terms of what it really means for the supply chain. In fact, its importance seems more to reflect the assumed convergence of two trends: massively increasing amounts of data and ever faster analytical methods for crunching that data. In other words, the 81 percent of surveyed supply chain executives who say big data analytics is ‘disruptive and important’ are likely just assuming it’s big rather than knowing first-hand.

Does this mean we’re all being fooled? Not at all. In fact, the analogy of eating an elephant is probably fair since there are at least two things we can count on: we can’t swallow it all in one bite, and no matter where we start, we’ll be eating for a long time.

So, dig in!

Getting better at everything

Searching SCM World’s content library for ‘big data analytics’ turns up more than 1,200 citations. The first screen alone includes examples for spend analytics, customer service performance, manufacturing variability, logistics optimisation, consumer demand forecasting and supply chain risk management.

Read more at Big data analytics technology: disruptive and important?

Share your opinions on this topic in the comment box below and subscribe for more updates.


Data Lake vs Data Warehouse: Key Differences

Some of us have been hearing more about the data lake, especially during the last six months. There are those who tell us the data lake is just a reincarnation of the data warehouse—in the spirit of “been there, done that.” Others have focused on how much better this “shiny, new” data lake is, while still others are standing on the shoreline screaming, “Don’t go in! It’s not a lake—it’s a swamp!”

All kidding aside, the commonality I see between the two is that they are both data storage repositories. That’s it. But I’m getting ahead of myself. Let’s first define data lake to make sure we’re all on the same page. James Dixon, the founder and CTO of Pentaho, has been credited with coming up with the term. This is how he describes a data lake:

“If you think of a datamart as a store of bottled water – cleansed and packaged and structured for easy consumption – the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.”

And earlier this year, my colleague, Anne Buff, and I participated in an online debate about the data lake. My rally cry was #GOdatalakeGO, while Anne insisted on #NOdatalakeNO. Here’s the definition we used during our debate:

“A data lake is a storage repository that holds a vast amount of raw data in its native format, including structured, semi-structured, and unstructured data. The data structure and requirements are not defined until the data is needed.”
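
That “structure is not defined until the data is needed” idea is often called schema-on-read, and a toy version fits in a few lines. The records and field names below are invented for demonstration:

```python
import json

# Ingest: the "lake" accepts records verbatim, with no schema or validation.
raw_events = [
    '{"user": "u1", "action": "click", "ts": "2015-06-01T10:00:00"}',
    '{"user": "u2", "action": "purchase", "amount": 19.99}',
    '{"sensor": "s7", "temp_c": 21.4}',  # a different shape is still accepted
]
lake = [json.loads(line) for line in raw_events]

# Read: structure is imposed only at query time, when the data is needed.
purchases = [r for r in lake if r.get("action") == "purchase"]
print(purchases)  # [{'user': 'u2', 'action': 'purchase', 'amount': 19.99}]
```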

Read more at Data Lake vs Data Warehouse: Key Differences

What do you think about this topic? Share your opinions below and subscribe to get updates in your inbox.



Supply Chain & Big Data ÷ Analytics = Innovation

Google the term “advanced analytics” and you get back nearly 23 million results in less than a second.

Clearly, the use of advanced analytics is one of the hottest topics in the business press these days and is certainly top of mind among supply chain managers.

Yet, not everyone is in agreement as to just what the term means or how to deploy advanced analytics to maximum advantage.

At HP, the Strategic Planning and Modeling team has been utilizing advanced operational analytics for some 30 years to solve business problems requiring innovative approaches.

Over that time, the team has developed significant supply chain innovations such as postponement and award-winning approaches to product design and product portfolio management.

Based on conversations we have with colleagues, business partners and customers at HP, three questions come up regularly – all of which this article will seek to address.

  1. What is the difference between advanced and commodity analytics?
  2. How do I drive innovation with advanced analytics?
  3. How do I set up an advanced analytics team and get started using it in my supply chain?

Advanced analytics vs. commodity analytics

So, what exactly is the difference between advanced analytics and commodity analytics? According to Bill Franks, author of “Taming The Big Data Tidal Wave,” the aim of commodity analytics is “to improve over where you’d end up without any model at all”; a commodity modeling process stops when something good enough is found.

Another definition of commodity analytics is “that which can be done with commonly available tools without any specialized knowledge of data analytics.”

The vast majority of what is being done in Excel spreadsheets throughout the analytics realm is commodity analytics.
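
A toy version of that “stop when something good enough is found” process might look like the sketch below. The candidate models and their scores are invented placeholders, not a real fitting procedure.

```python
# Stand-in for real model fitting: returns an illustrative accuracy score.
def fit_and_score(model_name):
    return {"mean baseline": 0.61,
            "linear trend": 0.78,
            "seasonal model": 0.86}[model_name]

GOOD_ENOUGH = 0.75  # the commodity-analytics bar: better than no model at all

for model in ["mean baseline", "linear trend", "seasonal model"]:
    score = fit_and_score(model)
    print(f"{model}: accuracy {score:.2f}")
    if score >= GOOD_ENOUGH:
        # Commodity analytics stops here; advanced analytics keeps optimizing.
        print(f"Stopping: {model} is good enough.")
        break
```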

Read more at Supply Chain & Big Data ÷ Analytics = Innovation

What do you think about this topic? Write your opinions in the comment box below, and subscribe to get updates in your inbox.
