Big data analytics technology: disruptive and important?

Of all the disruptive technologies we track, big data analytics is the biggest. It’s also among the haziest in terms of what it really means for supply chain. In fact, its importance seems more to reflect the assumed convergence of two trends: massively increasing amounts of data and ever faster analytical methods for crunching that data. In other words, the 81 percent of all supply chain executives surveyed who say big data analytics is ‘disruptive and important’ are likely just assuming it’s big rather than knowing first-hand.

Does this mean we’re all being fooled? Not at all. In fact, the analogy of eating an elephant is probably fair since there are at least two things we can count on: we can’t swallow it all in one bite, and no matter where we start, we’ll be eating for a long time.

So, dig in!

Getting better at everything

Searching SCM World’s content library for ‘big data analytics’ turns up more than 1,200 citations. The first screen alone includes examples for spend analytics, customer service performance, manufacturing variability, logistics optimisation, consumer demand forecasting and supply chain risk management.

Read more at Big data analytics technology: disruptive and important?

Share your opinions on this topic in the comment box below, and subscribe for more updates.


Data Lake vs Data Warehouse: Key Differences

Some of us have been hearing more about the data lake, especially during the last six months. There are those who tell us the data lake is just a reincarnation of the data warehouse—in the spirit of “been there, done that.” Others have focused on how much better this “shiny, new” data lake is, while still others are standing on the shoreline screaming, “Don’t go in! It’s not a lake—it’s a swamp!”

All kidding aside, the commonality I see between the two is that they are both data storage repositories. That’s it. But I’m getting ahead of myself. Let’s first define data lake to make sure we’re all on the same page. James Dixon, the founder and CTO of Pentaho, has been credited with coming up with the term. This is how he describes a data lake:

“If you think of a datamart as a store of bottled water – cleansed and packaged and structured for easy consumption – the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.”

And earlier this year, my colleague, Anne Buff, and I participated in an online debate about the data lake. My rally cry was #GOdatalakeGO, while Anne insisted on #NOdatalakeNO. Here’s the definition we used during our debate:

“A data lake is a storage repository that holds a vast amount of raw data in its native format, including structured, semi-structured, and unstructured data. The data structure and requirements are not defined until the data is needed.”
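That last clause, structure not defined until the data is needed, is the schema-on-read idea, and it can be made concrete in a few lines. A minimal sketch in Python; the records and field names here are invented for illustration:

```python
import json

# Raw events land in the "lake" as-is, with no upfront schema.
raw_events = [
    '{"ts": "2015-06-01", "sku": "A-100", "qty": 3, "channel": "web"}',
    '{"ts": "2015-06-01", "sku": "B-200", "qty": 1}',
    '{"ts": "2015-06-02", "sku": "A-100", "qty": 2, "note": "rush order"}',
]

def units_per_sku(events):
    """Structure is imposed only now, at read time (schema-on-read)."""
    totals = {}
    for line in events:
        rec = json.loads(line)  # parse the raw record on demand
        totals[rec["sku"]] = totals.get(rec["sku"], 0) + rec.get("qty", 0)
    return totals

print(units_per_sku(raw_events))  # {'A-100': 5, 'B-200': 1}
```

Note that the records need not share a schema: missing or extra fields are tolerated until a particular question forces a structure onto them.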

Read more at Data Lake vs Data Warehouse: Key Differences

What do you think about this topic? Share your opinions below, and subscribe to get updates in your inbox.

 


Supply Chain & Big Data ÷ Analytics = Innovation

Google the term “advanced analytics” and you get back nearly 23 million results in less than a second.

Clearly, the use of advanced analytics is one of the hottest topics in the business press these days and is certainly top of mind among supply chain managers.

Yet, not everyone is in agreement as to just what the term means or how to deploy advanced analytics to maximum advantage.

At HP, the Strategic Planning and Modeling team has been utilizing advanced operational analytics for some 30 years to solve business problems requiring innovative approaches.

Over that time, the team has developed significant supply chain innovations such as postponement, as well as award-winning approaches to product design and product portfolio management.

In conversations with colleagues, business partners and customers at HP, three questions come up regularly – all of which this article seeks to address.

  1. What is the difference between advanced and commodity analytics?
  2. How do I drive innovation with advanced analytics?
  3. How do I set up an advanced analytics team and get started using it in my supply chain?

Advanced analytics vs. commodity analytics

So, what exactly is the difference between advanced analytics and commodity analytics? According to Bill Franks, author of “Taming The Big Data Tidal Wave,” commodity analytics aims only “to improve over where you’d end up without any model at all”; a commodity modeling process stops when something good enough is found.

Another definition of commodity analytics is “that which can be done with commonly available tools without any specialized knowledge of data analytics.”

The vast majority of what is being done in Excel spreadsheets throughout the analytics realm is commodity analytics.
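Franks’ “good enough” stopping point is easy to picture. A hypothetical sketch of a commodity-grade baseline: a naive moving-average forecast that merely improves on having no model at all (the demand history is invented):

```python
from statistics import mean

weekly_demand = [100, 120, 90, 110, 130, 95, 105, 115]  # made-up history

def moving_average_forecast(history, window=3):
    """Commodity-grade model: forecast next week as the mean of the last `window` weeks."""
    return mean(history[-window:])

naive = weekly_demand[-1]  # "no model at all": just repeat last week's value
baseline = moving_average_forecast(weekly_demand)
print(naive, baseline)  # baseline forecast: 105
```

A commodity process stops here; an advanced one would ask whether the features driving demand are even in this series to begin with.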

Read more at Supply Chain & Big Data ÷ Analytics = Innovation

What do you think about this topic? Write down your opinions in the comment box below, and subscribe to get updates in your inbox.


Overcoming 5 Major Supply Chain Challenges with Big Data Analytics

Big data analytics can help increase visibility and provide deeper insights into the supply chain. By leveraging big data, supply chain organizations can improve the way they respond to volatile demand or supply chain risk – and reduce the concerns related to those issues.

Sixty-four percent of supply chain executives consider big data analytics a disruptive and important technology, setting the foundation for long-term change management in their organizations (Source: SCM World). Ninety-seven percent of supply chain executives report having an understanding of how big data analytics can benefit their supply chain. But, only 17 percent report having already implemented analytics in one or more supply chain functions (Source: Accenture).

Even if your organization is among the 83 percent who have yet to leverage big data analytics for supply chain management, you’re probably at least aware that mastering big data analytics will be a key enabler for supply chain and procurement executives in the years to come.

Big data enables you to quickly model massive volumes of structured and unstructured data from multiple sources. For supply chain management, this can help increase visibility and provide deeper insights into the entire supply chain. Leveraging big data, your supply chain organization can improve its response to volatile demand or supply chain risk, for example, and reduce the concerns related to the issue at hand. It will also be crucial for you to evolve your role from transactional facilitator to trusted business advisor.

Read more at Overcoming 5 Major Supply Chain Challenges with Big Data Analytics

What do you think about this article? Share your opinions with us in the comment box.


How Big Data And Analytics Are Transforming Supply Chain Management

Supply chain management is a field where Big Data and analytics have obvious applications. Until recently, however, businesses have been less quick to implement big data analytics in supply chain management than in other areas of operation such as marketing or manufacturing.

Of course, supply chains have long been driven by statistics and quantifiable performance indicators. But the sort of analytics that is really revolutionizing industry today – real-time analysis of huge, rapidly growing and very messy unstructured datasets – was largely absent.

This was clearly a situation that couldn’t last. Many factors clearly impact supply chain management – from the weather to the condition of vehicles and machinery – and so executives in the field have recently thought long and hard about how this data could be harnessed to drive efficiencies.

In 2013 the Journal of Business Logistics published a white paper calling for “crucial” research into the possible applications of Big Data within supply chain management. Since then, significant steps have been taken, and it now appears many of the concepts are being embraced wholeheartedly.

Applications for the analysis of unstructured data have already been found in inventory management, forecasting and transportation logistics. In warehouses, digital cameras are routinely used to monitor stock levels, and the messy, unstructured image data provides alerts when restocking is needed.
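The restocking alert described above can, at its simplest, be a threshold check over whatever stock counts the image analysis produces. A sketch under that assumption; the item names and reorder points are invented:

```python
def restock_alerts(stock_estimates, reorder_points):
    """Flag items whose estimated on-shelf count fell below their reorder point."""
    return sorted(
        item for item, count in stock_estimates.items()
        if count < reorder_points.get(item, 0)
    )

# Counts as they might be inferred from warehouse camera images.
estimates = {"pallet_A": 4, "pallet_B": 12, "pallet_C": 1}
reorder = {"pallet_A": 5, "pallet_B": 5, "pallet_C": 5}
print(restock_alerts(estimates, reorder))  # ['pallet_A', 'pallet_C']
```

The hard part in practice is the upstream step, turning images into trustworthy counts; once that exists, the alerting logic itself is commodity.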

Read more at How Big Data And Analytics Are Transforming Supply Chain Management


Automating Big-Data Analysis and Replacing Human Intuition with Algorithms

Big-data analysis consists of searching for buried patterns that have some kind of predictive power.

But choosing which “features” of the data to analyze usually requires some human intuition.

In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.
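That reframing – spans between dates rather than the dates themselves, averages rather than totals – is a feature-engineering step, and it is easy to make concrete. A hypothetical sketch (the promotion dates and profits are invented):

```python
from datetime import date

# Hypothetical promotions: (start_date, end_date, total_profit_over_promotion)
promotions = [
    (date(2015, 1, 5), date(2015, 1, 18), 28000),
    (date(2015, 3, 2), date(2015, 3, 8), 21000),
    (date(2015, 5, 11), date(2015, 5, 31), 42000),
]

def derive_features(promos):
    """Derive the candidate features: the inclusive span of each promotion
    and the average profit per day across that span."""
    features = []
    for start, end, total_profit in promos:
        span_days = (end - start).days + 1
        features.append({"span_days": span_days,
                         "avg_daily_profit": total_profit / span_days})
    return features

for feat in derive_features(promotions):
    print(feat)
```

In this toy data the raw totals rank the May promotion first, but the derived average reveals the short March promotion as the most profitable per day – exactly the kind of feature choice the MIT system tries to automate.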

MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too.

To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets.

Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.

In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions.

In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.

“We view the Data Science Machine as a natural complement to human intelligence,” says James Max Kanter, whose MIT master’s thesis in computer science is the basis of the Data Science Machine.

Read more at Automating Big-Data Analysis and Replacing Human Intuition with Algorithms

What do you think about this article? Share your opinions with us in the comment box, and subscribe to get updates.


The Key to Analytics: Ask the Right Questions

People think analytics is about getting the right answers. In truth, it’s about asking the right questions.

Analysts can find the answer to just about any question. So, the difference between a good analyst and a mediocre one is the questions they choose to ask. The best questions test long-held assumptions about what makes the business tick. The answers to these questions drive concrete changes to processes, resulting in lower costs, higher revenue, or better customer service.

Often, the obvious metrics don’t correlate with sought-after results, so it’s a waste of time focusing on them, says Ken Rudin, general manager of analytics at Zynga and a keynote speaker at TDWI’s upcoming BI Executive Summit in San Diego on August 16-18.

Challenge Assumptions

For instance, many companies evaluate the effectiveness of their Web sites by calculating the number of page hits. Although a standard Web metric, total page hits often doesn’t correlate with higher profits, revenues, registrations, or other business objectives. So, it’s important to dig deeper, to challenge assumptions rather than take them at face value. For example, a better Web metric might be the number of hits that come from referral sites (versus search engines) or time spent on the Web site or time spent on specific pages.
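Digging deeper than raw page hits can be shown with a toy dataset in which total hits are flat from day to day while a candidate metric, the referral share, tracks registrations. All numbers below are invented:

```python
# Hypothetical daily logs: page hits by origin plus registrations that day.
days = [
    {"referral_hits": 120, "search_hits": 880, "registrations": 30},
    {"referral_hits": 300, "search_hits": 700, "registrations": 65},
    {"referral_hits": 80, "search_hits": 920, "registrations": 22},
]

def referral_share(day):
    """Candidate metric: the fraction of hits arriving via referral sites."""
    total_hits = day["referral_hits"] + day["search_hits"]
    return day["referral_hits"] / total_hits

# Total hits are 1,000 every day, so they predict nothing here,
# while the referral share rises and falls with registrations.
for day in days:
    print(referral_share(day), day["registrations"])
```

The point is not this particular metric but the habit: test each metric against the business outcome before trusting it.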

TDWI Example. Here’s another example closer to home. TDWI always mails conference brochures 12 weeks before an event. Why? No one really knows; that’s how it’s always been done. Ideally, we should conduct periodic experiments. Before one event, we should send a small set of brochures 11 weeks beforehand and another small set 13 weeks prior. And while we’re at it, we should test the impact of direct mail versus electronic delivery on response rates.
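The experiment described is a standard two-cohort comparison of response rates, which can be sketched in a few lines (the mailing counts and responses are invented):

```python
from math import sqrt

def response_rate(responses, mailed):
    """Response rate for one cohort of the brochure-timing experiment."""
    return responses / mailed

# Invented results: small test cohorts mailed 11 vs. 13 weeks before an event.
rate_11wk = response_rate(46, 1000)
rate_13wk = response_rate(31, 1000)

# Two-proportion z-test to judge whether the difference is likely real.
p_pool = (46 + 31) / (1000 + 1000)
z = (rate_11wk - rate_13wk) / sqrt(p_pool * (1 - p_pool) * (1 / 1000 + 1 / 1000))

better = "11 weeks" if rate_11wk > rate_13wk else "13 weeks"
print(better, round(z, 2))  # |z| above about 1.96 would be significant at the 5% level
```

With these made-up numbers the 11-week cohort looks better but the z-score falls just short of significance, which is itself a useful result: it says the test cohorts were too small, not that the question was wrong.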

Read more at The Key to Analytics: Ask the Right Questions
