Supply Chain & Big Data ÷ Analytics = Innovation

Google the term “advanced analytics” and you get back nearly 23 million results in less than a second.

Clearly, the use of advanced analytics is one of the hottest topics in the business press these days and is certainly top of mind among supply chain managers.

Yet, not everyone is in agreement as to just what the term means or how to deploy advanced analytics to maximum advantage.

At HP, the Strategic Planning and Modeling team has been utilizing advanced operational analytics for some 30 years to solve business problems requiring innovative approaches.

Over that time, the team has developed significant supply chain innovations such as postponement and award-winning approaches to product design and product portfolio management.

Three questions come up regularly in our conversations with colleagues, business partners and customers at HP – all of which this article will seek to address.

  1. What is the difference between advanced and commodity analytics?
  2. How do I drive innovation with advanced analytics?
  3. How do I set up an advanced analytics team and get started using it in my supply chain?

Advanced analytics vs. commodity analytics

So, what exactly is the difference between advanced analytics and commodity analytics? According to Bill Franks, author of “Taming The Big Data Tidal Wave,” commodity analytics aims “to improve over where you’d end up without any model at all”; as he puts it, “a commodity modeling process stops when something good enough is found.”

Another definition of commodity analytics is “that which can be done with commonly available tools without any specialized knowledge of data analytics.”

The vast majority of what is being done in Excel spreadsheets throughout the analytics realm is commodity analytics.
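The distinction can be illustrated with a small sketch. The data and the promotion scenario below are hypothetical; the point is that a spreadsheet-style summary stops at the first “good enough” answer, where an advanced approach would go on to fit and validate a predictive model:

```python
# Hypothetical weekly sales data: (week, promo_active, units_sold)
weekly = [
    (1, False, 100), (2, True, 140), (3, False, 95),
    (4, True, 150), (5, False, 105), (6, True, 145),
]

# Commodity analytics: a spreadsheet-style summary that stops at "good enough" --
# average units sold with and without a promotion.
promo_avg = sum(u for _, p, u in weekly if p) / sum(1 for _, p, _ in weekly if p)
base_avg = sum(u for _, p, u in weekly if not p) / sum(1 for _, p, _ in weekly if not p)
promo_lift = promo_avg - base_avg

# Advanced analytics would go further: fit a predictive model, validate it,
# quantify uncertainty, and keep iterating past the first workable answer.
print(f"avg with promo: {promo_avg:.1f}, without: {base_avg:.1f}, lift: {promo_lift:.1f}")
```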

Read more at Supply Chain & Big Data ÷ Analytics = Innovation

What do you think about this topic? Write down your opinions in the comment box below, and subscribe to get updates in your inbox.


Automating Big-Data Analysis and Replacing Human Intuition with Algorithms

A new computer system from MIT has used its algorithms to outperform human intuition, which is amazing and perhaps a little frightening: the Data Science Machine beat out more than 600 human teams at finding predictive patterns in data.

Big-data analysis consists of searching for buried patterns that have some kind of predictive power.

But choosing which “features” of the data to analyze usually requires some human intuition.

In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.
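The feature-engineering step described above can be sketched in a few lines. The promotions table and profit figures here are hypothetical, made up purely to show the derived features the paragraph mentions (spans between dates, averages across those spans):

```python
from datetime import date

# Hypothetical promotions table: (start_date, end_date, total_profit)
promotions = [
    (date(2015, 1, 5), date(2015, 1, 19), 28_000),
    (date(2015, 3, 2), date(2015, 3, 9), 10_500),
    (date(2015, 6, 1), date(2015, 6, 29), 42_000),
]

# Engineered features: the dates themselves may matter less than the span
# between them, and total profit less than the average per day of the span.
features = []
for start, end, profit in promotions:
    span_days = (end - start).days
    features.append({"span_days": span_days,
                     "avg_daily_profit": profit / span_days})

for f in features:
    print(f)
```

Choosing which derived quantities to compute is exactly the human-intuition step the Data Science Machine attempts to automate.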

MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too.

To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets.

Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.

In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions.

In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.

Read more at Automating Big-Data Analysis and Replacing Human Intuition with Algorithms

Share your opinions with us in the comment box. Subscribe to get updates in your inbox.


Bringing Elegance and Simplicity to Problem Solving and Enterprise Technology Adoption

There’s an old expression, “Don’t work harder; work smarter.” Old as it may be, this is one of the adages of New Purchasing: The answer to complexity does not have to be more complexity.

Is this not the reason for enterprise technology? Organizations adopt solutions that enable their employees to work more quickly, more efficiently and with better organization. Really, this is the same reason that many people adopt technology in their personal lives, as well.

If you’re looking to build a website, you no longer need to code everything from scratch. Instead, services from sources like Google and Homestead can do that for you. With Google Domains, you can easily find a domain and build a website for your business, while their innovation services provide developer tools, APIs and other resources for quickly adding novel features. Similarly, Homestead offers you the means to “Get a site. Get found. Get customers.”

Each of these solutions providers offers you a simple, elegant solution for what seems like a pretty daunting task. Wouldn’t you expect the same technology treatment for improving your enterprise procurement?

Just as building a website for a personal blog or corporate website has never been easier, the same is true for creating an online shopping site. Shopify’s solution can help you to create an online storefront for one product or millions – without needing any specific design skills. With a platform like Mobify, you can even extend that digital marketplace with mobile touch points.

Read more at Bringing Elegance and Simplicity to Problem Solving and Enterprise Technology Adoption

What do you think about this topic? Share your opinions with us in the comment box.


Code red: Big data risk management requires a safety net

When I advise leaders on a strategy that includes data science, I ask them to consider the probability that their great idea won’t bear fruit. It’s a tough space for visionary leaders to enter — their optimism is what makes them great visionaries. That said, most data science ventures don’t turn out well, and most leaders aren’t in touch with the reality that the odds are against them. Having a fallback plan makes good sense, and having a fallback plan for your fallback plan makes great sense.

For instance, when I rolled out an upgraded loyalty platform for a large financial transaction processing company in 2010, we built four plans that successively addressed the failed execution of its predecessor plan. Fortunately, we never had to pull the trigger on even the first fallback plan; however, we were fully prepared for any and all scenarios. It’s a prudent approach that I recommend for you as well, because data science is a risky endeavor.

The colors of cautious management

The best leaders have a backup plan for their backup plan. In fact, when running a strategy that incorporates big data analytics, I suggest you have a series of colored plans: green, yellow, red, and blood red (or black).

  • Green is your plan of record.
  • Yellow is a contingent plan.
  • Red doesn’t meet your minimum expectations, but it doesn’t set you strategically backward either.
  • Blood red is your worst case scenario.
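The ladder of colored plans can be represented as a simple data structure. This is a minimal sketch of the escalation idea described above; the notion that each failed execution advances you one rung is an assumption drawn from the loyalty-platform anecdote, not a prescription:

```python
# The colored-plan ladder, ordered from plan of record to worst case.
PLANS = [
    ("green", "plan of record"),
    ("yellow", "contingent plan"),
    ("red", "below minimum expectations, but not a strategic setback"),
    ("blood red", "worst-case scenario"),
]

def active_plan(failed_escalations: int) -> str:
    """Each failed execution escalates to the next colored plan;
    the ladder bottoms out at the worst-case plan."""
    idx = min(failed_escalations, len(PLANS) - 1)
    return PLANS[idx][0]

print(active_plan(0))  # the plan of record: green
print(active_plan(2))
```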

What do you think about this topic? Share your opinions below or contact us for discussion. If you enjoyed reading this blog, consider subscribing to get the latest updates in your inbox.


Leverage cloud financial intelligence systems with AWS

Cloud financial intelligence systems, typically offered by cloud financial management providers, deliver insights into cloud usage. Providers such as Cloud Cruiser and others can tell you how effective your cloud platforms are at delivering services. This includes how each service tracks back to the cloud resources that support it, as well as who is consuming the services and by how much.

However, the true value of these systems is not the simple operational cost data that they are able to gather and report on — it’s the ability to leverage deeper analytics to determine usage patterns, and how those patterns will behave over time. This means you have the ability to better understand how your AWS instances (and other cloud services) were put to use in the past, and more importantly, how they will be leveraged in the future, including the ability to properly estimate cloud resource utilization in the context of complex and widely distributed architectures.
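As a toy illustration of projecting usage patterns forward, the sketch below fits a simple linear trend to hypothetical monthly instance-hours. Real data would come from a cloud financial management tool’s export, and real pattern analytics would be far richer than a straight-line fit:

```python
from statistics import mean

# Hypothetical monthly AWS instance-hours (most recent month last).
monthly_hours = [1200, 1260, 1330, 1405, 1480, 1550]

# Least-squares linear trend over the month index, used to project
# next month's utilization -- a stand-in for deeper pattern analytics.
n = len(monthly_hours)
xs = range(n)
x_bar, y_bar = mean(xs), mean(monthly_hours)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, monthly_hours))
         / sum((x - x_bar) ** 2 for x in xs))
forecast = y_bar + slope * (n - x_bar)  # extrapolate one month ahead

print(f"projected instance-hours next month: {forecast:.0f}")
```

An estimate like this is what lets you right-size reserved capacity before the bill arrives, rather than after.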

It’s all about the ability to make the most out of data from multiple components of the architecture, not just AWS. Most enterprises that deploy cloud-based systems do so using both public and private clouds within a multi-cloud architecture, which may also be mixed with traditional (or legacy) systems. This makes the financial tracking much more complex, but also much more valuable.

For example, a production management system may leverage core storage services from AWS, session management services from an OpenStack private cloud and core database services from a traditional Oracle database running in the data center. Thus, the cloud financial management system needs to gather information from many different system components, including the private and public clouds, as well as the local database. System owners can use this information to determine the amount of resources consumed, as well as patterns of consumption over time, giving them a complete picture of how the holistic system is functioning, including cloud and non-cloud components.

If you have any opinions or suggestions, leave your comments below. Please do not hesitate to contact us via e-mail and to subscribe to this blog.


The Difference Between Infographics, Instructographics and Data Visualisations

Infographic is a well-known term in the marketing world, but what are data stories and instructographics? There is some debate about the differences between them all, especially when it comes to data stories, also known as data visualisations, and infographics. Whilst they hold some similarities, there are some key factors which make them quite distinct from each other.

What are infographics?

Infographics are created to tell a story about something. They can be about almost any topic, from how much plastic the world uses to what makes a successful mobile app; but they are always aimed at a specific audience. Essentially, if you have some interesting facts or data to share, infographics are the most accessible way to do it. They’re clear, look attractive and are therefore very shareable. Although your audience enjoys evergreen content and blog posts, remember that they often don’t have time to read the whole thing. An infographic provides a neat summary of the information they need to know, so it can be a welcome break from the walls of text they see all day, every day.

How do instructographics differ?

Instructographics usually cover a DIY task, but again, they can cover almost any topic. Just like an infographic, they have the potential to go viral and are made to look as attractive as possible. Although a well-written ‘how to’ guide can cover much more information than an instructographic, it often isn’t as visually appealing or easy to follow.

…and data visualisations?

Data visualisations are much like an unrefined infographic. They present quantifiable information and so are more likely to focus on numbers. In some cases, an entire data set is shown without editing and they rarely take a lot of handiwork to produce. They are much more likely to be generated by computer programs using algorithms, as their overall look isn’t too important.
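The “generated by computer programs” point can be made concrete with a minimal sketch: an unedited data set rendered programmatically, with no design work at all. The figures below are hypothetical placeholders, not real plastic-use statistics:

```python
# A raw, algorithmically generated visualisation: scale each value to a
# row of '#' characters. Appearance is secondary; the data is shown as-is.
plastic_use = {"US": 38, "EU": 29, "China": 60, "India": 14}  # hypothetical units

def bar_chart(data: dict, width: int = 30) -> str:
    """Render a dict of label -> value as a text bar chart."""
    peak = max(data.values())
    rows = [f"{label:>5} | {'#' * round(value / peak * width)}"
            for label, value in data.items()]
    return "\n".join(rows)

print(bar_chart(plastic_use))
```

An infographic would start from output like this and add the editing, narrative and design that make it shareable.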

Please leave your comments below if you have any opinions. You may also send us a message.

A Harvest of Company Details, All in One Basket

Trolling government records for juicy details about companies and their executives can be a ponderous task. I often find myself querying the websites of multiple federal agencies, each using its own particular terminology and data forms, just for a glimpse of one company’s business.

But a few new services aim to reduce that friction not just for reporters, but also for investors and companies that might use the information in making business decisions. One site, rankandfiled.com, is designed to make company filings with the Securities and Exchange Commission more intelligible. It also offers visitors an instant snapshot of industry relationships, in a multicolored “influence” graph that charts the various companies in which a business’s officers and directors own shares. According to the site, pooh-bahs at Google, for example, have held shares in Apple, Netflix, LinkedIn, Zynga, Cisco, Amazon and Pixar.
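An “influence” graph of this kind can be represented with a simple inverted index from companies to the officers who hold their shares. The holdings below are hypothetical placeholders (the officer names are invented), sketched only to show the shape of the data behind such a chart:

```python
from collections import defaultdict

# Hypothetical (officer, held_company) pairs, in the spirit of
# rankandfiled.com's influence graph. Names are invented for illustration.
holdings = [
    ("google_officer_a", "Apple"), ("google_officer_a", "Netflix"),
    ("google_officer_b", "Apple"), ("google_officer_b", "Amazon"),
]

# Invert to an "influence" view: for each company, which officers link to it.
influence = defaultdict(set)
for officer, company in holdings:
    influence[company].add(officer)

# Companies connected through more than one officer carry the strongest edges.
shared = sorted(c for c, officers in influence.items() if len(officers) > 1)
print(shared)
```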

Another site, Enigma.io, has obtained, standardized and collated thousands of data sets — including information on companies’ lobbying activities and their contributions to state election campaigns — made public by federal and state agencies. Starting this weekend, the public will be able to use it, at no charge, to seek information about a single company across dozens of government sources at once.

Feel free to leave a comment below or send us a message.


Why Google Flu is a failure: the hubris of big data

People with the flu (the influenza virus, that is) will probably go online to find out how to treat it, or to search for other information about the flu. So Google decided to track such behavior, hoping it might be able to predict flu outbreaks even faster than traditional health authorities such as the Centers for Disease Control (CDC).

Instead, as the authors of a new article in Science explain, we got “big data hubris.” David Lazer and colleagues explain that:
“Big data hubris” is the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis.

The problem is that most people don’t know what “the flu” is, and relying on Google searches by people who may be utterly ignorant about the flu does not produce useful information. Or to put it another way, a huge collection of misinformation cannot produce a small gem of true information. Like it or not, a big pile of dreck can only produce more dreck. GIGO, as they say.

Google’s scientists first announced Google Flu in a Nature article in 2009. With what now seems a textbook definition of hubris, they wrote:
“…we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day.”

If you have any opinion, feel free to send us a message or leave your comment below.

Dashboards Help CIOs Manage Business Services

CIO.com recently released an article introducing the following six business dashboards, which put data from a variety of enterprise applications and services at a CIO’s fingertips so he or she can better manage employees, website activity, development projects and company resources.

Kapta Dashboard

Kapta shows a summary of employee goals, mapped to the overall company goals, in an easily identifiable red, yellow and green heat map. The map shows a breakdown of percentage completion rates by groups within IT.

Continue reading
