Putting BI where it belongs – in the hands of those who can take action from insight

There have been a few interesting articles popping up recently that showcase “best practice” BI deployments.

The first is an article in Computing describing how Carphone Warehouse have delivered store performance dashboards nationwide by providing Android tablets running MicroStrategy.

This demonstrates the best possible use of BI.  The organisation has not only seemingly achieved the gold standard of a “single version of the truth” by having a single source of information for store sales performance, they have also put the BI tools used to generate those figures in the hands of each store manager.

This shortcuts any delays caused by producing reports centrally and then distributing them, and also appears to be geared up to allow every staff member to view their own performance (and compare against their colleagues).  The approach allows staff to trial new strategies and sales techniques and quickly see the results alongside their peers and other stores.  Carphone Warehouse are quoted as describing their vision to expand the information available so it can assist in targeting campaigns and also improve efficiency across the board – and even more radical is the talk of “gamification” by awarding badges and achievements.

What isn’t really discussed in the article is the level of user adoption and how well received the new figures have been – but assuming this hurdle has been overcome then this is a very laudable implementation.

Adidas have also recently presented on how they are using BI, together with a data warehouse consolidation project, to provide a consolidated view of their customers to their partners.  The focus here is B2B but the same intentions are clear – present unambiguous figures directly to those who have an interest in them, and to those who are in a position to modify their behaviour and strategy based on those figures.

BI Strategy – Why does it matter?

Over the coming weeks I will be posting multiple times on how to develop a sound corporate strategy for BI.  But before that, it is important to discuss why a BI strategy is important and the pitfalls it helps to avoid.

First and foremost, if there isn’t a strategy in place for BI in the organisation, how do you ever know when things are successful, and when you’ve finished?  In other words, no strategy equals no target landscape.

Following on from this, if you have no target it is not possible to plot milestones towards that target.  It is therefore very difficult to show progress in any real terms, and it will also be much harder to calculate the value derived from the initiative.

Adopting a corporate-wide BI strategy implies that a holistic approach is being taken to any BI initiative.  We all know that any BI solution will not be implemented “big bang”, therefore it is imperative to have a common goal in sight as each piece of the overall BI solution is dropped into place.  This avoids the very real risk of the BI initiative creating yet more silos of data – often the very situation the initiative was meant to move away from.

A key part of establishing the strategy will involve setting up a culture where information is valued, and the BI initiative is seen as the progression towards a mature information enterprise.

The ethos here is that success in BI isn’t just about succeeding technologically…success is far more dependent on people and culture.  This theme will crop up again and again, across all areas of Information Architecture, from Data Quality to Data Integration, Data Visualisation, and so on.

The aim is to provide a BI framework and architecture that is capable of telling the organisation the following:

  • How are we performing (now, or at any point in the past)?
  • Why was our performance the way it was (now, or at any point in the past)?
  • What should we be doing to perform better?

And crucially, not only should the BI solution be able to answer all of the above, it should be seen as the default “go to” place to get these questions answered, and the answers trusted.


Data Visualisation and Story Telling – the future of analytics

The path to a mature, information-orientated organisation is a long one, with many milestones along the way.

Having achieved a situation where all your analytics are produced from a consolidated set of reporting tools, and your metrics are produced consistently from a standardised data warehouse, you may be tempted to think your work is done.

However, whilst this is a significant step towards maturity, there is still more to be done.

A full review of all of the analytics, reports, etc. produced needs to be undertaken.  All reports should be catalogued, and then analysed for overlap and also any potential conflicts – these can still arise due to differing filters (or, even worse, calculations applied on top of your pre-derived DW metrics).  This catalogue (a sketch of one possible entry structure follows the list below) should include details on:

  • The metrics contained within the report
  • The dimensions included, and any filtering applied
  • The report owner
  • The report users/ recipients
  • A description of the purpose of the report, and the business processes it supports
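
To make that concrete, a catalogue entry could be captured in a simple structure along these lines – a minimal sketch in Python, where the field names are illustrative assumptions rather than a prescribed standard:

```python
from dataclasses import dataclass, field

@dataclass
class ReportCatalogueEntry:
    """One illustrative entry in the report catalogue."""
    name: str
    metrics: set[str]       # metrics contained within the report
    dimensions: set[str]    # dimensions included
    filters: dict[str, str] = field(default_factory=dict)  # filtering applied, e.g. {"region": "UK"}
    owner: str = ""         # the report owner
    recipients: list[str] = field(default_factory=list)    # the report users/ recipients
    purpose: str = ""       # purpose and business processes supported
```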

Once you arrive at a full catalogue of reports, there are two main tasks to achieve:

1) Identify and remove all overlap and repeated reporting (a rough sketch of a first-pass overlap check follows below).  Also, if possible, speak to the report owners and recipients and try to gauge each report’s usefulness…i.e. is it something they await with bated breath each month, or is it “filed” without even being read?

2) Identify the key aspects within the report that are of interest.  These are typically areas of under- or over-performance, but you need the business to guide you here.
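
As a feel for how task (1) might be bootstrapped, a first pass could simply compare metric sets across catalogue entries and flag heavily overlapping pairs for human review – a rough sketch, assuming the ReportCatalogueEntry structure above:

```python
from itertools import combinations

def find_overlaps(catalogue: list, threshold: float = 0.8) -> list:
    """Flag report pairs whose metric sets overlap heavily.

    Pairs above the threshold are candidates for consolidation; pairs that
    share metrics but apply different filters are candidates for conflict
    review.  Human judgement is still needed on every pair flagged.
    """
    flagged = []
    for a, b in combinations(catalogue, 2):
        shared = a.metrics & b.metrics
        if not shared:
            continue
        overlap = len(shared) / min(len(a.metrics), len(b.metrics))
        if overlap >= threshold:
            flagged.append((a.name, b.name, sorted(shared), a.filters != b.filters))
    return flagged
```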

For the reports identified in (2) it is often the case that data story telling can greatly increase their usefulness, which in turn will help with any issues regarding “unread” reports found in (1).

The key to successful story telling is to keep things as brief as possible, and crucially ensure that each story can stand alone and still make full sense – in other words the context of the story/ analysis is fully contained within the report.
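
As a toy illustration of that principle, a narrative line can be generated right alongside the figures it describes, so the context always travels with the story (the metric name and numbers below are purely invented):

```python
def narrate_metric(name: str, actual: float, target: float, period: str) -> str:
    """Produce a one-line, self-contained narrative for a single metric."""
    delta_pct = (actual - target) / target * 100
    direction = "ahead of" if delta_pct >= 0 else "behind"
    return (f"{name} for {period} came in at {actual:,.0f}, "
            f"{abs(delta_pct):.1f}% {direction} the target of {target:,.0f}.")

# Example (invented figures):
# narrate_metric("Store sales", 118_000, 110_000, "June")
# -> "Store sales for June came in at 118,000, 7.3% ahead of the target of 110,000."
```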

Data visualisations should be used to complement the narrative being produced…the old adage of “a picture paints a thousand words” is still true, however the narrative is equally important.  Why?  Because whilst the data visualisation presents the hard numeric facts in an easily digestible format, the narrative engages the reader at an emotional level, and this is key to understanding the impact of what is being shown, and also to inspiring the reader to turn insight into action and deal with the situation.

There is an art to effective data story telling though; in fact it is often referred to as “data journalism”.  There is no perfect recipe and unfortunately it is a case of trial and error, depending both on the culture of the organisation as a whole and on the personalities of the report audience.  You will find that a constant process of evolution and refinement is needed to maximise the effectiveness of these reports, and also be ready for a few false starts as you approach new audiences and/ or subjects.

Enjoy the ride though; for me this is one of the most exciting areas of business information architecture, as it is where all our hard work comes to fruition and is fully appreciated within the organisation.

Best Practices for Data Integration – Part 4 – Data Quality Baked In

Implementing a data quality initiative within a data warehouse is almost always a three-stage process.  First, a profiling and discovery phase is undertaken both to understand the current data quality state and to identify and agree the data quality rules.

With the rules defined, the second stage is to run a bulk update to align the data already in the data warehouse to the rules, and also to deal with records that fail the DQ rules in such a way that they cannot simply be updated.

So far so good…however getting through stages 1 and 2 is a tough task that could easily take upwards of a year to accomplish.  Invariably there will be cause to go back to source systems to understand why poor quality data is being supplied, bugs in the integration process may well be unearthed which have also led to data quality problems, and so on.  There are entire books devoted to this subject and I’m not going to attempt to cover it in detail here, but posts on the topic will be appearing.  If you can’t wait that long I suggest you pick up a copy of Larry English’s brilliant Improving Data Warehouse and Business Information Quality – this is deemed to be the seminal text on the subject.
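
To illustrate the shape of the second stage only, the bulk alignment often boils down to something like the sketch below, where the rules, corrections, and quarantine handling are all assumptions standing in for whatever the discovery phase actually agreed:

```python
def apply_dq_rules(rows, rules, corrections):
    """Split rows into corrected and quarantined sets (illustrative sketch).

    rules:       {rule_name: predicate(row) -> True if the row passes}
    corrections: {rule_name: fix(row) -> corrected row, or None if uncorrectable}
    """
    corrected, quarantined = [], []
    for row in rows:
        passed = True
        for rule_name, passes in rules.items():
            if passes(row):
                continue
            fix = corrections.get(rule_name)
            fixed = fix(row) if fix else None
            if fixed is None:
                passed = False      # cannot be updated: route to quarantine
                break
            row = fixed
        (corrected if passed else quarantined).append(row)
    return corrected, quarantined

# Example rule set (illustrative only):
# rules = {"postcode_present": lambda r: bool(r.get("postcode"))}
# corrections = {}  # no automatic fix agreed, so failures are quarantined
```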

Continue reading

Best Practices for Data Integration – Part 3 – Backwards Compatibility

Invariably, during the lifetime of the data warehouse (or any other data store, for that matter) that is on the receiving end of the data integration flows, challenges to the correctness and validity of the data will be raised.  These can come from all angles, from a customer calling in to complain because they received an inappropriate communication, through to internal checks and processes that can reveal errors.

Continue reading

Best Practices for Data Integration – Part 2 – Integration 101

This second post in the Data Integration series concentrates on design principles that should be applied to all data integration flows that are designed and built.

Following these guidelines will provide a robust framework within which all data movement and transformation takes place, and will ensure an easier ride after the transition into live service is made.

Continue reading

Best Practices for Data Integration – Part 1 – Attributes

This first post in the Data Integration series concentrates on a best practice approach to defining a standard set of attributes to be stored on every data warehouse table.
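
By way of a flavour only – the actual attribute list is covered in the post itself – the general idea is a set of standard audit columns stamped onto every record at load time.  A minimal sketch, where the column names are my illustrative assumptions:

```python
from datetime import datetime, timezone

def stamp_standard_attributes(record: dict, source_system: str, batch_id: int) -> dict:
    """Return a copy of the record with illustrative standard audit attributes added."""
    stamped = dict(record)
    stamped["load_timestamp"] = datetime.now(timezone.utc)  # when the row was loaded
    stamped["source_system"] = source_system                # where the row came from
    stamped["batch_id"] = batch_id                          # which load run produced it
    return stamped
```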

Continue reading

Best Practices for Data Integration within an Enterprise Data Warehouse – New Series of Posts

Data Integration has been at the centre of many of the Enterprise Data Warehouse (EDW) projects I’ve been involved in over the years, and along the way I’ve developed a set of best practices that, if followed, will help ensure a successful and easily maintainable solution.  I’ve decided to create a series of posts as the original post was becoming rather lengthy!

5 things that won’t happen with Big Data in 2014

This is a really brief post: we’re now halfway through the year, and the predictions posted in February 2014 by Yugal Joshi appear to be holding firm so far.

In summary, they are:

  1. Hadoop will not replace ETL
  2. Analytics will still be undemocratic
  3. Big Data will still be a project
  4. Real talent will be tough to find
  5. Integration will be a challenge

Read the full post here including the rationale behind each of these statements: http://www.everestgrp.com/2014-02-big-data-analytics-in-2014-5-things-that-wont-happen-gaining-altitude-in-the-cloud-13135.html

Cutting through the Big Data Hype

Having recently attended the Big Data Analytics conference in London, it was very interesting to observe the mixture of presentation content and styles.  One of the over-arching themes was around the “cycle” of hype followed by disillusionment that “the next big thing” goes through.  2013 certainly seems to be regarded as the peak of all the hype, and slowly but surely we’re now about to head downhill towards despair and frustration.

Although Big Data brings with it technology and analysis techniques that aren’t naturally present in mainstream BI projects, the unavoidable truth is that Big Data is still based on…well…data.

Continue reading