
Saturday, February 6, 2016

2.4 Back to the roots: source of all challenges

Although the symptoms described in the previous chapter are diverse, the root cause for most of them is the way companies handle their data.

The existence of many isolated databases ("data silos") impedes the free flow of information in a company. There is no way a business will ever become a successful digital enterprise if it holds on to its data silos. The main characteristics of such an enterprise IT landscape are tons of duplicated data pushed around over dozens of interfaces between systems, typically in long batch runs at night. The more silos a company has, the more difficult it becomes to maintain consistent information across all duplicates. In other words: when information is read from one of the data silos, you can never be sure that it is correct. Instead, chances are high that it is outdated. Independent analyst firms estimate that up to 80% of the data processed in an enterprise is duplicated. What a nightmare.

Companies have tried to solve the dilemma by bringing in dedicated applications tasked with keeping all data synchronized. These are called "Enterprise Application Integration" (EAI) systems, and a lot of money was spent on them with moderate success. Now companies urgently need more precise information on their business, their customers and partners, based on exploding data volumes. The market calls it "big data analytics", and what is it exactly? Bringing in more expensive technology, and with it even more complexity. What would we expect from such a solution if the old IT saying holds true: "garbage in, garbage out"?!

To get out of this deadly spiral of ever more technology and higher costs, companies need to address the root cause of the whole dilemma: the existence of too many isolated data silos.

In other words: if companies change the way they handle their data many of the challenges described in the previous chapter will disappear!

2.3 When technology impedes business: impact of fragmented IT landscapes

This chapter alone could fill a whole book. I will focus, however, on selected examples of what I have personally seen across various industries in recent years.

Not industry-specific
  • Long process durations leading to dissatisfied customers
  • Incorrect data leading to interruption of transactions
  • High IT costs, with a major share of the budget spent on operations (90% and more)
  • Inability to innovate, especially in technology-driven processes and models (internet, mobile)
Automotive
  • Incomplete data on buyers/owners impedes personalized offerings
  • Insufficient transparency across the complete value chain frequently leads to unpredictable interruptions in production
  • Poor service quality in aftersales
  • Struggle to cope with exploding data volumes produced by modern, connected cars
Banking
  • Insufficient risk management
  • Struggle with compliance reporting, especially proof of data origin
  • Inability to match velocity of challengers (FinTechs)
Entertainment/Media
  • For a long time the threat from social media and mobile processes was simply ignored
  • Unlike other industries, media has to deal with a high share of unstructured data, which represents a challenge of its own
Insurance
  • In some respects similar to Banking
  • Struggle to provide highly individualized offerings
  • Challenges in efficient detection and prevention of fraud
  • Challenges in correct prediction of claims
  • Inability to innovate in new digital products
Public Services
  • I can only speak for Germany:
    very old systems (including mainframes) and an extremely high degree of fragmentation lead to long process durations, much manual effort, and an inability to cope with high volumes (e.g. refugees)
Retail
  • Huge challenges in managing all channels simultaneously with one face to the customer
  • Difficulties in matching new consumer behaviors (micro transactions, mobile purchases, etc.)
TelCo
  • Struggle to provide and sell value-added services on top of core communication services

This is not a complete list. And let me emphasize that most of the companies I speak with are well aware of their challenges and are working on them. That is not my point. From my perspective, they often do not ask the right questions, and that is the main reason why they will not get suitable answers that would solve their problems in a sustainable way and make them fit for the future.

2.2 Welcome to the jungle: situation today

Let’s have a look at today’s system landscapes. Since the end of the mainframe era in the late 1980s, enterprise applications have been built in a client-server architecture. In its simplest configuration, two powerful computers share the job: one is responsible for the process logic and the user interaction, the other for storing the data in a secure and consistent way. We find a large variety of technologies, from hardware and operating systems to software. To simplify the picture, let’s for now assume that the application is written in Java and runs on a relational database system.


In the beginning of this IT era, processor speed, availability of main memory, and hard disc size for the database were the bottlenecks that limited the overall performance of the system. This picture changed in the 1990s, when disc space became cheaper and processors faster. We were able to store much more data, but now I/O speed became the bottleneck, as everything was still stored on slow magnetic hard discs. So users started to work around this problem by separating applications. At first, transactional systems were separated from analytical systems, and the latter received their own name: “data warehouse”. Then we separated within the transactional systems, then within the reporting systems, and so on. We kept adding more hardware, we did intelligent data modeling and indexing of data. In the end our system landscape looked like this:


We have a lot of separate systems running on isolated data silos. The icons with the blue gear wheels represent dedicated reporting systems. As many of the systems need identical data, we copy it around in nightly batch jobs (indicated by the arrows). This leads to huge redundant data volumes and inconsistent data. All of this is extremely expensive, but after all it is still just technology.
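The staleness problem behind those arrows can be sketched in a few lines of Python. This is a deliberately simplified illustration (the silo names, record, and values are hypothetical): one "silo" is the system of entry, the other a reporting copy refreshed only by a nightly batch job, so any read between syncs can return outdated data.

```python
# Two hypothetical "silos" holding copies of the same customer record.
crm_silo = {"customer_42": {"address": "Old Street 1"}}          # system of entry
reporting_silo = {k: dict(v) for k, v in crm_silo.items()}       # last night's copy

def nightly_batch_sync():
    """Copy every record from the CRM silo into the reporting silo."""
    for key, record in crm_silo.items():
        reporting_silo[key] = dict(record)

# Daytime: the customer moves; only the CRM silo is updated.
crm_silo["customer_42"]["address"] = "New Avenue 9"

# A report run before the next batch sync still sees the old address.
stale_read = reporting_silo["customer_42"]["address"]            # "Old Street 1"

nightly_batch_sync()
fresh_read = reporting_silo["customer_42"]["address"]            # "New Avenue 9"
```

Until the batch job runs, every consumer of the reporting silo works with wrong data; multiply this by dozens of silos and interfaces and the consistency problem described above emerges.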

The negative business impact of such landscapes is much more serious.

2.1 Data is the new oil: imperatives of digital business

Today every company needs to do digital business. There is a lot of evidence of how digitalization has disrupted traditional business models and how it has blown 70% of the "Fortune 1000" companies off the list in just one decade. The role models for the upcoming age are companies like Apple, Google, Amazon, Facebook, AirBnB, Uber, etc.

When a company decides to take on the challenges as well as the huge opportunities of the digital era, it also accepts playing by the rules of this business, which in essence are:
  1. Everything which can be digitalized will be digitalized
  2. Mobile first
  3. Survival of the fastest
What we see is the dematerialization of our world at breathtaking speed, fueled by intelligent mobile devices (not only smartphones but also sensors, robots, etc.) which allow us to communicate and to do business at any time, everywhere, with anyone.

While oil was the essential resource of the industrial age, data will be the oil of the digital age. For digital enterprises, everything depends on what data they have and what they are capable of doing with it. Or, if you want to put it into a simple formula:

Success in digital business = (data quality) x (ability to process huge amounts of information in real-time)

Data quality is about the consistency, accuracy and availability of data. Just as a diesel engine will not run on gasoline, digital processes will not run on wrong and outdated data.
The ability to process big data volumes is not only about ex-post analysis for future decisions. It is about finding the best solution out of millions of alternatives in a matter of milliseconds – and then executing on it. Whenever I talk about processing of big data volumes in this blog, I have transactional processing with embedded analytical capabilities in mind.