Best Practices for Building a Data Warehouse

July 6, 2020

Data is collected at regular intervals from source systems such as ERP applications that store company information. When this data is moved into a dedicated data warehouse, its quality is improved by cleansing, reformatting, and enriching it with data from other sources. The data warehouse then becomes the main source of information for reporting and analysis, supporting ad-hoc queries and dashboards.

1. Identify Why You Need a Data Warehouse

Many data warehouse initiatives fail because the organization hasn’t identified a clear business case for them. Organizations that begin by identifying a business problem for their data, and stay focused on finding a solution, are more likely to succeed. Here are some of the key reasons why you need a data warehouse:

Standardize your data – Data warehouses store data in a standard format, making it easier for business leaders to analyze it and gain actionable insights. Standardizing the data collected from different sources minimizes the risk of errors and improves overall accuracy.

Improve decision making – Many businesses make decisions without analyzing and getting the full picture from their data, whereas successful businesses develop data-driven plans and strategies. Data warehousing improves the speed and efficiency of data access, enabling business leaders to formulate data-driven strategies and have an edge over the competition.

Reduce costs – Data warehouses allow decision-makers to dive deeper into historical data and evaluate the success of past initiatives. They can see how they need to change their approach to reduce costs, increase operational efficiencies, and drive growth, thereby improving their bottom line.

Data Warehouse vs. Data Lake

Data Warehouse – A data warehouse collects and stores data from disparate sources, including an organization’s operational databases as well as external systems. Data warehouses generally store structured, transactional data and support predefined, repeatable analytics needs. A data warehouse is best suited to specific use cases where the requirements are clearly defined; it generally supports a fixed processing strategy and is suitable for complex queries with stringent performance requirements.

Data Lake – A data lake is a collection of typically unstructured data collected from a wide range of sources. Data lakes usually support exploratory analysis and data science activities, potentially across a wide range of analytics use cases. Data lakes support many different processing approaches including data discovery, machine learning, heavy batch computation, and more.

Here are some of the key differences:

Data structure – A data lake contains unstructured or raw data; a data warehouse contains structured, processed data that is ready for queries.

Purpose of data – In a data lake, the purpose of the stored data is not yet defined; in a data warehouse, the data is stored for an already-defined purpose.

Users – Data lakes are used mainly by data scientists; data warehouses are used mainly by business users.

Accessibility – Data lakes are highly accessible and quick to update; data warehouses are more complicated, and making changes can be expensive.

Maturity – Data lakes are an emerging technology; data warehouses have a strong maturity model.

2. Have an Agile Approach Instead of a Big Bang Approach

Depending on the complexity, it can take a few months to several years to build a modern data warehouse. During the implementation, the business cannot realize any value from its investment, and business requirements evolve over time, sometimes differing significantly from the initial set. A big bang approach to data warehousing carries a high risk of failure because businesses put the project on hold (sometimes before the warehouse is even completed) when they don’t see immediate results. A big bang approach is also hard to tailor to a specific industry, company, or vertical.

Following an agile approach instead enables the data warehouse to evolve with the business requirements and stay focused on current business problems. The agile model is an iterative process in which the data warehouse is developed in multiple sprints, with the business user involved throughout for continuous feedback. This delivers results quickly instead of after months or years of waiting, and agile data warehouse development typically has a lower TCO than the traditional big bang approach.

3. Analyze and Understand Your Data

A data warehouse is a central repository where information is collected from multiple data sources. To get the maximum value from it, the data it stores must be clean, accurate, and consistent. It is therefore important to identify all the data sources, understand their characteristics, and map the dependencies between them. In an ideal scenario, all of this information comes from an integrated, enterprise-wide data model. This approach reduces the time needed to build and maintain the data warehouse and improves the quality of the data it holds.
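For example, a lightweight profiling pass over each source extract can surface quality issues before the data ever reaches the warehouse. The sketch below is illustrative only, assuming the extract fits in a pandas DataFrame; the file, table, and column names are hypothetical.

```python
# Minimal data-profiling sketch (illustrative): inspect a source extract for
# completeness, duplicates, and cardinality before loading it into the warehouse.
import pandas as pd

def profile_source(df: pd.DataFrame, name: str) -> pd.DataFrame:
    """Return a per-column summary used to assess source data quality up front."""
    summary = pd.DataFrame({
        "dtype": df.dtypes.astype(str),                  # column types
        "null_pct": (df.isna().mean() * 100).round(1),   # completeness
        "distinct": df.nunique(),                        # cardinality
    })
    print(f"{name}: {len(df)} rows, {df.duplicated().sum()} duplicate rows")
    return summary

# Example with a hypothetical ERP extract:
orders = pd.read_csv("erp_orders_extract.csv")
print(profile_source(orders, "erp_orders"))
```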

4. Analyze How Frequently You Need to Load Data

Batch processing is an efficient way to process large volumes of data at once, after a number of transactions have been collected over a period of time. Data is collected, entered, and processed, and then the batch results are produced. It helps businesses reduce operational costs because it does not require specialized data entry personnel to keep it running. In contrast, real-time data processing involves a continuous flow of input, processing, and output. Batch processing is suitable for most organizations, but some need real-time processing for specific use cases: real-time processing and analytics lets an organization act immediately, giving stakeholders the right insight to take the right action at the right time.
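As a rough illustration of the batch pattern, the sketch below loads one accumulated extract file into a staging table in a single pass. The connection string, file name, and table names are assumptions, not a prescribed toolchain.

```python
# Batch-load sketch (illustrative): load a day's worth of transactions at once.
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical warehouse connection; replace with your own target.
engine = create_engine("postgresql://user:password@warehouse-host/dw")

def load_daily_batch(extract_path: str, target_table: str) -> int:
    """Load one accumulated extract file into a staging table in a single pass."""
    batch = pd.read_csv(extract_path, parse_dates=["transaction_ts"])
    batch.to_sql(target_table, engine, if_exists="append", index=False,
                 chunksize=10_000)  # chunked inserts keep memory use bounded
    return len(batch)

# Usually triggered by a scheduler (e.g. a nightly job) rather than on demand:
rows = load_daily_batch("sales_2020-07-06.csv", "stg_sales")
print(f"Loaded {rows} rows into stg_sales")
```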

5. Define a Change Data Capture (CDC) Policy for Real-Time Data

Defining a change data capture (CDC) policy enables you to capture any changes made in a source database and ensures that those changes are replicated in the data warehouse. The changes are tracked, captured, and stored in relational tables called change tables, which provide a historical view of how the data has changed over time. CDC is a highly efficient mechanism for reducing the impact on the source when loading new data into your data warehouse: it eliminates the need for bulk-load updates and inconvenient batch windows. It can also be used to populate real-time analytics dashboards and to optimize data migrations.
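The sketch below shows one simplified way such a policy can be applied: an incremental extract driven by an updated_at watermark that appends changed rows to a change table. Production deployments often use log-based CDC tools instead, and all table, column, and connection names here are hypothetical.

```python
# Simplified CDC-style incremental extract (illustrative only).
import pandas as pd
from sqlalchemy import create_engine, text

source = create_engine("postgresql://user:password@erp-host/erp")          # assumed source
warehouse = create_engine("postgresql://user:password@warehouse-host/dw")  # assumed target

def capture_changes(table: str, last_watermark: str) -> str:
    """Copy rows modified after last_watermark into a change table in the warehouse."""
    changed = pd.read_sql(
        text(f"SELECT * FROM {table} WHERE updated_at > :wm"),
        source,
        params={"wm": last_watermark},
    )
    if not changed.empty:
        changed.to_sql(f"{table}_changes", warehouse, if_exists="append", index=False)
        last_watermark = str(changed["updated_at"].max())
    return last_watermark  # persist this value for the next run

new_watermark = capture_changes("customers", "2020-07-05 00:00:00")
```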

6. Prefer ELT Tools Instead of ETL

Data warehouses typically use one of two data integration methods: extract, transform, load (ETL) or extract, load, transform (ELT). Both collect data from multiple sources and store it in the data warehouse. The main advantage of ELT over ETL is the flexibility and ease of storing new, unstructured data: because data is loaded before it is transformed, all of your information, including unstructured information, is available immediately, which saves BI analysts time when dealing with new data.
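To make the distinction concrete, the sketch below follows the ELT pattern: the raw extract is loaded into the warehouse as-is, and the transformation runs afterwards as SQL inside the warehouse engine. The table names, file name, and connection string are illustrative assumptions.

```python
# ELT sketch (illustrative): load raw data first, transform inside the warehouse.
import pandas as pd
from sqlalchemy import create_engine, text

warehouse = create_engine("postgresql://user:password@warehouse-host/dw")  # assumed target

# Extract + Load: land the raw extract in the warehouse untouched.
raw = pd.read_json("crm_contacts_raw.json")  # hypothetical source extract
raw.to_sql("raw_crm_contacts", warehouse, if_exists="append", index=False)

# Transform: reshape the data with SQL inside the warehouse engine, so new
# fields are queryable immediately and transformations can be revised
# without re-extracting from the source.
with warehouse.begin() as conn:
    conn.execute(text("""
        INSERT INTO dim_contact (contact_id, full_name, email)
        SELECT id,
               trim(first_name || ' ' || last_name),
               lower(email)
        FROM raw_crm_contacts
        WHERE email IS NOT NULL
    """))
```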

7. Choose Whether You Want it On-Premise or in the Cloud

Should you deploy your data warehouse on-premises or in the cloud? A data warehouse consolidates business data from on-premises and cloud applications, serving as a single repository to support analytics and decision making. Many organizations are choosing to replace their on-premises data warehouses with cloud-based alternatives. An on-premises data warehouse gives you full control over the tech stack, but you need to purchase, deploy, and maintain all the hardware and software yourself. It can offer better governance and regulatory compliance because all the data is stored in-house.

Cloud-based, modern data warehouses provide on-demand scalability and cost efficiency (no need for hardware, server rooms, extra IT staff, or the associated operational costs), along with bundled capabilities such as identity and access management and analytics. The upfront investment is very low, and the cloud provider takes on much of the responsibility for data security. Cloud data warehouses also tend to offer better system uptime and availability. Handing the maintenance and management of the data warehouse to a vendor frees up valuable time and resources that can be spent on analytics or other strategic initiatives.
