Data Warehouse

In computing, a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis and is a core component of business intelligence. Data warehouses are central repositories of data integrated from disparate sources. They store current and historical data organized in a way that is optimized for data analysis, generation of reports, and developing insights across the integrated data. They are intended to be used by analysts and managers to help make organizational decisions. The data stored in the warehouse is uploaded from operational systems (such as marketing or sales). The data may pass through an operational data store and may require data cleansing for additional operations to ensure data quality before it is used in the data warehouse for reporting. The two main workflows for building a data warehouse system are extract, transform, load (ETL) and extract, load, transform (ELT).


Components

The environment for data warehouses and marts includes the following:
* Source systems of data (often the company's operational databases, such as relational databases);
* Data integration technology and processes to extract data from source systems, transform it, and load it into a data mart or warehouse;
* Architectures to store data in the warehouse or marts;
* Tools and applications for varied users;
* Metadata, data quality, and governance processes. Metadata includes data sources (database, table, and column names), refresh schedules, and data usage measures.


Related systems


Operational databases

Operational databases are optimized for the preservation of data integrity and speed of recording of business transactions through use of database normalization and an entity–relationship model. Operational system designers generally follow Codd's 12 rules of database normalization to ensure data integrity. Fully normalized database designs (that is, those satisfying all Codd rules) often result in information from a business transaction being stored in dozens to hundreds of tables. Relational databases are efficient at managing the relationships between these tables. The databases have very fast insert/update performance because only a small amount of data in those tables is affected by each transaction. To improve performance, older data are periodically purged. Data warehouses are optimized for analytic access patterns, which usually involve selecting specific fields rather than all fields as is common in operational databases. Because of these differences in access, operational databases (loosely, OLTP) benefit from the use of a row-oriented database management system (DBMS), whereas analytics databases (loosely, OLAP) benefit from the use of a column-oriented DBMS. Operational systems maintain a snapshot of the business, while warehouses maintain historic data through ETL processes that periodically migrate data from the operational systems to the warehouse.
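
The row-versus-column distinction can be illustrated in a few lines of Python. This is a minimal sketch with invented data, not a real DBMS; it only shows why an analytic aggregate touches less data in a columnar layout.

```python
# Row-oriented layout: each record is stored together -- fast for
# OLTP-style inserts and point lookups of whole records.
rows = [
    {"order_id": 1, "customer": "A", "amount": 120.0},
    {"order_id": 2, "customer": "B", "amount": 75.5},
    {"order_id": 3, "customer": "A", "amount": 220.0},
]

# Column-oriented layout of the same table: each field is stored
# together -- an analytic query such as SUM(amount) scans one
# contiguous column and skips all the others.
columns = {
    "order_id": [1, 2, 3],
    "customer": ["A", "B", "A"],
    "amount":   [120.0, 75.5, 220.0],
}

record = rows[1]                     # OLTP-style: fetch one whole record
total = sum(columns["amount"])       # OLAP-style: aggregate one field
print(record, total)
```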
Online analytical processing (OLAP) is characterized by a low rate of transactions and complex queries that involve aggregations. Response time is an effective performance measure of OLAP systems. OLAP applications are widely used for data mining. OLAP databases store aggregated, historical data in multi-dimensional schemas (usually star schemas). OLAP systems typically have a data latency of a few hours, while data mart latency is closer to one day. The OLAP approach is used to analyze multidimensional data from multiple sources and perspectives. The three basic operations in OLAP are roll-up (consolidation), drill-down, and slicing and dicing, as sketched below.

Online transaction processing (OLTP) is characterized by a large number of short online transactions (INSERT, UPDATE, DELETE). OLTP systems emphasize fast query processing and maintaining data integrity in multi-access environments. For OLTP systems, performance is measured in transactions per second. OLTP databases contain detailed and current data. The schema used to store transactional databases is the entity model (usually 3NF). Normalization is the norm for data modeling techniques in these systems.
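
The three OLAP operations can be illustrated over a small, hypothetical sales table. This is a minimal sketch using pandas, not a full OLAP engine; all column names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical sales facts with two dimensions (region, product) and a
# time dimension (quarter).
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "product": ["Gadget", "Widget", "Gadget", "Widget"],
    "quarter": ["Q1", "Q1", "Q1", "Q2"],
    "units":   [100, 150, 80, 120],
})

# Roll-up (consolidation): aggregate from (region, product) up to region.
rollup = sales.groupby("region")["units"].sum()

# Drill-down: move from region totals back to finer (region, product) detail.
drilldown = sales.groupby(["region", "product"])["units"].sum()

# Slice: fix one member of one dimension (quarter == "Q1") ...
slice_q1 = sales[sales["quarter"] == "Q1"]

# ... and dice: select a subcube over several dimensions at once.
dice = sales[(sales["region"] == "East") & (sales["product"] == "Gadget")]

print(rollup, drilldown, slice_q1, dice, sep="\n\n")
```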
Predictive analytics is about finding and quantifying hidden patterns in the data using complex mathematical models to prepare for different future outcomes, including demand for products, and to make better decisions. By contrast, OLAP focuses on historical data analysis and is reactive. Predictive systems are also used for customer relationship management (CRM).


Data marts

A data mart is a simple data warehouse focused on a single subject or functional area. Hence it draws data from a limited number of sources such as sales, finance or marketing. Data marts are often built and controlled by a single department in an organization. The sources could be internal operational systems, a central data warehouse, or external data. As with warehouses, stored data is usually not normalized. Types of data marts include dependent, independent, and hybrid data marts.


Variants


ETL

The typical extract, transform, load (ETL)-based data warehouse uses staging, data integration, and access layers to house its key functions. The staging layer or staging database stores raw data extracted from each of the disparate source data systems. The integration layer integrates the disparate data sets by transforming the data from the staging layer, often storing this transformed data in an operational data store (ODS) database. The integrated data are then moved to yet another database, often called the data warehouse database, where the data is arranged into hierarchical groups, often called dimensions, and into facts and aggregate facts. The combination of facts and dimensions is sometimes called a star schema. The access layer helps users retrieve data. The main source of the data is cleansed, transformed, catalogued, and made available for use by managers and other business professionals for data mining, online analytical processing, market research, and decision support. However, the means to retrieve and analyze data, to extract, transform, and load data, and to manage the data dictionary are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition of data warehousing includes business intelligence tools, tools to extract, transform, and load data into the repository, and tools to manage and retrieve metadata.
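
The staging–integration–access flow can be sketched in miniature. The following Python sketch is illustrative only: the source data, table name, and column names are hypothetical, and an in-memory SQLite database stands in for the warehouse.

```python
import csv
import io
import sqlite3

# Stand-in for a raw export file from an operational system.
SOURCE = "customer_id,country,amount\n17, se ,99.90\n18,US,12.5\n"

def extract(source):
    """Staging layer: keep raw rows exactly as the source provides them."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(staged):
    """Integration layer: cleanse and conform the staged rows."""
    return [
        {
            "customer_id": int(r["customer_id"]),
            "country": r["country"].strip().upper(),  # consistent codes
            "amount": round(float(r["amount"]), 2),   # consistent precision
        }
        for r in staged
    ]

def load(rows, conn):
    """Write conformed rows into the warehouse fact table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_sales "
        "(customer_id INTEGER, country TEXT, amount REAL)"
    )
    conn.executemany(
        "INSERT INTO fact_sales VALUES (:customer_id, :country, :amount)",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(SOURCE)), conn)
print(conn.execute("SELECT * FROM fact_sales").fetchall())
```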


ELT

ELT-based data warehousing dispenses with a separate ETL tool for data transformation. Instead, it maintains a staging area inside the data warehouse itself. In this approach, data is extracted from heterogeneous source systems and loaded directly into the data warehouse before any transformation occurs. All necessary transformations are then handled inside the data warehouse itself, and the transformed data is finally loaded into target tables in the same warehouse.
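
A minimal ELT sketch follows, with hypothetical table names and in-memory SQLite standing in for the warehouse engine; the point is that the transformation runs as SQL inside the warehouse rather than in an external tool.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Load: land the extracted rows as-is in a staging table, all as text.
conn.execute(
    "CREATE TABLE stg_sales (customer_id TEXT, country TEXT, amount TEXT)"
)
conn.executemany(
    "INSERT INTO stg_sales VALUES (?, ?, ?)",
    [("17", " se ", "99.90"), ("18", "US", "12.5")],
)

# Transform: the warehouse itself cleanses, casts, and conforms the data
# into the target table.
conn.execute(
    "CREATE TABLE fact_sales (customer_id INTEGER, country TEXT, amount REAL)"
)
conn.execute(
    "INSERT INTO fact_sales "
    "SELECT CAST(customer_id AS INTEGER), UPPER(TRIM(country)), "
    "ROUND(CAST(amount AS REAL), 2) FROM stg_sales"
)
conn.commit()
print(conn.execute("SELECT * FROM fact_sales").fetchall())
```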


Benefits

A data warehouse maintains a copy of information from the source transaction systems. This architectural complexity provides the opportunity to:
* Integrate data from multiple sources into a single database and data model. Greater consolidation of data into a single database means a single query engine can be used to present data, as in an operational data store.
* Mitigate the problem of isolation-level lock contention in transaction processing systems caused by long-running analysis queries run against transaction processing databases.
* Maintain data history, even if the source transaction systems do not.
* Integrate data from multiple source systems, enabling a central view across the enterprise. This benefit is always valuable, but particularly so when the organization has grown by merger.
* Improve data quality by providing consistent codes and descriptions, and by flagging or even fixing bad data.
* Present the organization's information consistently.
* Provide a single common data model for all data of interest regardless of data source.
* Restructure the data so that it makes sense to the business users.
* Restructure the data so that it delivers excellent query performance, even for complex analytic queries, without impacting the operational systems.
* Add value to operational business applications, notably customer relationship management (CRM) systems.
* Make decision-support queries easier to write.
* Organize and disambiguate repetitive data.


History

The concept of data warehousing dates back to the late 1980s, when IBM researchers Barry Devlin and Paul Murphy developed the "business data warehouse". In essence, the data warehousing concept was intended to provide an architectural model for the flow of data from operational systems to decision support environments. The concept attempted to address the various problems associated with this flow, mainly its high cost. In the absence of a data warehousing architecture, an enormous amount of redundancy was required to support multiple decision support environments. In larger corporations, it was typical for multiple decision support environments to operate independently. Though each environment served different users, they often required much of the same stored data. The process of gathering, cleaning, and integrating data from various sources, usually from long-existing operational systems (usually referred to as legacy systems), was typically replicated in part for each environment. Moreover, the operational systems were frequently reexamined as new decision support requirements emerged, and new requirements often necessitated gathering, cleaning, and integrating new data from "data marts" tailored for ready access by users.

Additionally, with the publication of ''The IRM Imperative'' (Wiley & Sons, 1991) by James M. Kerr, the idea of managing an organization's data resources, putting a dollar value on them, and reporting that value as an asset on a balance sheet became popular. In the book, Kerr described a way to populate subject-area databases from data derived from transaction-driven systems, creating a storage area where summary data could be further leveraged to inform executive decision-making. This concept served to promote further thinking about how a data warehouse could be developed and managed in a practical way within any enterprise.

Key developments in the early years of data warehousing:
* 1960s – General Mills and Dartmouth College, in a joint research project, develop the terms ''dimensions'' and ''facts''. (Kimball 2013, pg. 15)
* 1970s – ACNielsen and IRI provide dimensional data marts for retail sales.
* 1970s – Bill Inmon begins to define and discuss the term "data warehouse".
* 1975 – Sperry Univac introduces MAPPER (MAintain, Prepare, and Produce Executive Reports), a database management and reporting system that includes the world's first 4GL. It is the first platform designed for building information centers (a forerunner of contemporary data warehouse technology).
* 1983 – Teradata introduces the DBC/1012 database computer, specifically designed for decision support.
* 1984 – Metaphor Computer Systems, founded by David Liddle and Don Massaro, releases a hardware/software package and GUI for business users to create a database management and analytic system.
* 1988 – Barry Devlin and Paul Murphy publish the article "An architecture for a business and information system", in which they introduce the term "business data warehouse".
* 1990 – Red Brick Systems, founded by Ralph Kimball, introduces Red Brick Warehouse, a database management system specifically for data warehousing.
* 1991 – James M. Kerr authors ''The IRM Imperative'', which suggests that data resources could be reported as an asset on a balance sheet, furthering commercial interest in the establishment of data warehouses.
* 1991 – Prism Solutions, founded by Bill Inmon, introduces Prism Warehouse Manager, software for developing a data warehouse.
* 1992 – Bill Inmon publishes the book ''Building the Data Warehouse''.
* 1995 – The Data Warehousing Institute, a for-profit organization that promotes data warehousing, is founded.
* 1996 – Ralph Kimball publishes the book ''The Data Warehouse Toolkit''.
* 1998 – Focal modeling is implemented as an ensemble (hybrid) data warehouse modeling approach, with Patrik Lager as one of the main drivers.
* 2000 – Dan Linstedt releases into the public domain data vault modeling, conceived in 1990 as an alternative to the Inmon and Kimball approaches, to provide long-term historical storage of data coming in from multiple operational systems, with emphasis on tracing, auditing, and resilience to change of the source data model.
* 2008 – Bill Inmon, along with Derek Strauss and Genia Neushloss, publishes ''DW 2.0: The Architecture for the Next Generation of Data Warehousing'', explaining his top-down approach to data warehousing and coining the term "data warehousing 2.0".
* 2008 – Anchor modeling is formalized in a paper presented at the International Conference on Conceptual Modeling and wins the best paper award.
* 2012 – Bill Inmon develops and makes public a technology known as "textual disambiguation", which applies context to raw text and reformats the raw text and context into a standard database format. Once raw text is passed through textual disambiguation, it can easily and efficiently be accessed and analyzed by standard business intelligence technology. Textual disambiguation is accomplished through the execution of textual ETL, and is useful wherever raw text is found, such as in documents, Hadoop, email, and so forth.
* 2013 – Data vault 2.0 is released, with some minor changes to the modeling method as well as integration with best practices from other methodologies, architectures, and implementations, including agile and CMMI principles.


Data organization


Facts

A fact is a value or measurement in the system being managed. Raw facts are ones reported by the reporting entity. For example, in a mobile telephone system, if a base transceiver station (BTS) receives 1,000 requests for traffic channel allocation, allocates 820, and rejects the rest, it could report three facts to a management system:
* 1,000 channel allocation requests received
* 820 requests satisfied (channels allocated)
* 180 requests rejected

Raw facts are aggregated to higher levels in various dimensions to extract information more relevant to the service or business. These are called aggregated facts or summaries. For example, if there are three BTSs in a city, then the facts above can be aggregated to the city level in the network dimension, giving for example:
* total channel allocation requests received across the city's BTSs
* total requests satisfied across the city's BTSs
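
The aggregation of raw facts into a city-level summary can be sketched as follows. Only the first station's figures come from the example above; the counts for the other two stations, and the field names, are invented for illustration.

```python
# Raw facts as reported by three hypothetical base transceiver stations.
raw_facts = [
    {"bts": "bts1", "tch_req_total": 1000, "tch_req_success": 820},
    {"bts": "bts2", "tch_req_total": 1500, "tch_req_success": 1390},
    {"bts": "bts3", "tch_req_total": 800,  "tch_req_success": 750},
]

# Aggregated facts ("summaries") at the city level of the network dimension.
city_summary = {
    "tch_req_total":   sum(f["tch_req_total"] for f in raw_facts),
    "tch_req_success": sum(f["tch_req_success"] for f in raw_facts),
}
print(city_summary)  # {'tch_req_total': 3300, 'tch_req_success': 2960}
```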


Dimensional versus normalized approach for storage of data

The two most important approaches to storing data in a warehouse are dimensional and normalized. The dimensional approach uses a star schema, as proposed by Ralph Kimball. The normalized approach, also called the third normal form (3NF) approach, is an entity-relationship normalized model, as proposed by Bill Inmon.


Dimensional approach

In a dimensional approach, transaction data is partitioned into "facts", which are usually numeric transaction data, and "dimensions", which are the reference information that gives context to the facts. For example, a sales transaction can be broken up into facts such as the number of products ordered and the total price paid for the products, and into dimensions such as order date, customer name, product number, order ship-to and bill-to locations, and the salesperson responsible for receiving the order.

The dimensional approach makes data easier to understand and speeds up data retrieval. Dimensional structures are easy for business users to understand because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization's business processes and operational system, and dimensions are the context about them (Kimball, Ralph 2008). Another advantage is that the dimensional model does not have to be implemented in a relational database every time; this type of modeling technique is thus very useful for end-user queries in a data warehouse. The model of facts and dimensions can also be understood as a data cube, where the dimensions are the categorical coordinates in a multi-dimensional cube and the fact is the value corresponding to the coordinates.

The main disadvantages of the dimensional approach are:
# It is complicated to maintain the integrity of facts and dimensions when loading the data warehouse with data from different operational systems.
# It is difficult to modify the warehouse structure if the organization changes the way it does business.
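
A minimal star schema along these lines can be sketched with SQLite; all table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key    INTEGER PRIMARY KEY,
    full_date   TEXT,
    quarter     TEXT
);
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    name        TEXT,
    category    TEXT
);
-- The fact table holds numeric measures plus foreign keys to dimensions.
CREATE TABLE fact_sales (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    units       INTEGER,
    revenue     REAL
);
""")

# A typical analytic query: join facts to their dimensions, then aggregate.
query = """
SELECT d.quarter, p.category, SUM(f.revenue)
FROM fact_sales f
JOIN dim_date d    ON f.date_key = d.date_key
JOIN dim_product p ON f.product_key = p.product_key
GROUP BY d.quarter, p.category
"""
print(conn.execute(query).fetchall())  # empty until the tables are loaded
```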


Normalized approach

In the normalized approach, the data in the warehouse are stored following, to a degree, database normalization rules. Normalized relational database tables are grouped into ''subject areas'' (for example, customers, products, and finance). When used in large enterprises, the result is dozens of tables linked by a web of joins (Kimball, Ralph 2008). The main advantage of this approach is that it is straightforward to add information to the database. Disadvantages include that, because of the large number of tables, it can be difficult for users to join data from different sources into meaningful information and to access the information without a precise understanding of the data sources and of the data structure of the data warehouse.

Both normalized and dimensional models can be represented in entity–relationship diagrams, because both contain joined relational tables. The difference between them is the degree of normalization. These approaches are not mutually exclusive, and there are other approaches. Dimensional approaches can involve normalizing data to a degree (Kimball, Ralph 2008).

In ''Information-Driven Business'', Robert Hillard compares the two approaches based on the information needs of the business problem. He concludes that normalized models hold far more information than their dimensional equivalents (even when the same fields are used in both models), but at the cost of usability. The technique measures information quantity in terms of information entropy and usability in terms of the Small Worlds data transformation measure.
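
For contrast with the star schema sketched earlier, a minimal normalized (3NF-style) layout of similar sales data might look like the following; the names are hypothetical, and answering even a simple business question takes a multi-table join, illustrating the usability cost Hillard describes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT
);
CREATE TABLE product (
    product_id  INTEGER PRIMARY KEY,
    name        TEXT
);
CREATE TABLE sales_order (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(customer_id),
    order_date  TEXT
);
CREATE TABLE order_line (
    order_id    INTEGER REFERENCES sales_order(order_id),
    product_id  INTEGER REFERENCES product(product_id),
    units       INTEGER,
    price       REAL
);
""")

# "Revenue per customer" now requires a three-table join.
query = """
SELECT c.name, SUM(l.units * l.price)
FROM order_line l
JOIN sales_order o ON l.order_id = o.order_id
JOIN customer c    ON o.customer_id = c.customer_id
GROUP BY c.name
"""
print(conn.execute(query).fetchall())
```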


Design methods


Bottom-up design

In the ''bottom-up'' approach, data marts are first created to provide reporting and analytical capabilities for specific business processes. These data marts can then be integrated to create a comprehensive data warehouse. The data warehouse bus architecture is primarily an implementation of "the bus", a collection of conformed dimensions and conformed facts, which are dimensions that are shared (in a specific way) between facts in two or more data marts.
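
A conformed dimension can be sketched as follows: two hypothetical marts share one date dimension, so their measures line up along it. All names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, quarter TEXT);

-- Sales mart fact table.
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    revenue  REAL
);

-- Shipments mart fact table, conformed to the same date dimension.
CREATE TABLE fact_shipments (
    date_key INTEGER REFERENCES dim_date(date_key),
    parcels  INTEGER
);
""")

# Because both marts conform to dim_date, their results can be compared
# on the shared quarter attribute.
sales_by_q = conn.execute(
    "SELECT d.quarter, SUM(f.revenue) FROM fact_sales f "
    "JOIN dim_date d ON f.date_key = d.date_key GROUP BY d.quarter"
).fetchall()
shipments_by_q = conn.execute(
    "SELECT d.quarter, SUM(f.parcels) FROM fact_shipments f "
    "JOIN dim_date d ON f.date_key = d.date_key GROUP BY d.quarter"
).fetchall()
print(sales_by_q, shipments_by_q)
```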


Top-down design

The ''top-down'' approach is designed using a normalized enterprise data model. "Atomic" data, that is, data at the greatest level of detail, are stored in the data warehouse. Dimensional data marts containing data needed for specific business processes or specific departments are created from the data warehouse (Gartner, ''Of Data Warehouses, Operational Data Stores, Data Marts and Data Outhouses'', Dec 2005).


Hybrid design

Data warehouses often resemble a hub-and-spokes architecture. Legacy systems feeding the warehouse often include customer relationship management and enterprise resource planning systems, which generate large amounts of data. To consolidate these various data models and facilitate the extract-transform-load process, data warehouses often make use of an operational data store, the information from which is parsed into the actual data warehouse. To reduce data redundancy, larger systems often store the data in a normalized way. Data marts for specific reports can then be built on top of the data warehouse.

A hybrid (also called ensemble) data warehouse database is kept in third normal form to eliminate data redundancy. A normal relational database, however, is not efficient for business intelligence reports where dimensional modelling is prevalent. Small data marts can shop for data from the consolidated warehouse and use the filtered, specific data for the fact tables and dimensions required. The data warehouse provides a single source of information from which the data marts can read, providing a wide range of business information. The hybrid architecture allows a data warehouse to be replaced with a master data management repository where operational (not static) information could reside.

The data vault modeling components follow a hub-and-spokes architecture. This modeling style is a hybrid design, consisting of best practices from both third normal form and star schema. The data vault model is not a true third normal form and breaks some of its rules, but it is a top-down architecture with a bottom-up design. The data vault model is geared to be strictly a data warehouse; it is not geared to be end-user accessible and, when built, still requires the use of a data mart or star schema-based release area for business purposes.
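
A minimal data vault sketch (hypothetical names) shows the hub/satellite split and the load metadata that supports tracing and auditing; a full model would add link tables relating hubs to one another.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hub_customer (
    customer_hk  TEXT PRIMARY KEY,  -- hash of the business key
    customer_bk  TEXT,              -- business key from the source
    load_date    TEXT,              -- when the key first arrived
    record_src   TEXT               -- which source system supplied it
);
CREATE TABLE sat_customer (
    customer_hk  TEXT REFERENCES hub_customer(customer_hk),
    name         TEXT,
    country      TEXT,
    load_date    TEXT,              -- history: one row per change
    record_src   TEXT               -- traceability to the source system
);
""")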


Characteristics

Basic features that define the data in a data warehouse include subject orientation, data integration, time variance, nonvolatility, and data granularity.


Subject-oriented

Unlike the operational systems, the data in the data warehouse revolves around the subjects of the enterprise. Subject orientation is not database normalization; rather, data is gathered around the subjects required for decision-making, which is what makes the warehouse subject-oriented.


Integrated

The data found within the data warehouse is integrated. Since it comes from several operational systems, all inconsistencies must be removed: naming conventions, measurement of variables, encoding structures, physical attributes of data, and so forth must be made consistent.


Time-variant

While operational systems reflect current values as they support day-to-day operations, data warehouse data represents a long time horizon (up to ten years), which means it stores mostly historical data. It is mainly meant for data mining and forecasting. For example, a user searching for the buying pattern of a specific customer needs to look at data on both current and past purchases.


Nonvolatile

The data in the data warehouse is read-only: once committed, it is not updated or deleted (unless there is a regulatory or statutory obligation to do so).


Options


Aggregation

In the data warehouse process, data can be aggregated in data marts at different levels of abstraction. The user may start looking at the total sale units of a product in an entire region. Then the user looks at the states in that region. Finally, they may examine the individual stores in a certain state. Therefore, typically, the analysis starts at a higher level and drills down to lower levels of details.
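
This drill-down path can be sketched with pandas over invented data, moving from region totals down through states to individual stores; all values are hypothetical.

```python
import pandas as pd

sales = pd.DataFrame({
    "region": ["West", "West", "West", "East"],
    "state":  ["CA",   "CA",   "WA",   "NY"],
    "store":  ["S1",   "S2",   "S3",   "S4"],
    "units":  [500,    300,    200,    400],
})

by_region = sales.groupby("region")["units"].sum()             # start high
by_state  = sales.groupby(["region", "state"])["units"].sum()  # drill down
by_store  = sales.groupby(["region", "state", "store"])["units"].sum()
print(by_region, by_state, by_store, sep="\n\n")
```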


Virtualization

With data virtualization, the data used remains in its original locations and real-time access is established to allow analytics across multiple sources creating a virtual data warehouse. This can aid in resolving some technical difficulties such as compatibility problems when combining data from various platforms, lowering the risk of error caused by faulty data, and guaranteeing that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. However, with data virtualization, the connection to all necessary data sources must be operational as there is no local copy of the data, which is one of the main drawbacks of the approach.


Architecture

Organizations can construct and organize data warehouses in numerous different ways. The hardware utilized, the software created, and the data resources specifically required for the correct functionality of a data warehouse are the main components of the data warehouse architecture. All data warehouses pass through multiple phases in which the requirements of the organization are modified and fine-tuned.


Evolution in organization use

These terms refer to the level of sophistication of a data warehouse:
; Offline operational data warehouse: Data warehouses in this stage of evolution are updated on a regular time cycle (usually daily, weekly, or monthly) from the operational systems, and the data is stored in an integrated reporting-oriented database.
; Offline data warehouse: Data warehouses at this stage are updated from data in the operational systems on a regular basis, and the data warehouse data are stored in a data structure designed to facilitate reporting.
; On-time data warehouse: Also known as online integrated data warehousing, this stage represents real-time data warehousing: data in the warehouse is updated for every transaction performed on the source data.
; Integrated data warehouse: These data warehouses assemble data from different areas of business, so users can look up the information they need across other systems.


In healthcare

In the healthcare sector, data warehouses are critical components of health informatics, enabling the integration, storage, and analysis of large volumes of clinical, administrative, and operational data. These systems consolidate information from disparate sources such as electronic health records (EHRs), laboratory information systems, picture archiving and communication systems (PACS), and medical billing platforms. By centralizing data, healthcare data warehouses support a range of functions including population health, clinical decision support, quality improvement, public health surveillance, and medical research.

Healthcare data warehouses often incorporate specialized data models that account for the complexity and sensitivity of medical data, such as temporal information (e.g., longitudinal patient histories), coded terminologies (e.g., ICD-10, SNOMED CT), and compliance with privacy regulations (e.g., HIPAA in the United States or GDPR in the European Union). Major patient data warehouses with broad scope (not disease- or specialty-specific) carry variables including laboratory results, pharmacy data, age, race, socioeconomic status, comorbidities, and longitudinal changes. These warehouses enable data-driven healthcare by supporting retrospective studies, comparative effectiveness research, and predictive analytics, often with the use of healthcare-applied artificial intelligence.


See also

* List of business intelligence software




Further reading

* Davenport, Thomas H. and Harris, Jeanne G. ''Competing on Analytics: The New Science of Winning'' (2007) Harvard Business School Press.
* Ganczarski, Joe. ''Data Warehouse Implementations: Critical Implementation Factors Study'' (2009) VDM Verlag.
* Kimball, Ralph and Ross, Margy. ''The Data Warehouse Toolkit'' Third Edition (2013) Wiley.
* Linstedt, Graziano, Hultgren. ''The Business of Data Vault Modeling'' Second Edition (2010) Dan Linstedt.
* William Inmon. ''Building the Data Warehouse'' (2005) John Wiley and Sons.