Data cleansing

Data cleansing or data cleaning is the process of identifying and correcting (or removing) corrupt, inaccurate, or irrelevant records from a dataset, table, or database. It involves detecting incomplete, incorrect, or inaccurate parts of the data and then replacing, modifying, or deleting the affected data. Data cleansing can be performed interactively using data wrangling tools, or through batch processing, often via scripts or a data quality firewall.

After cleansing, a data set should be consistent with other similar data sets in the system. The inconsistencies detected or removed may have been originally caused by user entry errors, by corruption in transmission or storage, or by different data dictionary definitions of similar entities in different stores. Data cleansing differs from data validation in that validation almost invariably means data is rejected from the system at entry, and is performed at the time of entry rather than on batches of data.

The actual process of data cleansing may involve removing typographical errors or validating and correcting values against a known list of entities. The validation may be strict (such as rejecting any address that does not have a valid postal code), or fuzzy (such as using approximate string matching to correct records that partially match existing, known records). Some data cleansing solutions clean data by cross-checking it against a validated data set. A common data cleansing practice is data enhancement, where data is made more complete by adding related information; for example, appending to each address any phone numbers associated with it. Data cleansing may also involve harmonization (or normalization) of data, which is the process of bringing together data of "varying file formats, naming conventions, and columns" and transforming it into one cohesive data set; a simple example is the expansion of abbreviations ("st, rd, etc." to "street, road, etcetera").
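
As a rough illustration of such harmonization, here is a minimal Python sketch that expands a few street-type abbreviations. The mapping and function name are invented for the example; real harmonization relies on curated, locale-aware reference data:

    # Hypothetical abbreviation map, truncated for brevity.
    ABBREVIATIONS = {"st": "street", "rd": "road", "ave": "avenue"}

    def harmonize_address(address):
        """Expand known abbreviations, token by token."""
        tokens = address.lower().split()
        return " ".join(ABBREVIATIONS.get(t.rstrip("."), t) for t in tokens)

    print(harmonize_address("12 Main St."))  # prints: 12 main street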


Motivation

Administratively incorrect, inconsistent data can lead to false conclusions and misdirect investments on both public and private scales. For instance, the government may want to analyze population census figures to decide which regions require further spending and investment on infrastructure and services. In this case, it is important to have access to reliable data to avoid erroneous fiscal decisions. In the business world, incorrect data can be costly. Many companies use customer information databases that record data like contact information, addresses, and preferences. For instance, if the addresses are inconsistent, the company will suffer the cost of resending mail or even of losing customers.


Data quality

High-quality data needs to pass a set of quality criteria, including the following:

* Validity: The degree to which the measures conform to defined business rules or constraints ''(see also Validity (statistics))''. When modern database technology is used to design data-capture systems, validity is fairly easy to ensure: invalid data arises mainly in legacy contexts (where constraints were not implemented in software) or where inappropriate data-capture technology was used (e.g., spreadsheets, where it is very hard to limit what a user chooses to enter into a cell, if cell validation is not used). Data constraints fall into the following categories (a code sketch of several of these checks follows the criteria list below):
** ''Data-type constraints'': values in a particular column must be of a particular data type, e.g., Boolean, numeric (integer or real), or date.
** ''Range constraints'': typically, numbers or dates should fall within a certain range; that is, they have minimum and/or maximum permissible values.
** ''Mandatory constraints'': certain columns must not be empty.
** ''Unique constraints'': a field, or a combination of fields, must be unique across a dataset. For example, no two persons may have the same social security number (or other unique identifier).
** ''Set-membership constraints'': the values for a column come from a set of discrete values or codes. For example, a person's sex may be Female, Male, or Non-Binary.
** ''Foreign-key constraints'': the more general case of set membership, in which the set of permissible values for a column is defined in a column of another table that contains unique values. For example, in a US taxpayer database, the "state" column is required to belong to one of the US's defined states or territories; the set of permissible states and territories is recorded in a separate table. The term foreign key is borrowed from relational database terminology.
** ''Regular expression patterns'': occasionally, text fields must be validated this way. For example, North American phone numbers may be required to have the pattern 999-999-9999.
** ''Cross-field validation'': certain conditions that involve multiple fields must hold. For example, in laboratory medicine, the sum of the components of the differential white blood cell count must equal 100 (since they are all percentages); in a hospital database, a patient's date of discharge cannot be earlier than the date of admission.
* Accuracy: The degree of conformity of a measure to a standard or a true value ''(see also Accuracy and precision)''. Accuracy is very hard to achieve through data cleansing in the general case because it requires accessing an external source of data that contains the true value: such "gold standard" data is often unavailable. Accuracy has been achieved in some cleansing contexts, notably customer contact data, by using external databases that match zip codes to geographical locations (city and state) and also help verify that street addresses within those zip codes actually exist.
* Completeness: The degree to which all required measures are known. Incompleteness is almost impossible to fix with data-cleansing methodology: one cannot infer facts that were not captured when the data in question was initially recorded. In some contexts, e.g., interview data, it may be possible to fix incompleteness by going back to the original source of the data, i.e., re-interviewing the subject, but even this does not guarantee success because of problems of recall; in an interview to gather data on food consumption, for example, no one is likely to remember exactly what they ate six months ago. In the case of systems that insist certain columns should not be empty, one may work around the problem by designating a value that indicates "unknown" or "missing", but supplying default values does not imply that the data has been made complete.
* Consistency: The degree to which a set of measures are equivalent across systems ''(see also Consistency)''. Inconsistency occurs when two data items in the data set contradict each other: e.g., a customer is recorded in two different systems as having two different current addresses, and only one of them can be correct. Fixing inconsistency is not always possible; it requires a variety of strategies, e.g., deciding which data were recorded more recently, which data source is likely to be most reliable (the latter knowledge may be specific to a given organization), or simply trying to find the truth by testing both data items (e.g., calling up the customer).
* Uniformity: The degree to which a set of data measures are specified using the same units of measure in all systems ''(see also Unit of measurement)''. In datasets pooled from different locales, weight may be recorded either in pounds or in kilograms and must be converted to a single measure using an arithmetic transformation.

The term integrity encompasses accuracy, consistency, and some aspects of validation ''(see also Data integrity)'' but is rarely used by itself in data-cleansing contexts because it is insufficiently specific. (For example, "referential integrity" is a term used to refer to the enforcement of the foreign-key constraints above.)
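
The constraint categories above translate directly into executable checks. The following minimal Python sketch illustrates several of them; the field names, bounds, and reference data are all invented for the example, and a production system would draw its rules and complete lookup tables from configuration:

    import re
    from datetime import date

    # Hypothetical reference data, truncated for brevity.
    US_STATES = {"AL", "AK", "AZ", "CA", "NY", "TX"}
    PHONE_PATTERN = re.compile(r"^\d{3}-\d{3}-\d{4}$")

    def validate(record):
        """Return a list of constraint violations for one record (a dict)."""
        errors = []
        # Mandatory constraint: the name column must not be empty.
        if not record.get("name"):
            errors.append("name: mandatory field is empty")
        # Range constraint: age must fall within permissible bounds.
        if not 0 <= record.get("age", -1) <= 120:
            errors.append("age: outside range 0-120")
        # Set-membership / foreign-key-style constraint on the state code.
        if record.get("state") not in US_STATES:
            errors.append("state: not a known state or territory")
        # Regular-expression pattern for North American phone numbers.
        if not PHONE_PATTERN.match(record.get("phone", "")):
            errors.append("phone: does not match 999-999-9999")
        # Cross-field validation: discharge cannot precede admission.
        admitted, discharged = record.get("admitted"), record.get("discharged")
        if admitted and discharged and discharged < admitted:
            errors.append("discharged: earlier than admission date")
        return errors

    record = {"name": "Ada", "age": 36, "state": "CA", "phone": "555-867-5309",
              "admitted": date(2024, 1, 2), "discharged": date(2024, 1, 5)}
    print(validate(record))  # [] means every check passed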


Process

* Data auditing: The data is audited with the use of statistical and database methods to detect anomalies and contradictions; this eventually gives an indication of the characteristics of the anomalies and their locations. Several commercial software packages let the user specify constraints of various kinds (using a grammar that conforms to that of a standard programming language, e.g., JavaScript or Visual Basic) and then generate code that checks the data for violations of these constraints. This process is referred to below in the bullets "workflow specification" and "workflow execution". For users who lack access to high-end cleansing software, microcomputer database packages such as Microsoft Access or FileMaker Pro also allow such checks to be performed interactively, on a constraint-by-constraint basis, in many cases with little or no programming required.
* Workflow specification: The detection and removal of anomalies are performed by a sequence of operations on the data known as the workflow. It is specified after the process of auditing the data and is crucial in achieving the end product of high-quality data. In order to achieve a proper workflow, the causes of the anomalies and errors in the data have to be closely considered.
* Workflow execution: In this stage, the workflow is executed after its specification is complete and its correctness is verified. The implementation of the workflow should be efficient, even on large sets of data, which inevitably poses a trade-off because the execution of a data-cleansing operation can be computationally expensive.
* Post-processing and controlling: After executing the cleansing workflow, the results are inspected to verify correctness. Data that could not be corrected during the execution of the workflow is manually corrected, if possible. The result is a new cycle in the data-cleansing process, in which the data is audited again to allow the specification of an additional workflow that further cleanses the data by automatic processing.

Good-quality source data has to do with a "data quality culture" and must be initiated at the top of the organization. It is not just a matter of implementing strong validation checks on input screens, because no matter how strong these checks are, they can often still be circumvented by the users. There is a nine-step guide for organizations that wish to improve data quality (Olson 2002):
* Declare a high-level commitment to a data quality culture
* Drive process reengineering at the executive level
* Spend money to improve the data-entry environment
* Spend money to improve application integration
* Spend money to change how processes work
* Promote end-to-end team awareness
* Promote interdepartmental cooperation
* Publicly celebrate data quality excellence
* Continuously measure and improve data quality

Other methods include:
* Parsing: for the detection of syntax errors. A parser decides whether a string of data is acceptable within the allowed data specification, much as a parser works with grammars and languages.
* Data transformation: mapping the data from its given format into the format expected by the appropriate application. This includes value conversions or translation functions, as well as normalizing numeric values to conform to minimum and maximum values.
* Duplicate elimination: Duplicate detection requires an algorithm for determining whether data contains duplicate representations of the same entity. Usually, data is sorted by a key that brings duplicate entries closer together, for faster identification (this step and the statistical step below are sketched in code after this list).
* Statistical methods: By analyzing the data using values such as the mean, standard deviation, and range, or using clustering algorithms, an expert can find values that are unexpected and thus erroneous. Although the correction of such data is difficult, since the true value is not known, the problem can be resolved by setting the values to an average or other statistical value. Statistical methods can also be used to handle missing values, which can be replaced by one or more plausible values, usually obtained by extensive data augmentation algorithms.
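
As a rough sketch of the duplicate-elimination and statistical steps, the following Python fragment sorts records on a key so duplicate candidates become adjacent, and flags numeric values that lie far from the mean. The field names, sample data, and z-score threshold are invented for illustration; real systems use far more sophisticated matching:

    from statistics import mean, stdev

    def find_duplicates(records, key):
        """Sort by a key so duplicate candidates become adjacent,
        then report consecutive records sharing the same key."""
        ordered = sorted(records, key=key)
        return [(a, b) for a, b in zip(ordered, ordered[1:]) if key(a) == key(b)]

    def flag_outliers(values, threshold=2.0):
        """Flag values whose z-score exceeds the threshold; suitable
        thresholds depend on the dataset and sample size."""
        m, s = mean(values), stdev(values)
        return [v for v in values if s and abs(v - m) / s > threshold]

    people = [{"name": "J. Smith", "zip": "02139"},
              {"name": "John Smith", "zip": "02139"},  # likely the same entity
              {"name": "A. Jones", "zip": "90210"}]
    print(find_duplicates(people, key=lambda r: r["zip"]))

    weights = [70, 69, 71, 70, 68, 72, 70, 69, 71, 70, 700]  # 700 is suspect
    print(flag_outliers(weights))  # [700], a probable unit error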


System

The essential job of this system is to find a suitable balance between fixing dirty data and keeping the data as close as possible to the original data from the source production system. This is a challenge for the extract, transform, load (ETL) architect. The system should offer an architecture that can cleanse data, record quality events, and measure/control the quality of data in the data warehouse. A good start is to perform a thorough data profiling analysis that will help define the required complexity of the data-cleansing system and also give an idea of the current data quality in the source system(s).
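
A minimal sketch of what such a profiling pass might compute, assuming tabular records held as Python dictionaries (the column name and sample rows are invented for the example):

    from collections import Counter

    def profile(records, column):
        """Summarize one column: row count, missing values,
        distinct values, and the most frequent entries."""
        values = [r.get(column) for r in records]
        present = [v for v in values if v not in (None, "")]
        return {"rows": len(values),
                "missing": len(values) - len(present),
                "distinct": len(set(present)),
                "top": Counter(present).most_common(3)}

    rows = [{"city": "Boston"}, {"city": "boston"}, {"city": ""}]
    # The case mismatch ("Boston" vs "boston") is exactly the kind of
    # issue profiling surfaces before cleansing rules are designed.
    print(profile(rows, "city"))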


Quality screens

Part of the data cleansing system is a set of diagnostic filters known as quality screens. Each implements a test in the data flow that, if it fails, records an error in the error event schema. Quality screens are divided into three categories:
* Column screens: test an individual column, e.g., for unexpected values like NULL values, non-numeric values that should be numeric, or out-of-range values.
* Structure screens: test the integrity of the relationships between columns (typically foreign/primary keys) in the same or different tables, and test that a group of columns is valid according to some structural definition to which it should adhere.
* Business rule screens: the most complex of the three tests. They check whether data, possibly across multiple tables, follow specific business rules. An example could be that if a customer is marked as a certain type of customer, the business rules that define this kind of customer should be adhered to.

When a quality screen records an error, it can either stop the dataflow process, send the faulty data somewhere other than the target system, or tag the data (as sketched below). The last option is considered the best solution, because the first requires that someone manually deal with the issue each time it occurs, and the second implies that data are missing from the target system (compromising its integrity) and it is often unclear what should happen to these data.
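
One way the tagging option might look in practice is shown in the following minimal Python sketch; the error-event structure and names are invented stand-ins for the error event schema described below:

    from datetime import datetime, timezone

    error_events = []  # stand-in for the error event fact table

    def column_screen(record, column, test, screen_name):
        """Run one column test; on failure, tag the record and log an
        error event rather than halting the data flow."""
        if not test(record.get(column)):
            error_events.append({"screen": screen_name,
                                 "column": column,
                                 "record": record,
                                 "occurred_at": datetime.now(timezone.utc)})
            record.setdefault("quality_tags", []).append(screen_name)
        return record  # the record continues downstream either way

    row = {"order_id": 17, "quantity": -2}
    column_screen(row, "quantity", lambda v: v is not None and v >= 0,
                  "quantity_non_negative")
    print(row["quality_tags"])  # ['quantity_non_negative']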


Criticism of existing tools and processes

Most data-cleansing tools have limitations in usability:
* Project costs: typically in the hundreds of thousands of dollars
* Time: mastering large-scale data-cleansing software is time-consuming
* Security: cross-validation requires sharing information, giving an application access across systems, including sensitive legacy systems


Error event schema

The error event schema holds records of all error events thrown by the quality screens. It consists of an error event fact table with foreign keys to three dimension tables that represent the date (when), the batch job (where), and the screen (which test produced the error). It also holds information about exactly when the error occurred and the severity of the error. In addition, there is an error event detail fact table, with a foreign key to the main table, that contains detailed information about the table, record, and field in which the error occurred, as well as the error condition.
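
A compact illustration of such a schema, sketched here with SQLite from Python; the table and column names are invented for the example, and real designs vary:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE date_dim   (date_key   INTEGER PRIMARY KEY, full_date TEXT);
    CREATE TABLE batch_dim  (batch_key  INTEGER PRIMARY KEY, job_name TEXT);
    CREATE TABLE screen_dim (screen_key INTEGER PRIMARY KEY, screen_name TEXT);

    -- One row per error event thrown by a quality screen.
    CREATE TABLE error_event (
        event_key   INTEGER PRIMARY KEY,
        date_key    INTEGER REFERENCES date_dim(date_key),     -- when
        batch_key   INTEGER REFERENCES batch_dim(batch_key),   -- where
        screen_key  INTEGER REFERENCES screen_dim(screen_key), -- which screen
        occurred_at TEXT,
        severity    INTEGER
    );

    -- One row per affected table/record/field, linked to the main event.
    CREATE TABLE error_event_detail (
        event_key       INTEGER REFERENCES error_event(event_key),
        table_name      TEXT,
        record_id       TEXT,
        field_name      TEXT,
        error_condition TEXT
    );
    """)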


See also

* Data editing
* Data management
* Data mining
* Database repair
* Iterative proportional fitting
* Record linkage
* Single customer view
* Triangulation (social science)


References

* Olson, J. E. ''Data Quality: The Accuracy Dimension''. Morgan Kaufmann, 2002.



External links


* ''Computerworld: Data Scrubbing'' (February 10, 2003)
* Erhard Rahm, Hong Hai Do: ''Data Cleaning: Problems and Current Approaches''
* ''Data cleansing''. Datamanagement.wiki.