Background
Traditional information system architectures are based on an application-centric mindset: applications were installed, kept relatively static, updated infrequently, and utilized a fixed set of compute, storage, and networking elements to cope with a relatively small set of structured data. This approach functioned well for decades, but over the past decade data growth, particularly unstructured data growth, has put new pressures on organizations, information architectures and data center infrastructure. 90% of new data is unstructured and, according to a 2018 report, 59% of organizations manage over 10 billion files and objects spread over large numbers of servers and storage nodes. Organizations are struggling to cope with exponential data growth while seeking better approaches to extracting insights from that data using services including Big Data analytics and machine learning. However, existing architectures are not built to address service requirements at petabyte scale and beyond without significant performance limits. Traditional architectures fail to fully store, retrieve, move and utilize that data due to limitations of hardware infrastructure as well as application-centric systems design, development, and management.

Data-centric workloads
Data-centric computing aims to address two problems.
# Organizations need to utilize all available data, but traditional applications are not sufficiently agile or flexible to do so. The shift toward constant service innovation, supported by emerging approaches to service delivery (including microservices and containers), opens new possibilities that step away from the traditional application-centric mindset.
# Existing limits of data center hardware also restrict the complete movement, management and utilization of unstructured data sets. Conventional CPUs impede performance because they lack the specialized capabilities needed for storage, networking, and analysis. Slow storage, including hard drives and SAS/SATA solid state drives accessed over the network, can reduce performance and limit data accessibility. New hardware capabilities are needed.

Data-centric computing
Data-centric computing is an approach that merges innovative hardware and software to treat data, not applications, as the permanent source of value. It aims to rethink both hardware and software to extract as much value as possible from existing and new data sources, and it increases agility by prioritizing data transfer and data computation over static application performance and resilience.

Data-centric hardware and software
To meet the goals of data-centric computing, data center hardware infrastructure will evolve to address massive scale, rapid growth, the need for very high performance data movement, and extensive calculation requirements.
* Distributed hardware infrastructures become the norm, with data and services spread across many compute and storage nodes, both in public clouds and on-premises.
* Due to the flattening of