Programming with Big Data in R (pbdR) is a series of R packages and an environment for statistical computing with big data by using high-performance statistical computation. pbdR uses the same programming language as R, with S3/S4 classes and methods, which is used among statisticians and data miners for developing statistical software.
The significant difference between pbdR and R code is that pbdR mainly focuses on distributed memory systems, where data are distributed across several processors and analyzed in batch mode, and where communication between processors is based on MPI, which is easily used in large high-performance computing (HPC) systems. The R system, by contrast, mainly focuses on single multi-core machines for data analysis via an interactive mode such as a GUI.
Two main implementations in R using MPI are Rmpi and pbdMPI of pbdR.
* pbdR, built on pbdMPI, uses SPMD parallelism, where every processor is considered a worker and owns a part of the data. SPMD parallelism, introduced in the mid-1980s, is particularly efficient in homogeneous computing environments for large data, for example, performing singular value decomposition on a large matrix, or performing clustering analysis on high-dimensional large data. On the other hand, there is no restriction against using manager/workers parallelism in an SPMD environment.
* Rmpi uses manager/workers parallelism, where one main processor (the manager) controls all other processors (the workers). Manager/workers parallelism, introduced around the early 2000s, is particularly efficient for large tasks in small clusters, for example, the bootstrap method and Monte Carlo simulation in applied statistics, since the i.i.d. assumption is commonly used in most statistical analyses. In particular, task pull parallelism gives Rmpi better performance in heterogeneous computing environments.
The idea of SPMD parallelism is to let every processor do the same amount of work, but on different parts of a large data set. For example, a modern GPU is a large collection of slower co-processors that can simply apply the same computation on different parts of relatively smaller data, but the SPMD approach ends up being an efficient way to obtain final solutions (i.e. the time to solution is shorter).
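As a rough illustration of this idea, the following sketch uses only pbdMPI functions that also appear in the examples below; the block-partitioning scheme is just one possible choice. Every rank runs the same script, sums its own block of the numbers 1 to 1,000,000, and the partial sums are then combined with allreduce, so every rank ends up with the same total.
### SPMD sketch: every rank runs this same script on its own block of data
library(pbdMPI, quiet = TRUE)
init()
n <- 1000000
size <- comm.size()    # number of ranks
rank <- comm.rank()    # this rank's id, 0-based
### Contiguous block of indices owned by this rank
first <- floor(n * rank / size) + 1
last  <- floor(n * (rank + 1) / size)
partial <- sum(as.double(first:last))
### Combine the partial sums; every rank receives the global total
total <- allreduce(partial, op = "sum")
comm.print(total)      # 500000500000, printed once from rank 0
finalize()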
Package design
Programming with pbdR requires the use of various packages developed by the pbdR core team, listed below. Among these packages, pbdMPI provides wrapper functions to the MPI library, and it also produces a shared library and a configuration file for MPI environments. All other packages rely on this configuration for installation and library loading, which avoids the difficulty of library linking and compiling and allows them to use MPI functions directly.
* pbdMPI --- an efficient interface to MPI, either OpenMPI or MPICH2, with a focus on the Single Program/Multiple Data (SPMD) parallel programming style
* pbdSLAP --- bundles scalable dense linear algebra libraries in double precision for R, based on ScaLAPACK version 2.0.2, which includes several scalable linear algebra packages (namely BLACS, PBLAS, and ScaLAPACK)
* pbdNCDF4 --- interface to parallel Unidata NetCDF4 format data files
* pbdBASE --- low-level ScaLAPACK codes and wrappers
* pbdDMAT --- distributed matrix classes and computational methods, with a focus on linear algebra and statistics
* pbdDEMO --- set of package demonstrations and examples, and a unifying vignette
* pmclust --- parallel model-based clustering using pbdR
* pbdPROF --- profiling package for MPI codes and visualization of parsed stats
* pbdZMQ --- interface to ØMQ
* remoter --- R client with remote R servers
* pbdCS --- pbdR client with remote pbdR servers
* pbdRPC --- remote procedure call
* kazaam --- very tall and skinny distributed matrices
* pbdML --- machine learning toolbox
Among those packages, the pbdDEMO package is a collection of 20+ package demos which offer example uses of the various pbdR packages, and contains a vignette that offers detailed explanations for the demos and provides some mathematical or statistical insight.
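For instance, a demo can be run in batch mode by calling R's demo() function under mpiexec; the demo name below (monte_carlo) is only illustrative, and the available names are listed in the pbdDEMO package and described in its vignette.
mpiexec -np 4 Rscript -e 'demo(monte_carlo, package = "pbdDEMO", ask = FALSE, echo = FALSE)'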
Examples
Example 1
Hello World! Save the following code in a file called "demo.r"
### Initial MPI
library(pbdMPI, quiet = TRUE)
init()
comm.cat("Hello World!\n")
### Finish
finalize()
and use the command
mpiexec -np 2 Rscript demo.r
to execute the code, where Rscript is one of the command-line executable programs shipped with R.
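With two processes the message is typically printed only once, because comm.cat (like comm.print) prints from rank 0 by default. To get one line per process, recent pbdMPI versions accept an all.rank argument; a hedged variant of the printing line is
comm.cat("Hello from rank", comm.rank(), "\n", all.rank = TRUE)
which prints a line from every rank (the output order across ranks is not guaranteed).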
Example 2
The following example, modified from pbdMPI, illustrates the basic syntax of the language of pbdR.
Since pbdR is designed in SPMD style, all the R scripts are stored in files and executed from the command line via mpiexec, mpirun, etc. Save the following code in a file called "demo.r"
### Initial MPI
library(pbdMPI, quiet = TRUE)
init()
.comm.size <- comm.size()
.comm.rank <- comm.rank()
### Set a vector x on all processors with different values
N <- 5
x <- (1:N) + N * .comm.rank
### All reduce x using summation operation
y <- allreduce(as.integer(x), op = "sum")
comm.print(y)
y <- allreduce(as.double(x), op = "sum")
comm.print(y)
### Finish
finalize()
and use the command
mpiexec -np 4 Rscript demo.r
to execute the code, where Rscript is one of the command-line executable programs shipped with R.
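To see why these values come out as they do: with 4 processes, rank r (for r = 0, 1, 2, 3) holds x = (1:5) + 5 * r, so the i-th element of the elementwise sum across ranks is 4 * i + 5 * (0 + 1 + 2 + 3) = 4 * i + 30. Both allreduce calls should therefore print (once, from rank 0; the exact rank label may differ)
y = 34 38 42 46 50
first as an integer vector and then as a double vector.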
Example 3
The following example, modified from pbdDEMO, illustrates the basic ddmatrix computation of pbdR, which performs singular value decomposition on a given matrix.
Save the following code in a file called "demo.r"
# Initialize process grid
library(pbdDMAT, quiet=T)
if(comm.size() != 2)
comm.stop("Exactly 2 processors are required for this demo.")
init.grid()
# Setup for the remainder
comm.set.seed(diff=TRUE)
M <- N <- 16
BL <- 2 # blocking --- passing single value BL assumes BLxBL blocking
dA <- ddmatrix("rnorm", nrow=M, ncol=N, mean=100, sd=10)
# LA SVD
svd1 <- La.svd(dA)
comm.print(svd1$d)
# Finish
finalize()
and use the command
mpiexec -np 2 Rscript demo.r
to execute the code, where Rscript is one of the command-line executable programs shipped with R.
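As a sanity check, the distributed result can be compared against base R's serial SVD. The following lines are only a sketch to be added before finalize() in demo.r; they assume that pbdDMAT provides an as.matrix method for ddmatrix objects that gathers the distributed matrix into an ordinary R matrix (gathering behaviour and argument names may differ between versions).
# Gather the distributed matrix and compare singular values with serial R
A <- as.matrix(dA)
svd2 <- La.svd(A)
comm.print(all.equal(svd1$d, svd2$d))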