MapReduce with Hadoop for Simplified Analysis of Big Data
Abstract
With the growth of web-based applications and mobile computing, data, along with the computation and analysis it requires, has grown rapidly in recent years. Fields around the globe face the problem of handling this large-scale data, which strongly supports decision making. Traditional relational DBMSs are unable to handle such Big Data, and most classical data mining methods are likewise unsuitable; efficient algorithms are required to process it. Among the many parallel approaches, MapReduce has been adopted by large IT companies such as Google, Yahoo, and Facebook. In the Big Data world, MapReduce plays a vital role in meeting the increasing demands on computing resources caused by voluminous data sets. MapReduce is a popular programming model suited to Big Data analysis in distributed and parallel computing, and its high scalability is one of the main reasons for adopting it. Hadoop is an open-source, distributed programming framework that enables the storage and processing of large data sets [1]. In this paper we focus on MapReduce with Hadoop for the analytical processing of Big Data.
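As a brief illustration of the programming model discussed in the paper (a minimal sketch, not code taken from the paper itself), the classic word-count job below uses the standard Hadoop MapReduce Java API: the map phase emits a count of 1 for every word, and the reduce phase sums the counts per word. The class names and the assumption that input and output HDFS paths are passed as command-line arguments are illustrative conventions, not requirements.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in an input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum all counts emitted for each distinct word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // assumed HDFS input path
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // assumed HDFS output path
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The combiner step is optional; it simply runs the reducer logic on each mapper's local output to cut down the data shuffled across the cluster, which is one source of the scalability the abstract refers to.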
Keywords
Big Data, Hadoop, MapReduce, Big Data Analytics.
Full Text: PDF
DOI: https://doi.org/10.26483/ijarcs.v8i5.3458
Copyright (c) 2017 International Journal of Advanced Research in Computer Science

