Excavating Big Data associated to Indian Elections Scenario via Apache Hadoop


Dr. Gagandeep Jagdev
Amandeep Kaur

Abstract

Data is not a new term in the field of computer science, but Big Data is essentially a new one. When data grows beyond the capacity of existing database tools, it begins to be referred to as Big Data. Big Data poses a grand challenge for both data analytics and databases. Between 2013 and 2015 alone, humans generated 90 percent of all the data that has ever existed on planet Earth. Enormous technological advances in social networks, the retail industry, the health sector, engineering disciplines, wireless sensors, the stock market, and the public and private sectors have collectively amassed enormous data. This data is very large in volume; it is created at very high speed; it may be structured, semi-structured, or unstructured and may arrive in text, audio, or video format; and, most importantly, it is not entirely precise and can be messy or misleading. The central theme of our research work is handling the huge amount of data associated with the different forms of elections contested in India. The framework used in this research work is Apache Hadoop. The Apache Hadoop framework makes use of Map-Reduce technology, which operates in three steps: mapping, shuffling, and reduction. Map-Reduce is the same technique that Facebook uses to power its "People You May Know" feature. The research paper also discusses the working of Map-Reduce technology with suitable examples.
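
To make the three Map-Reduce phases concrete, the following is a minimal sketch (not the code used in the paper) of a Hadoop job that tallies votes per party from hypothetical comma-separated election records of the form "constituency,candidate,party,votes"; the class names and input layout are assumed for illustration only.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PartyVoteCount {

    // Map phase: emit a (party, votes) pair for every input record.
    public static class VoteMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");
            if (fields.length == 4) {
                String party = fields[2].trim();
                long votes = Long.parseLong(fields[3].trim());
                context.write(new Text(party), new LongWritable(votes));
            }
        }
    }

    // Shuffle phase: Hadoop itself groups the mapper output by party key
    // before handing it to the reducers.

    // Reduce phase: sum the grouped vote counts for each party.
    public static class VoteReducer
            extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text party, Iterable<LongWritable> counts,
                              Context context)
                throws IOException, InterruptedException {
            long total = 0;
            for (LongWritable c : counts) {
                total += c.get();
            }
            context.write(party, new LongWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "party vote count");
        job.setJarByClass(PartyVoteCount.class);
        job.setMapperClass(VoteMapper.class);
        job.setReducerClass(VoteReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The same map-shuffle-reduce pattern underlies the "People You May Know" style computation mentioned above, with friend pairs taking the place of party vote counts.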


Keywords - Big Data, Big Data analytics, elections, Hadoop
framework, Map-Reduce.


Article Details

Section: Articles