Big Data Hadoop: Aggregation Techniques
International Journal of Science and Research (IJSR)


ISSN: 2319-7064


Downloads: 132 | Views: 331

Survey Paper | Computer Science & Engineering | India | Volume 4 Issue 12, December 2015 | Popularity: 6.8 / 10


     


Vidya Pol


Abstract: The term Big Data refers to data sets whose size (volume), complexity (variability), and rate of growth (velocity) make them difficult to capture, manage, process, or analyze. Hadoop can be used to analyze this enormous amount of data; however, processing is often time-consuming. One way to decrease response time is to execute the job partially, so that an approximate, early result becomes available to the user before the job completes. The technique will be implemented on top of Hadoop and will sample HDFS blocks uniformly. We will evaluate this technique using real-world datasets and applications and demonstrate the system's performance in terms of accuracy and response time. The objective of the proposed technique is to significantly improve the performance of Hadoop MapReduce for efficient Big Data processing.
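The sampling-and-scaling idea behind such early, approximate answers can be illustrated with a short sketch. The following is a minimal, self-contained Python simulation, not the paper's actual Hadoop implementation: the toy block layout, the sampling fraction, and the scaling and error estimate are illustrative assumptions. Each inner list stands in for one HDFS block, a uniform subset of blocks is aggregated (as the map tasks of a partial job would), and the partial sum is scaled by the inverse of the sampling fraction to approximate the full result.

    import random
    import statistics

    def approximate_sum(blocks, sample_fraction, seed=0):
        """Estimate the total sum over all blocks by aggregating only a
        uniformly sampled subset of blocks and scaling the partial result."""
        rng = random.Random(seed)
        n_blocks = len(blocks)
        k = max(1, round(sample_fraction * n_blocks))
        sampled = rng.sample(range(n_blocks), k)  # uniform sample of block indices

        # Per-block partial aggregates (what each map task would emit).
        block_sums = [sum(blocks[i]) for i in sampled]

        # Scale the partial sum by the inverse sampling fraction so the
        # result estimates the sum over all blocks (unbiased for a uniform sample).
        estimate = (n_blocks / k) * sum(block_sums)

        # Rough standard error from the spread of the sampled block sums.
        if k > 1:
            se = n_blocks * statistics.stdev(block_sums) / (k ** 0.5)
        else:
            se = float("inf")
        return estimate, se

    if __name__ == "__main__":
        # Toy "HDFS blocks": 100 blocks of 1000 records each.
        data_blocks = [[random.random() for _ in range(1000)] for _ in range(100)]
        exact = sum(sum(b) for b in data_blocks)
        approx, se = approximate_sum(data_blocks, sample_fraction=0.1)
        print(f"exact={exact:.1f}  approx={approx:.1f}  +/-{1.96 * se:.1f} (95% CI)")

Because blocks are chosen uniformly, the scaled partial sum is an unbiased estimate of the exact aggregate, and its standard error shrinks as more blocks are processed, which is why accuracy improves the longer the partial job is allowed to run.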


Keywords: privacy preservation, security, e-healthcare systems, data mining, image feature extraction


Edition: Volume 4 Issue 12, December 2015


Pages: 432 - 435


DOI: https://www.doi.org/10.21275/NOV151945





Vidya Pol, "Big Data Hadoop: Aggregation Techniques", International Journal of Science and Research (IJSR), Volume 4 Issue 12, December 2015, pp. 432-435, https://www.ijsr.net/getabstract.php?paperid=NOV151945, DOI: https://www.doi.org/10.21275/NOV151945
