Hadoop is a framework for developing applications based on distributed-computing concepts.
At the core of Hadoop are two major components:
The first is the file system, known as the Hadoop Distributed File System (HDFS).
The second is a programming framework that sits on top of HDFS, known as the MapReduce framework.

In many ways, Hadoop provides a layer between the user and the cluster of machines beneath it. Like an operating system, it manages the many nodes under it, so as a user you need not worry about the individual storage and computing resources: they are handled by Hadoop and abstracted away from the user's point of view. HDFS is inspired by the Google File System, and Hadoop MapReduce is inspired by Google's MapReduce paper.

Both MapReduce and HDFS run on a cluster of machines, and both have a hierarchical, master-slave architecture. Broadly, what happens is that a large file is broken into small portions called blocks, which are then replicated and distributed over the cluster. This distribution is managed by Hadoop itself; the user need not worry about how the file is divided and distributed.

Like an operating system, Hadoop manages the file system internally. One master node, known as the NameNode, oversees how data is distributed among the DataNodes and keeps track of where each block is stored. In short, the NameNode manages the file system metadata, while the DataNodes actually store the data blocks. Both the NameNode and the DataNodes are Hadoop daemons, that is, Java programs that run on specific machines; they are not hardware components. However, the machine that runs the NameNode daemon needs to be more powerful than the machines that run DataNodes, hence there is a difference between the specifications and configurations of the machines. For this reason the physical machines themselves are usually referred to as the name node and the data nodes, even though strictly speaking those are just Java programs.
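To make the block-and-replication idea concrete, here is a minimal, self-contained Java sketch of what a name node conceptually does: split a file into fixed-size blocks and record which data nodes hold a copy of each block. All names here (`SimpleNameNode`, `blockSize`, `replication`, `dn1`…) are illustrative assumptions, not the real Hadoop API; real HDFS defaults to a 128 MB block size and a replication factor of 3.

```java
import java.util.*;

// Toy model of how HDFS splits a file into blocks and replicates them.
// This is a single-process illustration; real HDFS runs the NameNode and
// DataNodes as separate daemons on separate machines.
public class SimpleNameNode {
    private final int blockSize;      // bytes per block (tiny here, for the demo)
    private final int replication;    // number of copies of each block
    private final List<String> dataNodes;
    // Metadata kept by the "name node": block id -> data nodes holding a copy.
    private final Map<String, List<String>> blockMap = new LinkedHashMap<>();

    public SimpleNameNode(int blockSize, int replication, List<String> dataNodes) {
        this.blockSize = blockSize;
        this.replication = replication;
        this.dataNodes = dataNodes;
    }

    // Split the file's bytes into blocks and assign each block to
    // `replication` data nodes, round-robin.
    public void store(String fileName, byte[] data) {
        int numBlocks = (data.length + blockSize - 1) / blockSize;
        for (int b = 0; b < numBlocks; b++) {
            List<String> holders = new ArrayList<>();
            for (int r = 0; r < replication; r++) {
                holders.add(dataNodes.get((b + r) % dataNodes.size()));
            }
            blockMap.put(fileName + "#blk_" + b, holders);
        }
    }

    public Map<String, List<String>> getBlockMap() {
        return blockMap;
    }

    public static void main(String[] args) {
        SimpleNameNode nn = new SimpleNameNode(4, 2, List.of("dn1", "dn2", "dn3"));
        nn.store("file.txt", "hello hadoop".getBytes()); // 12 bytes -> 3 blocks
        nn.getBlockMap().forEach((blk, nodes) ->
                System.out.println(blk + " -> " + nodes));
        // prints each block with its two replica locations, e.g.
        // file.txt#blk_0 -> [dn1, dn2]
    }
}
```

The key design point this mirrors is that the name node stores only the mapping, never the block contents; the data nodes hold the bytes.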
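The MapReduce programming model mentioned above can likewise be sketched in plain Java. The following is a single-process simulation of the canonical word-count job, showing the three phases (map, shuffle, reduce); it deliberately does not use the real Hadoop API, which requires a cluster and the hadoop-mapreduce libraries, so class and method names here are illustrative only.

```java
import java.util.*;
import java.util.stream.*;

// Simplified, single-process simulation of the MapReduce word-count flow.
// Real Hadoop distributes map and reduce tasks across the cluster; here each
// phase runs locally to illustrate the programming model only.
public class WordCountSketch {

    // Map phase: each input line (standing in for a block) emits (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Shuffle phase: group all emitted values by key (the word).
    static Map<String, List<Integer>> shuffle(List<Map.Entry<String, Integer>> pairs) {
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }
        return grouped;
    }

    // Reduce phase: sum the counts for each word.
    static Map<String, Integer> reduce(Map<String, List<Integer>> grouped) {
        Map<String, Integer> counts = new TreeMap<>();
        grouped.forEach((word, ones) ->
                counts.put(word, ones.stream().mapToInt(Integer::intValue).sum()));
        return counts;
    }

    public static void main(String[] args) {
        List<String> blocks = List.of("hadoop stores blocks", "hadoop replicates blocks");
        List<Map.Entry<String, Integer>> pairs = blocks.stream()
                .flatMap(b -> map(b).stream())
                .collect(Collectors.toList());
        System.out.println(reduce(shuffle(pairs)));
        // prints {blocks=2, hadoop=2, replicates=1, stores=1}
    }
}
```

In real Hadoop, the mapper runs on the nodes that hold the input blocks (moving computation to the data), and the shuffle happens over the network between map and reduce tasks.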
Essay: Hadoop

Essay details:
- Subject area(s): Information technology essays
- Published: 27 October 2015
- Last Modified: 29 September 2024
- Words: 370 (approx)

Source: Essay Sauce, <https://www.essaysauce.com/information-technology-essays/essay-hadoop/>