What is HDFS?
Hadoop ships with a distributed file system called HDFS. In HDFS, data is spread across several machines and replicated to ensure durability against failure and high availability to parallel applications.
It is cost-effective because it runs on commodity hardware. It is built around the concepts of blocks, DataNodes, and the NameNode.

Where to use HDFS?
Very Large Files: files should be hundreds of megabytes, gigabytes, or larger.
Streaming Data Access: the time to read the whole dataset matters more than the latency of reading the first record. HDFS is built around a write-once, read-many-times pattern.
Where not to use HDFS?
Low-Latency Data Access: applications that need very fast access to the first record should not use HDFS, because it is optimized for delivering the whole dataset rather than for the time to fetch the first record.
Lots of Small Files: the NameNode holds the metadata of all files in memory, so a large number of small files consumes a disproportionate amount of NameNode memory, which is not feasible.
Multiple Writes: HDFS should not be used when we need to write to a file multiple times.
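The small-files limitation can be made concrete with a back-of-the-envelope calculation. A commonly cited ballpark (not an exact figure) is that each file, directory, and block costs roughly 150 bytes of NameNode heap; the sketch below uses that assumption:

```python
# Rough sketch: why many small files overwhelm the NameNode.
# Assumption: ~150 bytes of NameNode heap per namespace object
# (file inode or block) -- a ballpark figure, not an exact one.

BYTES_PER_OBJECT = 150

def namenode_memory(num_files, blocks_per_file=1):
    """Approximate NameNode heap (in bytes) needed for the given files."""
    objects = num_files * (1 + blocks_per_file)  # one inode + its blocks
    return objects * BYTES_PER_OBJECT

# 100 million small files (one block each) need roughly 28 GB of heap,
# regardless of how little data each file actually holds:
print(namenode_memory(100_000_000) / 2**30)
```

The point is that NameNode memory scales with the number of namespace objects, not with the amount of data, so millions of tiny files are far more expensive than the same bytes packed into a few large files.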

HDFS Concepts
- Blocks: a block is the minimum amount of data that HDFS can read or write. HDFS blocks are 128 MB by default, and this is configurable. Files in HDFS are broken into block-sized chunks, which are stored as independent units. Unlike an ordinary file system, if a file in HDFS is smaller than the block size, it does not occupy a full block's size; for example, a 5 MB file stored in HDFS with a 128 MB block size takes only 5 MB of space. The HDFS block size is large in order to minimize the cost of seeks.
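The block-splitting behavior above can be sketched as follows; this is a toy model of the splitting arithmetic, not the real Hadoop code:

```python
# Sketch of how a file is split into fixed-size blocks.
# Assumption: the default 128 MB block size described above.

BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB in bytes

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the sizes (in bytes) of the blocks a file occupies."""
    full, rest = divmod(file_size, block_size)
    return [block_size] * full + ([rest] if rest else [])

# A 5 MB file occupies a single 5 MB block, not a full 128 MB:
print(split_into_blocks(5 * 1024 * 1024))
# A 300 MB file occupies two full 128 MB blocks plus one 44 MB block:
print(len(split_into_blocks(300 * 1024 * 1024)))
```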
- NameNode: HDFS works in a master/worker pattern, where the NameNode acts as the master. The NameNode is the controller and manager of HDFS: it knows the status and the metadata of all files in HDFS, the metadata being file permissions, names, and the location of each block. The metadata is small, so it is stored in the NameNode's memory, allowing fast access. Also, since the HDFS cluster is accessed by many clients concurrently, this information is handled by a single machine. File system operations such as opening, closing, and renaming are executed by it.
- DataNode: DataNodes store and retrieve blocks when they are told to, by a client or by the NameNode. They report back to the NameNode periodically with the list of blocks they are storing. The DataNode, running on commodity hardware, also performs block creation, deletion, and replication as directed by the NameNode.
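The master/worker relationship between the NameNode and DataNodes can be modeled in a few lines: DataNodes report their block lists, and the NameNode keeps the block-to-location mapping in memory. The class and method names below are illustrative, not the real Hadoop API:

```python
# Toy model of the NameNode/DataNode relationship described above.
# Assumption: names like receive_block_report are illustrative only.

class NameNode:
    def __init__(self):
        self.block_locations = {}  # block id -> set of DataNode ids

    def receive_block_report(self, datanode_id, block_ids):
        """Record which blocks a DataNode says it is storing."""
        for block_id in block_ids:
            self.block_locations.setdefault(block_id, set()).add(datanode_id)

    def locate(self, block_id):
        """Return the set of DataNodes holding a block."""
        return self.block_locations.get(block_id, set())

nn = NameNode()
nn.receive_block_report("dn1", ["blk_1", "blk_2"])
nn.receive_block_report("dn2", ["blk_2"])  # blk_2 is replicated on two nodes
print(sorted(nn.locate("blk_2")))
```

This also shows why the metadata must fit in one machine's memory: every client lookup goes through this single in-memory mapping.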