Hi Durga,
Thanks for the video. I had watched a similar video where a user asked how we can verify the list of blocks for a file copied to the cluster, but the instructor said it is not possible to check the list of blocks after copying the file to the HDFS file system.
After watching this video, I learned that we can use the hdfs fsck command to verify this.
Thanks
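A minimal sketch of the block verification discussed above, using hdfs fsck (the path /user/demo/largest.txt is a placeholder; this requires a configured HDFS client and running cluster):

```shell
# List the blocks that make up a file in HDFS, plus where each replica lives.
#   -files     : report on the file being checked
#   -blocks    : print the block IDs the file is split into
#   -locations : print which datanodes hold each block replica
hdfs fsck /user/demo/largest.txt -files -blocks -locations
```

The output shows one line per block with its replication count and the datanode addresses holding it, which is exactly how you can confirm how a copied file was distributed over the cluster.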
Thank you for acknowledging.
For any technical discussions or doubts, please use our forum - discuss.itversity.com
For practicing on a state-of-the-art big data cluster, please sign up on - labs.itversity.com
The lab is under free preview until 12/31/2016, and after that the subscription
charges are $14.99 per 31 days, $34.99 per 93 days, and $54.99 per 185 days.
Could you please number the videos to avoid jumping from one to another?
Appreciate your efforts.
Hi Durga, can we change the block size of an already live cluster (without stopping it), for new files or for already existing files? That is, I want to change the block size of the cluster so that an already saved file can be spread across more nodes.
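One point worth noting here: block size in HDFS is a per-file, write-time property, so changing dfs.blocksize does not rewrite files that are already stored. A sketch of overriding the block size for a newly uploaded file, with placeholder paths (requires a configured HDFS client):

```shell
# Upload a file with a 64 MB block size (67108864 bytes) regardless of the
# cluster-wide default in hdfs-site.xml. The -D generic option overrides
# the configuration for this one command only.
hdfs dfs -D dfs.blocksize=67108864 -put localfile.dat /user/demo/smallblocks.dat
```

To apply a new block size to an existing file, the file has to be rewritten, e.g. copied to a new path with the override and the old copy removed.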
In the video, you said you had explained hdfs-site.xml, the config files, and validations, but I didn't see that in the earlier 3 videos.
Sir, I couldn't find the video on configuration files and validation. Please let me know where it is.
From 5:21 to 5:24, what you said was not clear. Please reply.
Hi Sir,
For searching a file in HDFS, you used find . -name "largest.txt" | xargs ls -ltr. What does | xargs stand for here?
That command is not for searching a file in HDFS; it searches files in the local file system. xargs is a Linux command that takes the output of the previous command and applies another command to it.
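A small runnable illustration of the pipeline in question, using a throwaway directory and file (the /tmp path and file name are just for the demo):

```shell
# Create a sample file to search for.
mkdir -p /tmp/xargs_demo
echo "sample data" > /tmp/xargs_demo/largest.txt
cd /tmp/xargs_demo

# find prints the paths of matching files, one per line; xargs collects
# those paths and passes them as arguments to ls -ltr, producing a
# detailed listing sorted by modification time.
find . -name "largest.txt" | xargs ls -ltr
```

Note that find takes -name with a single dash, and that this whole pipeline operates on the local file system, not on HDFS.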
Hello Sir,
I'm new to the Hadoop ecosystem, so could you please answer the questions below?
When would we change the block size, and how does it impact existing files?
When would we increase the replication factor, and how does it impact existing files?
Nothing will happen to existing files when you change the replication factor or block size in hdfs-site.xml; the change only takes effect going forward. Both can also be overridden at run time.
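A sketch of the run-time overrides mentioned above, with placeholder paths (requires a configured HDFS client). Unlike block size, the replication factor of an already-stored file can be changed in place:

```shell
# Override replication for one new file at write time (2 replicas
# instead of the cluster default).
hdfs dfs -D dfs.replication=2 -put localfile.dat /user/demo/data.dat

# Change replication of an already-stored file to 3; -w waits until
# the namenode has finished scheduling the extra replicas.
hdfs dfs -setrep -w 3 /user/demo/data.dat
```

This is why the reply distinguishes the two settings: setrep rewrites only replica bookkeeping, while a new block size would require rewriting the file's data.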
Can you share these PPTs?
Hi Durga, nice information.
I have the doubts below.
1. When we store a large file in HDFS, is the file stored as blocks on different nodes, or as blocks on the same node?
2. Can you explain a bit more about HDFS being a logical file system? I didn't understand your point.
Blocks will be stored on different nodes in a multi-node cluster. Watch the rest of the HDFS videos, and if you still have questions, I can respond.
Boss, you are not clear... there are some interruptions.