Thank you for a wonderful video. For people new to Hadoop, this is exactly how video lessons need to be. Thanks a lot!
Great job, dude... you have done a lot of work for humanity... thank you, Pritam... learned a lot.
I followed exactly this process. Everything works until 28:44, where I got
Live Nodes: 0 (it should be 3, as in the video).
Any suggestions to help me out?
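A sketch of the usual first checks for "Live Nodes: 0", assuming Hadoop 2.x installed under /usr/local/hadoop (adjust the paths to your setup):

    # On each slave, confirm a DataNode process is actually running
    jps

    # If DataNode is missing, its log usually names the cause
    tail -n 50 /usr/local/hadoop/logs/hadoop-*-datanode-*.log

    # A common cause after re-running "hadoop namenode -format" is an
    # "Incompatible clusterIDs" error; clearing the DataNode data
    # directory (path is an assumption, check dfs.datanode.data.dir)
    # and restarting fixes it
    rm -rf /usr/local/hadoop/hadoop_data/datanode/*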
Thanks. Can you please post a link to get the Hadoop image?
Great tutorial!
Before I even try to do all this massive configuration (on physical machines), may I ask:
with this Hadoop thing, can I do anything that requires more processing power (from VM hosting to video rendering)?
I mean, with this setup, could I even host something like... a game server?
Thank you so much! But I have a problem: "start-all.sh" from hadoopmaster doesn't start the hadoopslaves. I have to run "start-all.sh" on all the slaves as well; then my 5-node cluster is up and running. How can I start the whole cluster from hadoopmaster? Also, SecondaryNameNode is running on all the slaves. Is that a problem? And if so, how can I remove it from the slaves?
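A hedged sketch of what usually governs this, assuming Hadoop 2.x under /usr/local/hadoop: start-dfs.sh launches the remote daemons over SSH on every host listed in the slaves file, so that file only needs to be filled in on the master; running start-all.sh locally on each slave is also what starts the stray SecondaryNameNodes.

    # On hadoopmaster: list every worker hostname in the slaves file
    #   /usr/local/hadoop/etc/hadoop/slaves
    # hadoopslave1
    # hadoopslave2
    # hadoopslave3
    # hadoopslave4

    # Passwordless SSH from master to each slave must work, since
    # start-dfs.sh uses it to launch the remote daemons
    ssh hadoopslave1 jps

    # On each slave: stop the stray SecondaryNameNode once, then only
    # ever start the cluster from the master
    /usr/local/hadoop/sbin/hadoop-daemon.sh stop secondarynamenode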
Thank you. It's really helpful.
How did you configure the network?
Which Hadoop virtual machine are you cloning? Can you send a link to download it?
Will this work on a Raspberry Pi instead of a virtual machine?
Thank you for the tutorial. In my multi-node Hadoop installation everything is working fine except when I run start-yarn.sh. My ResourceManager starts correctly on the master node, but on the slave nodes I am not able to see any NodeManager. When I checked my log files I saw the error "NodeManager doesn't satisfy minimum allocations". Can you please help me figure out this issue?
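That error usually means a NodeManager is advertising fewer resources than the scheduler's minimum allocation. A sketch of the yarn-site.xml properties involved (the values are illustrative assumptions, not recommendations):

    <!-- yarn-site.xml: the NodeManager must offer at least the
         scheduler's minimum allocation -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>2048</value>
    </property>
    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>512</value>
    </property>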
Hey, thank you for your useful tutorials! I have been able to set up a single-node cluster following your tutorials and was trying to create a multi-node cluster using my two laptops. And here is my question: you don't say anything about the way the computers are connected to each other. Is it OK if I connect them via SSH? Or should it be something like NFS or Samba?
The next question is: in the case of two computers, how many slaves do I have?
Thank you in advance!
Chechen Batman In VMware all 4 nodes are interconnected, so you don't have to worry about it. Follow this tutorial for the network connection steps: th-cam.com/play/PLPNBK2jcH438IhiCENBPqbzs3Az6okAU9.html
chaal pritam The problem is that I am not using VMware...
The link I gave above uses VirtualBox :) It has some steps on configuring the network, so you can use it.
chaal pritam, please help me.
I ran the pi example program that ships with Hadoop, but it does not show the application running on localhost:8088.
I only see the nodes that I have connected; no applications appear.
Please help. I want to see the application running on localhost:8088, as it is shown at: docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.15/bk_using-apache-hadoop/content/running_mapreduce_examples_on_yarn.html
Luis Coba Sorry :) I don't have time to try it.
chaal pritam If I ever try it :) I'll post the video.
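For reference, a minimal way to submit the bundled pi example so it appears in the ResourceManager UI on port 8088; the jar version below is a placeholder to match to your install, and the job only shows up there if mapreduce.framework.name is set to yarn in mapred-site.xml:

    # Submit the example job through YARN, then watch http://localhost:8088
    hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 10 100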
Hi Chaal, the slaves are working for me... I'm doing my thesis based on Hadoop. Do you have, or do you know of, more examples with MapReduce? I would appreciate it. Thanks!
ariana ruiz Yes, but I have never done any videos on Hadoop other than this one.
I need to know how to do this on a Raspberry Pi. Help me, please!
I have a C program, written for a 16-bit system, to generate big prime numbers, which unfortunately it cannot. My question is: how can I use a cluster, and the power of more than one PC, to generate even bigger prime numbers than a single computer can? Thank you.
Thanks! It worked like a charm.
How can I check all the resources in a graphical way? I mean, across all slave and master nodes?
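The stock web UIs cover this, assuming the default Hadoop 2.x ports on the master:

    # NameNode UI: live/dead DataNodes and HDFS capacity per node
    #   http://hadoopmaster:50070
    # ResourceManager UI: every NodeManager with its memory and vcores
    #   http://hadoopmaster:8088/cluster/nodes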
How do I locate the Hadoop Java source files if I want to make changes to the code?
Bro, how can we distribute memory in mapred-site and yarn-site to limit vcores and MapReduce memory? Can you help me?
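A sketch of the properties usually tuned for this, split across yarn-site.xml (what each node offers) and mapred-site.xml (what each task requests); the values are illustrative assumptions:

    <!-- yarn-site.xml -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>4096</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>2</value>
    </property>

    <!-- mapred-site.xml -->
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>1024</value>
    </property>
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>2048</value>
    </property>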
That was very helpful! Thanks a ton!
Your guide on the clusters is good :)
It is showing only one live node out of three datanodes.
Please help resolve this problem.
This really helped a lot!! Thank you!
One of the best Hadoop setup guides.
Good one
Hi, do we need to use the same username on all the machines (master & slaves)?
PAWAN CHINTAKUNTA nope
When I try to create a slave machine with a different username, I get prompted to enter the password for masteruser@slavehost. Whatever combination I use, it is not working out :( However, when I keep the same names it works fine.
I never tried a different user for the slave machine :) but even so, it should work.
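A sketch of what usually makes mixed usernames work: the Hadoop scripts SSH to each slave as the current user unless told otherwise, so map the remote user in SSH config. The usernames below are hypothetical placeholders.

    # ~/.ssh/config on the master
    Host hadoopslave1
        User slaveuser

    # Install the master's public key for that remote account
    ssh-copy-id -i ~/.ssh/id_rsa.pub slaveuser@hadoopslave1

    # This should now log in without a password prompt
    ssh hadoopslave1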
Very nice tutorial! Thank you very much...
But I still have a question:
at the very beginning, when you first set up the hostnames, why do you assign 192.168.23.133, .134, .135 to your slaves when you don't yet know the slaves' IP addresses? I am using VirtualBox on Ubuntu 14.10, and since every slave is a copy of the master, every slave has the same IP. Do you know how to fix this?
Simon Cardenas VMware auto-assigns the network config, so once we know the master's IP we can predict the IP of the next VM, since it assigns IPs in numerical order, like 133, 134, 135 :) In VirtualBox we have to configure the network ourselves; check these videos for the VirtualBox network config: th-cam.com/video/HTj6Tf5676w/w-d-xo.html th-cam.com/video/DteSiloXesw/w-d-xo.html
OK, I may sound a little novice on this, but let me explain my case:
I created the master with IP address 10.0.2.15. When I cloned the master and renamed the clones to slave1, 2, 3, every slave had the same IP, 10.0.2.15, so I changed it in /etc/network/interfaces to static eth0, with 10.0.2.16, .17, .18 respectively. I made the corresponding changes to /etc/hosts on every node, but they still can't ping each other. Do you know if that is because I'm using VirtualBox, or am I making some other mistake? (Thank you very much for your fast answer.)
Have you configured the network adapter in VirtualBox?
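One likely culprit, stated as an assumption: 10.0.2.15 is the address VirtualBox's default NAT adapter hands every VM, and NAT isolates each VM on its own private network, so clones can never ping each other there. Switching each VM to a bridged (or host-only/internal) adapter and then setting the static addresses usually fixes it. A sketch of the Debian/Ubuntu static stanza:

    # /etc/network/interfaces on slave1; repeat with .17/.18 on the others
    # (gateway and subnet depend on the adapter type you pick)
    auto eth0
    iface eth0 inet static
        address 10.0.2.16
        netmask 255.255.255.0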
There is no sound. Is it a rare codec?
Daneel Yaitskov Yes :) I just record what I do and upload it :) I don't make videos specifically for YouTube.
Girish Jaiswal: Can you please let me know which player it is compatible with?
Thanks for the installation guide. I followed the exact same installation steps given in this video. Everything works fine, but when I try to execute "hadoop namenode -format" it shows me the error "hadoop: command not found". Can you please solve this error? Is there anything that I missed? Please reply, it's very urgent.
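"hadoop: command not found" almost always means the Hadoop bin directories are not on your PATH. A sketch, assuming the install lives at /usr/local/hadoop:

    # Append to ~/.bashrc on every node, then run: source ~/.bashrc
    export HADOOP_HOME=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin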
HELP! PLEASE!
connect to host hadoopslave1 port 22: No route to host
+Kuchta Did you find out what the problem was? Can you share it here, please?
Had the same problem! Check the IP config of the nodes using ifconfig, and then make the necessary changes in the /etc/hosts file.
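A sketch of that check, with placeholder addresses:

    # On the slave, find the address it actually has
    ifconfig

    # On the master, make /etc/hosts agree with what ifconfig reported
    #   192.168.23.134  hadoopslave1

    # Confirm reachability before retrying SSH
    ping -c 3 hadoopslave1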
Great tutorial! :)
Hi Pritam,
Many thanks for the wonderful video. It really helped a lot.
I just want to know why I am getting the following error when I type
ssh slave1
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
aqueel rahman Delete the keys and create new ones.
chaal pritam you mean ssh keys, right?
aqueel rahman yes
chaal pritam Nope... still not working :( It asks for the password three times and then throws the same error.
aqueel rahman If you entered a passphrase while creating the key with ssh-keygen, then it prompts for a password.
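A sketch of regenerating a passphrase-less key and installing it on a slave; the username is a placeholder for whatever account runs Hadoop:

    # -P "" sets an empty passphrase, so ssh never prompts for one
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

    # Copy the public key into the slave's authorized_keys
    ssh-copy-id -i ~/.ssh/id_rsa.pub hadoopuser@slave1

    # Should now log in with no prompt
    ssh slave1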
Thank you. I am learning Hadoop. :)
Great video, but I have 3 slaves, and when I browse the master I find Live Nodes = 0?????
Hi chaal pritam,
when I execute the command "ssh-copy-id -i ~/.ssh/id_dsa.pub chapritam@hadoopmaster" (in my case: hadoop@master), I get the error: failed to open ID file 'home/hadoop/.ssh/id_dsa': no such file.
How do I fix it? :((
HUYNH CONG VIET NGU Prior to copying the key with ssh-copy-id, you should first create the SSH keys with ssh-keygen.
I got the same problem... please help me out... thank you.
I saved my key like this: "cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys"
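Note the file-name mismatch, which may be the whole problem: the error complains about a missing id_dsa, while the cat line above writes an id_rsa key. A sketch that keeps the two consistent:

    # Generate the key pair first (RSA here, so the file is id_rsa)
    ssh-keygen -t rsa -P ""

    # Point ssh-copy-id at the key file that actually exists
    ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@master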
chaal pritam, please help me.
When I run the command "sudo chown -R chaal pritam:chaal pritam /usr/local/hadoop",
it shows me this failure message: chown: invalid user: ‘chaalpritam:chaalpritam’.
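chown fails there because Linux usernames cannot contain a space, so "chaal pritam" is split into two arguments, and no account named chaalpritam exists on the machine either. A sketch of the usual check and fix; substitute whatever name `id` actually reports:

    # See what your account is really called
    id
    # e.g. uid=1000(chaal) gid=1000(chaal) ...

    # Use that exact name, with no spaces, as user:group
    sudo chown -R chaal:chaal /usr/local/hadoop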
My slave node is not able to connect to the master node on port 90000.
The message is: WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: master/192.168.1.100:9000
Can you please help me here?
Your configuration is not the same as hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/ClusterSetup.html.
I don't know why mapred-site.xml would need the framework changed from yarn to jobtracker.
As far as I know, versions below 2.x use MapReduce 1, which contains the JobTracker and TaskTracker. Thanks.
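For Hadoop 2.x the YARN setting is the expected one; the JobTracker and TaskTracker only exist in MapReduce 1. The usual mapred-site.xml entry:

    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>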
Thanks.
Great!
Sorry, it's 9000.
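A sketch of the usual checks for that DataNode connection error, assuming fs.defaultFS is hdfs://master:9000:

    # On the master, confirm the NameNode listens on 9000 and is bound
    # to a routable address, not 127.0.0.1
    netstat -tlnp | grep 9000

    # Common Ubuntu pitfall: /etc/hosts maps the master hostname to
    # 127.0.1.1, so the NameNode binds where slaves cannot reach it;
    # make "master" resolve to 192.168.1.100 on every node instead
    cat /etc/hosts

    # core-site.xml must agree on every node:
    # <property>
    #   <name>fs.defaultFS</name>
    #   <value>hdfs://master:9000</value>
    # </property>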