Clarified many things as my first Hadoop learning video. Great work.
Thanks, Deepak! Good to know that it helped you in your learning. Do follow our channel to stay posted on upcoming Hadoop tutorials.
What is the difference between a Hadoop cluster and an HBase cluster?
Class starts 09:50
Do you have a tutorial to setup Apache Spark with Hadoop/Yarn in a 3-node cluster environment?
Hey Bhushan, thanks for checking out our tutorial! We're glad you liked it. Currently, we do not have such a tutorial, but we have communicated your request to our team and we may come up with one in the future. Do subscribe to our channel to stay posted on upcoming tutorials. Cheers!
This is Malsoru. I am new, but I took Edureka training.
thanks for the guide
thank you for the guide :)
+vitalis emanuel, thanks for checking out our tutorial! We're glad you found it useful. Here's another video that we thought you might like: th-cam.com/video/m9v9lky3zcE/w-d-xo.html.
Do subscribe to our channel to stay posted on upcoming tutorials. Cheers!
Hi Sir, first of all, the session was great. But you haven't mentioned the name of the book for Hadoop Administrator. Kindly provide the details of some of the best books we can refer to, especially for Hadoop Admin.
Waiting for your response.
Hey Manish, thanks for checking out the tutorial! We're glad you found it useful. Given below are the titles of a few books you can refer to:
- Hadoop Operations: A Guide for Developers and Administrators
- Hadoop in Action (Manning)
- Hadoop Real-World Solutions Cookbook, Second Edition
Hope this helps.
Thanks for the information, sir. Please upload more videos like this. Keep up the great work, +edureka! Cheers :)
Thanks, Manish! :) Do follow our channel to stay posted on upcoming tutorials. Cheers!
Very good tutorial, but I have one question:
I have set up a multi-node cluster and the nodes are working fine, but how can I connect a client to these cluster nodes, so that all logs can be redirected to the cluster?
+Lulzim Veliu, thanks for checking out our tutorial! We're glad you liked it.
You will have to create users in the cluster so that you can grant access to these nodes with the appropriate permissions.
Hope this helps. Cheers!
Thanks for the awesome video. Just one question: how are you able to SSH from the host to the CentOS VM in host-only mode? It is possible only through bridged mode, right?
Hey Yogesh, thanks for checking out our tutorial! We're glad you liked it.
Host-only mode is used to create a network containing the host and a set of virtual machines, without the need for the host's physical network interface. Instead, a virtual network interface (similar to a loopback interface) is created on the host, providing connectivity among the virtual machines and the host. So through host-only mode you can SSH between the host and the virtual machines.
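In case a concrete example helps, here is a minimal sketch assuming VirtualBox's default host-only network (192.168.56.0/24); the IP and the edureka username are assumptions, so substitute your own:
ip addr show                  # on the VM: find the host-only adapter's IP, e.g. 192.168.56.101
ssh edureka@192.168.56.101    # from the host: connect over that host-only IP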
Hope this helps. Cheers!
Sir, thank you for your video. Can you help me, please?
After running hdfs namenode -format I receive this message:
INFO namenode.FSNamesystem: Stopping services started for active state
and
INFO namenode.FSNamesystem: Stopping services started for standby state
Any good books on Hadoop administration?
Hey Smita, thanks for checking out our tutorial.
Books get out of date, our course doesn't. :) You might want to check out our Hadoop Administration course here: www.edureka.co/hadoop-admin. This live, online course is led by instructors who are industry practitioners and comes with 24X7 support and lifetime access to learning material. Please feel free to get in touch with us if you have any questions. Hope this helps. Cheers!
localhost: ERROR: Cannot set priority of resourcemanager process 9799. I get the same error for the datanode and nodemanager.
Try killing all the daemons and restarting them. If you face the same issue, kindly share the logs with us.
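In case it helps, one way to do that, assuming a standard install with HADOOP_HOME set (the <pid> placeholder stands for whatever process ID jps reports for a stuck daemon):
$HADOOP_HOME/sbin/stop-all.sh     # stop all HDFS and YARN daemons
jps                               # any daemon still listed is stuck
kill -9 <pid>                     # force-kill it, replacing <pid> with the reported ID
$HADOOP_HOME/sbin/start-all.sh    # bring everything back up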
Cheers :)
What are the benefits of using CentOS over Ubuntu as a choice of Linux flavour?
Hey Kazim, thanks for checking out the tutorial! Ubuntu is based on the venerable Debian distribution, while CentOS is a free clone of Red Hat Enterprise Linux. A major factor that might influence web hosting clients to choose CentOS is web hosting control panel compatibility. Within the web hosting industry, CentOS dominates, and most web hosting control panels, including cPanel, focus on RHEL derivatives like CentOS. If you plan to offer web hosting services using a control panel, then CentOS is probably your best bet. Hope this helps.
A question related to this video: after the .tar file is downloaded from the Apache website, how would the same file be made available on all the nodes, like the NameNode and all the DataNodes?
Hey Swakshar, thanks for checking out our tutorial!
If you are deploying a multi-node cluster, you either have to download and install it on each of the machines, or you can clone the VM after installing a single-node Hadoop cluster and then configure the systems accordingly as NN and DN. SCP is also an alternative for transferring the Hadoop files. You can find more info about multi-node setup in this blog: www.edureka.co/blog/setting-up-a-multi-node-cluster-in-hadoop-2.X
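For instance, a minimal SCP sketch (the hadoop-2.7.3 directory, the edureka user and the datanode1 hostname are all assumptions; use your own):
scp -r /home/edureka/hadoop-2.7.3 edureka@datanode1:/home/edureka/    # copy the extracted Hadoop directory to a DataNode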
Hope this helps. Cheers!
Which property was used to start the UI on port 50070?
Hey Sagar, thanks for checking out the tutorial. In core-site.xml we have to mention the NameNode hostname to identify the NameNode daemon:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://<namenode-hostname>:<port></value>
</property>
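For completeness, port 50070 is the Hadoop 2.x default for the NameNode web UI; if you ever need to change it, the relevant property lives in hdfs-site.xml (the value shown is the default):
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:50070</value>
</property>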
Feel free to revert if you face further doubts.
nice video
Hey Naresh, thanks for checking out our tutorial! We're glad you found it useful.
Here's another video that we thought you might like: th-cam.com/video/-XkEX1onpEI/w-d-xo.html.
Do subscribe to our channel to stay posted on upcoming tutorials. Cheers!
I know Hadoop and how to set it up on a single node,
and I want to know more about Hadoop.
I want to learn multi-node Hadoop.
Hey Shubham, thanks for checking out our tutorial! Considering you're a beginner, a good way to do this is to install, configure and test a “local” Hadoop setup on each of the two Ubuntu boxes. In the second step, “merge” these two single-node clusters into one multi-node cluster, in which one Ubuntu box becomes the designated master (but also acts as a slave with regard to data storage and processing) and the other box becomes only a slave. It’s much easier to track down any problems you might encounter, thanks to the reduced complexity of doing a single-node cluster setup first on each machine.
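As a rough sketch, assuming Hadoop 2.x and the hostnames master and slave1 (both are assumptions; substitute your own), the merge boils down to pointing fs.defaultFS on both boxes at the master and listing every worker in the slaves file on the master:
echo "master" >  $HADOOP_HOME/etc/hadoop/slaves    # the master also runs a DataNode
echo "slave1" >> $HADOOP_HOME/etc/hadoop/slaves    # the second box is a worker only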
You can refer to this blog for more info and practical steps on setting up a multinode cluster: www.edureka.co/blog/setting-up-a-multi-node-cluster-in-hadoop-2.X
Hope this helps. Cheers!
33:00
When I execute the Hadoop command hdfs dfs -ls,
it says no such file or directory. Please help.
+Ankana, thanks for checking out our tutorial! There might be a situation where some of the daemons are not running, so please follow the steps written below.
Open terminal
STEP 1:
cd $HADOOP_HOME ---->Enter
cd sbin
./stop-all.sh
./start-all.sh
STEP 2:
After the above commands have executed, check the daemons running on your system and match them with the sample output given below.
Type
jps --->Enter
1735 NameNode
1851 ResourceManager
10305 Jps
1967 JobHistoryServer
1906 NodeManager
1785 DataNode
[edureka@localhost ~] $
STEP 3:
Run your command
hdfs dfs -ls
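If all the daemons are running and hdfs dfs -ls still reports no such file or directory, it is often because your HDFS home directory does not exist yet; a likely fix (using the edureka user shown in the prompt above; substitute your own username) is:
hdfs dfs -mkdir -p /user/edureka    # create the HDFS home directory that hdfs dfs -ls lists by default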
Hope this helps. Cheers!
I have a C program written for a 16-bit system, and I have to generate big prime numbers, which unfortunately I cannot. My question is: how can I use a cluster, harnessing the power of more than one PC, to generate bigger prime numbers than a single computer can? Thank you.
Please give me the customer care number; the number in the description is not connecting.
Hey Ravindra, you can use this number to contact our team: +91 9066020868, and if you are calling from the USA, then use this number: 1844-230-6361.
Hope this helps :)