33:45 to 37:45
Thank you, team. This helps a lot for people attending Big Data interviews.
Thank you for uploading this video; it is inspiring for people like me who are trying to get into Big Data. Any suggestions for people with higher experience, like 10-11 years? I have 11+ years of experience and am trying to get into Big Data.
Awesome video!! Very motivating.
Thank you GKCodelabs.
May I know what % hike we can ask for while switching?
Even I cleared the 1st round in my first interview (I have two years of experience), but I was rejected in the second round.
Thank you, sir. Can a fresher survive in Big Data without experience?
Good job.. thanks 👍👍
Hi, folks are actually looking for more scenario-based sessions; kindly prepare more of them.
Thanks, GK Codelabs and Sanatan. I am applying for jobs involving Spark, Databricks, and Scala with Azure Data Lake and Data Factory. I have hands-on experience but no project experience.
I have a few questions, if you could please help:
1. When asked about the use case in my project, I am not able to give a satisfactory answer.
2. In the case of an incremental load, if there are, say, 70 columns, how would you compare them and find the changes? When I answered that I would compare each column with the help of a where clause, the interviewer was not satisfied. Could you please explain how to handle such scenarios in a production environment?
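One common production pattern for this (not the only one) is to hash all 70 columns into a single checksum per row and compare checksums, instead of writing 70 equality predicates in a where clause. In Spark you would typically build the checksum with `sha2(concat_ws(...))` over the column list; the sketch below illustrates the same idea in plain Python, with the `row_hash`/`changed_keys` helper names and the sample rows being my own assumptions for illustration:

```python
import hashlib

def row_hash(row, columns):
    # Join all column values with a separator and hash them, so a change
    # in any one of the 70 columns changes the checksum.
    joined = "|".join(str(row[c]) for c in columns)
    return hashlib.sha256(joined.encode()).hexdigest()

def changed_keys(old_rows, new_rows, key, columns):
    # Index the previous load by business key, then compare checksums
    # to classify each incoming row without touching columns one by one.
    old_hashes = {r[key]: row_hash(r, columns) for r in old_rows}
    changes = []
    for r in new_rows:
        if r[key] not in old_hashes:                      # key never seen: insert
            changes.append((r[key], "insert"))
        elif old_hashes[r[key]] != row_hash(r, columns):  # some column differs: update
            changes.append((r[key], "update"))
    return changes

old = [{"id": 1, "a": 10, "b": "x"}, {"id": 2, "a": 20, "b": "y"}]
new = [{"id": 1, "a": 10, "b": "x"}, {"id": 2, "a": 25, "b": "y"},
       {"id": 3, "a": 30, "b": "z"}]
print(changed_keys(old, new, "id", ["a", "b"]))
# -> [(2, 'update'), (3, 'insert')]
```

The same comparison then drives the merge/upsert step of the incremental load: unchanged rows are skipped, and only inserts and updates are written.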
Hi Santanu,
We already have end-to-end Big Data pipeline courses available in PySpark and Scala; you can check them out at gkcodelabs.com
These should give you many use cases as well as a complete idea of Big Data project implementation.
@GKCodelabs Hi, useful videos. Actually, I'm looking for more advanced scenario-based use cases.
Try "intersect" between the new and old data on all 70 columns.
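The same set-based idea also works in the other direction: in Spark, `df_new.subtract(df_old)` (or `exceptAll` to preserve duplicates) returns the rows that are new or changed across all columns in one shot, with no per-column where clause. A minimal stdlib sketch of that set-difference idea, assuming rows are represented as hashable tuples (distinct-row semantics, like `subtract`):

```python
def subtract_rows(new_rows, old_rows):
    # Rows present in the new load but absent from the old one:
    # these are the inserted or updated records across ALL columns.
    return sorted(set(new_rows) - set(old_rows))

old = {(1, 10, "x"), (2, 20, "y")}
new = {(1, 10, "x"), (2, 25, "y"), (3, 30, "z")}
print(subtract_rows(new, old))
# -> [(2, 25, 'y'), (3, 30, 'z')]
```

Note that a plain set difference (like Spark's `subtract`) drops duplicate rows; if duplicate rows matter in your pipeline, `exceptAll` is the multiplicity-preserving variant.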
👍👍👍👍