Topics covered:
Spark Core - RDD (flexible)
High-level APIs - DataFrame and Spark SQL (easier to write queries)
Transformations and actions
Spark-submit process
Deployment modes
Types of transformations
Repartition and coalesce
Methods for schema enforcement - DDL string and StructType (see the sketch below)
Consecutive wins in SQL
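A minimal PySpark sketch of the schema-enforcement item above, showing the same schema expressed once as a DDL string and once as a StructType; the file path and column names are assumptions for illustration, not from the video.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType, DoubleType

spark = SparkSession.builder.appName("schema-enforcement-demo").getOrCreate()

# Option 1: schema as a DDL string.
ddl_schema = "order_id INT, customer STRING, amount DOUBLE"

# Option 2: the same schema as a StructType.
struct_schema = StructType([
    StructField("order_id", IntegerType(), True),
    StructField("customer", StringType(), True),
    StructField("amount", DoubleType(), True),
])

# Hypothetical input path; either schema can be enforced at read time
# instead of relying on schema inference.
df_ddl = spark.read.schema(ddl_schema).csv("/data/orders.csv", header=True)
df_struct = spark.read.schema(struct_schema).csv("/data/orders.csv", header=True)

df_ddl.printSchema()
df_struct.printSchema()
```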
The guy answered very well! Got a good idea of what to say and what to avoid during an interview.
This is awesome. Literally, every concept from Spark is covered. A must watch interview.
16:53
The broadcast join decided on the go (at runtime) is done by Adaptive Query Execution, not by the Spark SQL engine or the Catalyst optimizer as said.
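A PySpark sketch of the two paths mentioned in this comment: an explicit broadcast hint resolved at plan time versus leaving it to AQE, which can convert a shuffle join into a broadcast join at runtime based on actual statistics. The table names and sizes are illustrative assumptions, not from the video.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder
    .appName("broadcast-join-demo")
    # Enable Adaptive Query Execution so Spark can re-plan joins at runtime
    # using the actual shuffle statistics.
    .config("spark.sql.adaptive.enabled", "true")
    # Static planning threshold (10 MB) used by the optimizer before execution.
    .config("spark.sql.autoBroadcastJoinThreshold", "10485760")
    .getOrCreate()
)

# Hypothetical tables for illustration.
orders = spark.range(1_000_000).withColumnRenamed("id", "order_id")
small_dim = spark.range(100).withColumnRenamed("id", "order_id")

# 1) Explicit hint: a broadcast join is forced at plan time.
hinted = orders.join(broadcast(small_dim), "order_id")

# 2) No hint: with AQE enabled, if one side turns out to be small at runtime,
#    AQE can convert the planned sort-merge join into a broadcast join.
adaptive = orders.join(small_dim, "order_id")

hinted.explain()
adaptive.explain()
```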
Great! This is very useful for anyone who wants to become a data engineer
Thanks for the videos.
It's very helpful!
Whenever a transformation is applied, it doesn't create a DAG; it creates a lineage between RDDs. The DAG is created when an action is called.
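A small PySpark sketch of that laziness: the transformations below only record lineage, and no job runs until the action at the end. The variable names and data are made up for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lineage-vs-dag").getOrCreate()
sc = spark.sparkContext

# Transformations: nothing executes yet, Spark only records the lineage.
numbers = sc.parallelize(range(10))
doubled = numbers.map(lambda x: x * 2)
evens = doubled.filter(lambda x: x % 4 == 0)

# The recorded lineage of the final RDD (its parent RDDs and transformations).
print(evens.toDebugString())

# Action: only now is the DAG of stages built and the job actually executed.
print(evens.collect())
```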
keep up the good work !
He answered most of the questions to the point. Very good.
Great initiative, Sumit... Kudos to both the interviewer and the candidate for conducting such an outstanding session.
Though it is a mock interview, I appreciate his calm and pleasant responses to all the questions!
He is always looking at his left side. xD
Great answers!
Java is used in Hadoop
You are bound to work with MapReduce
MapReduce can only handle batch processing, not real-time
Continue this series
Well scored.
The million-dollar question is: "Is he selected?" And how did he do in the 2nd round? 2nd round questions, please.
This is a demo Q&A just for understanding what questions are asked in a DE interview.
BTW, he got selected at Deloitte with a 120% hike.
Cheers 🎉
If he doesn't get selected after knowing this much, I feel sad for the recruiter.
Basically, a good interview.
Sir, please provide the questions in the description.
Good explanation, man 😅
The row_number values for marks are not correct (35:16).
The correct output is:
Marks | Row_number
100   | 1
100   | 2
99    | 1
98    | 1
98    | 2
98    | 3
97    | 1
96    | 1
95    | 1
Here, there's no specific partition given, right? Then how is it starting from 1 when it sees a different mark than the previous one? @ruchiraj
@BhishmaPrajapadhy From the question asked, it is understood that it partitions by marks and orders by marks.
Otherwise the dense_rank and rank output wouldn't be the same (see the sketch below).
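A PySpark sketch of the window functions discussed in this thread, assuming the window is partitioned and ordered by marks as the reply above suggests; the marks values mirror the comment above, everything else is made up for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, row_number, rank, dense_rank
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("window-functions-demo").getOrCreate()

# Hypothetical marks data, matching the values in the comment above.
marks = spark.createDataFrame(
    [(m,) for m in [100, 100, 99, 98, 98, 98, 97, 96, 95]], ["marks"]
)

# Assumption from the thread: partition by marks (and order by marks),
# so row_number restarts at 1 for every distinct mark.
per_mark = Window.partitionBy("marks").orderBy(col("marks").desc())
marks.withColumn("row_number", row_number().over(per_mark)) \
     .orderBy(col("marks").desc()).show()

# Without the partition, row_number simply numbers the rows 1..9,
# while rank and dense_rank handle the ties differently.
overall = Window.orderBy(col("marks").desc())
marks.withColumn("row_number", row_number().over(overall)) \
     .withColumn("rank", rank().over(overall)) \
     .withColumn("dense_rank", dense_rank().over(overall)) \
     .show()
```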
Bro has a PhD in Spark ❤
👏👏👏👏