I personally liked your videos. Can you share your LinkedIn?
Hi, now we have Tungsten, which uses encoders for serialization. So should we still use Kryo for serialization, or will Tungsten take care of it?
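As I understand it, Tungsten's encoders handle Dataset/DataFrame rows, while Kryo still matters when you shuffle or cache RDDs of custom JVM objects. A minimal sketch of enabling Kryo for that RDD path (the SensorReading class is just an illustrative placeholder):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Hypothetical domain class, used only to illustrate Kryo registration.
case class SensorReading(id: Long, value: Double)

object KryoSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("kryo-sketch")
      .setMaster("local[*]")
      // Kryo applies to RDD shuffles and caching of JVM objects;
      // Dataset/DataFrame rows are encoded by Tungsten instead.
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .registerKryoClasses(Array(classOf[SensorReading]))

    val spark = SparkSession.builder().config(conf).getOrCreate()

    // An RDD of JVM objects: this path benefits from Kryo.
    val rdd = spark.sparkContext.parallelize(Seq(SensorReading(1L, 0.5)))
    println(rdd.count())

    spark.stop()
  }
}
```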
Hi, I have one doubt: do these performance tuning tips apply only when we use RDDs?
Under the hood everything is RDD, be it Dataset or DataFrame.
@@TechWithViresh I don't think DataFrames and Datasets are powered by RDDs under the hood. Can you please share any citation for that claim? Thanks.
@@onbootstrap The RDD is the building block of Spark. No matter which abstraction we use, DataFrame or Dataset, the final computation is internally done on RDDs.
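One easy way to see this for yourself: every DataFrame exposes the RDD it is built on. A minimal sketch:

```scala
import org.apache.spark.sql.SparkSession

// Shows that a DataFrame is ultimately backed by an RDD.
object RddUnderTheHood {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("rdd-under-the-hood")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "b")).toDF("id", "label")

    // .rdd converts the DataFrame back to an RDD[Row]...
    println(df.rdd.getClass)
    // ...and queryExecution.toRdd is the RDD[InternalRow] that the
    // Catalyst-planned query actually executes on the cluster.
    println(df.queryExecution.toRdd.getClass)

    spark.stop()
  }
}
```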
very useful info
I have one common doubt. Spark is a cluster computing framework, so a Spark job is split, sent across various nodes in the cluster, and processed in parallel to give us an output. My doubt is: when the job is split and sent to the nodes, is only the data to be processed sent, or is the program code also sent? Please clarify.
So, distributed systems work on the architectural theme of sending code to the data, which is the backbone and the breakthrough concept for handling terabytes of data. See the sketch below.
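A minimal sketch of that idea (local mode here purely for illustration): the closure you pass to a transformation is what gets serialized and shipped to the executors, while the data partitions stay where they are until you ask for a result.

```scala
import org.apache.spark.sql.SparkSession

// "Sending code to the data": the lambda passed to map is serialized by the
// driver and shipped to the executors holding each partition; only the small
// aggregated result travels back.
object CodeToData {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("code-to-data")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    val numbers = sc.parallelize(1 to 1000000, numSlices = 8)

    // This closure (the "program code") travels to each partition's executor;
    // the million numbers are never collected on the driver.
    val total = numbers.map(n => n.toLong * n).reduce(_ + _)
    println(s"sum of squares = $total")

    spark.stop()
  }
}
```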
Thanks