DAG and Lazy Evaluation in Spark

  • Published on 17 Apr 2023
  • In this video I have talked about DAG and lazy evaluation in Spark in great detail. Please watch the video in its entirety and ask doubts in the comment section below.
    Directly connect with me on:- topmate.io/manish_kumar25
    For more queries, reach out to me on my social media handles below.
    Follow me on LinkedIn:- / manish-kumar-373b86176
    Follow Me On Instagram:- / competitive_gyan1
    Follow me on Facebook:- / manish12340
    My Second Channel -- / @competitivegyan1
    Interview series Playlist:- • Interview Questions an...
    My Gear:-
    Rode Mic:- amzn.to/3RekC7a
    Boya M1 Mic:- amzn.to/3uW0nnn
    Wireless Mic:- amzn.to/3TqLRhE
    Tripod 1:- amzn.to/4avjyF4
    Tripod 2:- amzn.to/46Y3QPu
    Camera 1:- amzn.to/3GIQlsE
    Camera 2:- amzn.to/46X190P
    Pen tablet (medium size):- amzn.to/3RgMszQ (recommended)
    Pen tablet (small size):- amzn.to/3RpmIS0
    Mobile:- amzn.to/47Y8oa4 (You really don't need to buy this one)
    Laptop:- amzn.to/3Ns5Okj
    Mouse + keyboard combo:- amzn.to/3Ro6GYl
    21 inch Monitor:- amzn.to/3TvCE7E
    27 inch Monitor:- amzn.to/47QzXlA
    iPad Pencil:- amzn.to/4aiJxiG
    iPad 9th Generation:- amzn.to/470I11X
    Boom Arm/Swing Arm:- amzn.to/48eH2we
    My PC Components:-
    Intel i7 Processor:- amzn.to/47Svdfe
    G.Skill RAM:- amzn.to/47VFffI
    Samsung SSD:- amzn.to/3uVSE8W
    WD Blue HDD:- amzn.to/47Y91QY
    RTX 3060 Ti Graphics card:- amzn.to/3tdLDjn
    Gigabyte Motherboard:- amzn.to/3RFUTGl
    O11 Dynamic Cabinet:- amzn.to/4avkgSK
    Liquid cooler:- amzn.to/472S8mS
    Antec Prizm Fan:- amzn.to/48ey4Pj

Comments • 115

  • @tnmyk_
    @tnmyk_ 4 months ago +5

    Killer explanation! Finally someone explained why lazy evaluation actually works better for big data processing. Amazing examples, very nice code! Loved the way you explained each line and each job step by step.

  • @prasadBoyane
    @prasadBoyane 2 months ago +1

    I think Spark considers 'sum' as an action, hence 4 jobs. Great series!!!
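
    A quick way to test whether an operation is an action is to run it alone and watch the Spark UI for new jobs. A minimal sketch, assuming flight_data is the DataFrame already loaded in the video (names are illustrative):

    # groupby().sum() returns a new DataFrame lazily; on its own it should
    # register no job, while show() is the action that triggers execution.
    total = flight_data.groupby("DEST_COUNTRY_NAME").sum("count")  # transformation
    total.show()  # job(s) appear in the Spark UI only at this point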

  • @kavyabhatnagar716
    @kavyabhatnagar716 9 months ago

    Wow! Thank you for such a great explanation. ❤

  • @prabhakarkumar8022
    @prabhakarkumar8022 3 months ago +1

    Awesome, brother!!!!!

  • @prabhatgupta6415
    @prabhatgupta6415 1 year ago

    He has completely mastered Spark.

  • @akhiladevangamath1277
    @akhiladevangamath1277 1 month ago +1

    Thank you, thank you, thank you, Manish, for this video ✨✨✨

  • @prathapganesh7021
    @prathapganesh7021 3 months ago

    Thank you so much for clarifying my doubts 🙏

  • @Watson22j
    @Watson22j 1 year ago +3

    wow! very nicely explained. Thank you! :)

  • @rishav144
    @rishav144 1 year ago +1

    Great video, Manish bro.

  • @SanjayKumar-rw2gj
    @SanjayKumar-rw2gj 19 days ago

    Truly impressed, Manish bhai. Great explanation; as you said yourself, "You won't find this much detail anywhere else."

  • @bhavindedhia3976
    @bhavindedhia3976 3 months ago +1

    Amazing content.

  • @pramod3469
    @pramod3469 1 year ago

    Very well explained... thanks, Manish.

  • @manojkaransingh5848
    @manojkaransingh5848 1 year ago

    Wow! Very nice, bro.

  • @arijitsamaddar268
    @arijitsamaddar268 2 months ago +1

    Really good explanation!

  • @krishnavamsirangu1727
    @krishnavamsirangu1727 2 months ago

    Hi Manish,
    Thanks for explaining the concept in detail by running the code.
    I have understood the concepts of DAG, lazy evaluation, and optimization.

  • @akashprabhakar6353
    @akashprabhakar6353 2 months ago

    Awesome lecture... thanks a lot!

  • @abinashpandit5297
    @abinashpandit5297 10 months ago

    Very good, brother. I learned a lot of in-depth things from this today that I didn't know before. Keep it up 👍

  • @SqlMastery-fq8rq
    @SqlMastery-fq8rq 3 months ago

    Very well explained, sir. Thank you.

  • @ankitachauhan6084
    @ankitachauhan6084 1 month ago

    Thank you! Great teaching style.

  • @yugantshekhar782
    @yugantshekhar782 3 months ago

    Great explanation sir, really helpful!

  • @220piyush
    @220piyush 2 months ago

    What a treat this video was... Wow ❤

  • @aasthagupta9381
    @aasthagupta9381 12 days ago

    You are an excellent teacher, you make lectures so interesting! With these answers we'll teach the interviewers a thing or two :D

  • @ChandanKumar-xj3md
    @ChandanKumar-xj3md 1 year ago +2

    "How does a job get created?" This question had never been clear to me before, but thanks, Manish, for clearing it up; the understanding of lazy evaluation was a nice add-on. 👍

  • @choubeysumit246
    @choubeysumit246 1 month ago

    One action = one job is true for the RDD API only. One action on a DataFrame or Dataset can lead to multiple jobs being generated internally, or, due to Adaptive Query Execution, multiple jobs are created in Databricks as well, which you can see using the describe method.
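
    A minimal sketch of that contrast, assuming a local SparkSession (the CSV path is hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("jobs-demo").getOrCreate()

    # RDD API: one action generally maps to one job in the Spark UI.
    rdd = spark.sparkContext.parallelize(range(100))
    rdd.count()

    # DataFrame API: a single user-facing action can spawn several jobs,
    # e.g. inferSchema forces an extra pass over the file at read time,
    # and Adaptive Query Execution can add more at runtime.
    df = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("/tmp/flight_data.csv"))
    df.show()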

  • @souradeep.chatterjee
    @souradeep.chatterjee 3 months ago

    Detailed explanation. Better than paid lectures.

  • @a26426408
    @a26426408 3 months ago

    Very well explained.

  • @mission_possible
    @mission_possible 10 months ago

    Thanks for the session, and please make a video on Spark lineage.

  • @abhilovefood4102
    @abhilovefood4102 11 months ago

    Sir, your teaching is good.

  • @arpitchaurasia5132
    @arpitchaurasia5132 4 months ago

    Brother, you teach brilliantly; I thoroughly enjoyed it.

  • @tahiliani22
    @tahiliani22 3 months ago +1

    Awesome. By the way, do we know why it's creating 4 Spark jobs instead of 3?

  • @mmohammedsadiq2483
    @mmohammedsadiq2483 9 months ago +1

    I have a confusion: read and inferSchema are typically used with Spark's DataFrame API, which is part of Spark SQL. They are not transformations or actions; they are part of the logical and physical planning phase of Spark, which occurs before any actions are executed.

  • @maurifkhan3029
    @maurifkhan3029 10 months ago +4

    I too got confused about why the number of jobs is sometimes more or less than the number of actions. Try clearing the state using the menu option Run -> Clear state, and then run again the cell that has the code from reading the file through everything you want to perform. I think Databricks intelligently stores the state of the system, and later, when you run the same read command, the job count might not match.
    I tried this and it seems to be working.

    • @jatinyadav6158
      @jatinyadav6158 5 months ago +4

      The job count is right, it is 4, only because the sum() function is an action, which I guess Manish missed by mistake. Btw @Manish thank you so much for the amazing course.

    • @deepanshuaggarwal7042
      @deepanshuaggarwal7042 3 months ago

      @@jatinyadav6158 If 'sum' is an action, then why didn't it create a job before the 'show' line was added?

    • @jatinyadav6158
      @jatinyadav6158 2 months ago +2

      @deepanshuaggarwal7042 Yes, sum is an action; I am not sure why it didn't show a job earlier.

  • @rohitjhunjhunwala9174
    @rohitjhunjhunwala9174 1 day ago

    One thing: spark.read is a transformation, not an action. Does it become an action because we included some options? Please clarify.

  • @ruinmaster5039
    @ruinmaster5039 11 months ago

    Bro, please add a summary at the end.

  • @SanjayKumar-rw2gj
    @SanjayKumar-rw2gj 19 days ago

    Is there any cheat sheet to know which operations are transformations and which are actions, like read being an action whereas filter is a transformation?
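
    There is no single built-in cheat sheet, but the Spark docs list both groups. A rough summary as a sketch, assuming a local SparkSession (the data is illustrative):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("cheatsheet").getOrCreate()
    df = spark.createDataFrame([("United States", "India", 50)],
                               ["DEST_COUNTRY_NAME", "ORIGIN_COUNTRY_NAME", "count"])

    # Transformations (lazy; only extend the DAG): select, filter/where,
    # withColumn, groupBy + agg/sum, join, repartition, orderBy, distinct, union
    shaped = df.filter("count > 10").select("DEST_COUNTRY_NAME")  # no job yet

    # Actions (eager; trigger jobs): show, collect, count, take, head, first,
    # reduce, foreach, and the write/save family
    shaped.show()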

  • @jasvirsinghwalia401
    @jasvirsinghwalia401 1 month ago +1

    Sir, read and inferSchema are transformations, right, not actions? Then why did separate jobs get created for them?

  • @AmitSharma-ow8wm
    @AmitSharma-ow8wm 1 year ago

    Waiting for your next video...

    • @AmitSharma-ow8wm
      @AmitSharma-ow8wm 1 year ago

      @@rampal4570 Is it true, bro?

    • @manish_kumar_1
      @manish_kumar_1  1 year ago

      It will come today.

  • @krushitmodi3882
    @krushitmodi3882 11 months ago

    Sir, please finish this series a bit quickly so that we can give interviews. I have watched your entire channel.
    Thank you

  • @prateekpawar1871
    @prateekpawar1871 10 months ago

    Do you have theory notes for Spark?

  • @DevendraYadav-yz2so
    @DevendraYadav-yz2so 9 months ago

    How do we use the Databricks Community Edition, and how do we set up Spark with Databricks? Please explain this so that we can write the code.

  • @user-ww6yf3iq8q
    @user-ww6yf3iq8q 3 months ago

    The job is created because of the groupBy.

  • @deepaliborde25
    @deepaliborde25 6 months ago

    Where is the practical session link?

  • @pramod3469
    @pramod3469 1 year ago

    Does lazy evaluation take partitioning into account as well? For example, after we have applied orderBy on the salary column and we now want to show only the two highest salaries, will lazy evaluation also work here: will Spark process only the partition which has these two salary records, or will it process all partitions and then extract the two highest salary records for us?

    • @manish_kumar_1
      @manish_kumar_1  1 year ago +1

      Yes. Until you write .head(2) for the 2 highest records, your processing will not start, although in the backend it will create the DAG.
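
      A minimal sketch of that behavior, assuming a local SparkSession (data and column names are illustrative):

      from pyspark.sql import SparkSession
      from pyspark.sql.functions import col

      spark = SparkSession.builder.master("local[*]").appName("lazy-demo").getOrCreate()
      emp = spark.createDataFrame([("a", 10), ("b", 30), ("c", 20)],
                                  ["name", "salary"])

      # Transformation only: this just builds the DAG, no job is launched.
      top = emp.orderBy(col("salary").desc())

      # Action: only now does Spark plan and run the work.
      print(top.head(2))

      Note that Spark still has to scan every partition; for a sort followed by a small head/limit, the optimizer typically plans a TakeOrderedAndProject-style operator instead of materializing the full sorted output.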

  • @user-gt3pi6ir5u
    @user-gt3pi6ir5u 3 months ago

    Any idea now where the 4th job came from?

  • @ankitas4019
    @ankitas4019 3 months ago

    Where did he explain how to download the flight data?

  • @abhishekchaturvedi9855
    @abhishekchaturvedi9855 7 months ago

    Hello Manish.
    You mentioned that the SQL query gets optimized by Spark. I just wanted to know: will it help improve the execution time if we use the optimized query in our code itself, so that Spark need not do it?

    • @manish_kumar_1
      @manish_kumar_1  7 months ago +1

      Spark's optimization is very limited, so as developers we should write optimized code to make our process run faster.
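
      One way to see what the optimizer does and doesn't rewrite is explain(). A minimal sketch, assuming a local SparkSession (the data is illustrative):

      from pyspark.sql import SparkSession
      from pyspark.sql.functions import col

      spark = SparkSession.builder.master("local[*]").appName("explain-demo").getOrCreate()
      df = spark.createDataFrame([("US", "India", 5)], ["dest", "origin", "count"])

      # Catalyst collapses the two chained filters into a single predicate;
      # explain(True) prints the parsed, analyzed, optimized, and physical plans.
      df.filter(col("dest") == "US").filter(col("count") > 1).explain(True)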

  • @vaibhavdimri7419
    @vaibhavdimri7419 1 month ago

    Sir, did you figure out how 2 jobs got created on hitting one action?

  • @techworld5477
    @techworld5477 4 months ago

    Hi sir, when I run this code I am getting an error: name 'col' is not defined.
    How do I solve this?

  • @ChetanSharma-oy4ge
    @ChetanSharma-oy4ge 4 months ago

    I am trying to find out why 4 jobs are generated here although we have provided only 3 actions.

  • @anirbanadhikary7997
    @anirbanadhikary7997 1 year ago

    Today you didn't tell us the interview questions.

    • @manish_kumar_1
      @manish_kumar_1  1 year ago

      Basic questions would be there, like what a DAG is and what the edges and vertices in it are.

  • @chandanpatra1053
    @chandanpatra1053 4 months ago

    Some code has been written using Spark. Looking at it, how can one tell whether a given call is an 'action' or a 'transformation'?

    • @manish_kumar_1
      @manish_kumar_1  4 months ago

      You should Google which ones are actions. The rest are transformations.

  • @ordinary_indian
    @ordinary_indian 5 months ago

    Where can I find the files? I have just started the course.

  • @aditya_1005
    @aditya_1005 11 months ago +1

    Well explained... Sir, could you please clarify: 3 actions but 4 jobs created?

    • @manish_kumar_1
      @manish_kumar_1  11 months ago

      Your 3 actions created 4 jobs? Did you use show? Also, please paste your code in the comment section.

    • @hazard-le7ij123
      @hazard-le7ij123 9 months ago

      @@manish_kumar_1 The code you wrote also creates 4 jobs. Can you explain that?
      Below is my code, and the same thing is happening: 4 jobs are getting created. A stage is getting skipped, but why do we have an extra job, with 4 different job IDs?
      from pyspark.sql import SparkSession
      from pyspark.sql.functions import *
      spark = SparkSession.builder.master('local[5]') \
          .appName("Lazy Evaluation internal working") \
          .getOrCreate()
      flight_data = spark.read.format("csv") \
          .option("header", "true") \
          .option("inferSchema", "true") \
          .load("D:\\Spark\\flight_data.csv")
      flight_data_repartition = flight_data.repartition(3)
      us_flight_data = flight_data.filter(col("DEST_COUNTRY_NAME") == 'United States')
      us_india_data = us_flight_data.filter((col("ORIGIN_COUNTRY_NAME") == 'India') | (col("ORIGIN_COUNTRY_NAME") == 'Singapore'))
      total_flight_ind_sing = us_india_data.groupby("DEST_COUNTRY_NAME").sum("count")
      total_flight_ind_sing.show()
      input("Enter to terminate")

  • @chethanmk5852
    @chethanmk5852 4 months ago

    Why do we have 4 jobs when we are using only 3 actions in the application??

  • @prabhatsingh7391
    @prabhatsingh7391 10 months ago

    Hi Manish bhai, in the code snippet you said there are three actions in this application (read, inferSchema, and show), but in the Spark UI there are 4 jobs created. Can you please explain this?

    • @manish_kumar_1
      @manish_kumar_1  10 months ago

      1 job must have been skipped. If the data is small, try running explain and see; it should come to 3.

  • @abhayjr11
    @abhayjr11 1 month ago

    Brother, please give me the video before this one; I can't find it.

  • @avanibafna6207
    @avanibafna6207 1 year ago

    In my case the same code created 5 jobs. I have imported col, so will that also be treated as an action and create a new job? Is that so?

    • @manish_kumar_1
      @manish_kumar_1  1 year ago

      Can you please paste your code in the comment section?

    • @avanibafna6207
      @avanibafna6207 1 year ago

      @@manish_kumar_1
      from pyspark.sql.functions import col
      flight_data = spark.read.format("csv") \
          .option("header", "true") \
          .option("inferSchema", "true") \
          .load("dbfs:/FileStore/tables/flight_data.csv")
      flight_data_reparition = flight_data.repartition(3)
      us_flight_data = flight_data_reparition.filter("DEST_COUNTRY_NAME='United States'")
      us_india_data = us_flight_data.filter((col("ORIGIN_COUNTRY_NAME") == 'India') | (col("ORIGIN_COUNTRY_NAME") == 'Singapore'))
      total_flight_ind_sing = us_india_data.groupby("DEST_COUNTRY_NAME").sum("count")
      total_flight_ind_sing.show()

      (5) Spark Jobs
      Job 22 View (Stages: 1/1)
      Job 23 View (Stages: 1/1)
      Job 24 View (Stages: 1/1)
      Job 25 View (Stages: 1/1, 1 skipped)
      Job 26 View (Stages: 1/1, 2 skipped)

      flight_data: pyspark.sql.dataframe.DataFrame = [DEST_COUNTRY_NAME: string, ORIGIN_COUNTRY_NAME: string ... 1 more field]
      flight_data_reparition: pyspark.sql.dataframe.DataFrame = [DEST_COUNTRY_NAME: string, ORIGIN_COUNTRY_NAME: string ... 1 more field]
      us_flight_data: pyspark.sql.dataframe.DataFrame = [DEST_COUNTRY_NAME: string, ORIGIN_COUNTRY_NAME: string ... 1 more field]
      us_india_data: pyspark.sql.dataframe.DataFrame = [DEST_COUNTRY_NAME: string, ORIGIN_COUNTRY_NAME: string ... 1 more field]
      total_flight_ind_sing: pyspark.sql.dataframe.DataFrame = [DEST_COUNTRY_NAME: string, sum(count): long]

      +-----------------+----------+
      |DEST_COUNTRY_NAME|sum(count)|
      +-----------------+----------+
      |    United States|       100|
      +-----------------+----------+

  • @raghavsisters
    @raghavsisters 11 months ago

    Why is it called acyclic?

    • @manish_kumar_1
      @manish_kumar_1  11 months ago

      Because it doesn't form a cycle. If it got into a cycle (you can think of it as a circle), it would run endlessly.
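
      A minimal way to look at the lineage Spark builds, assuming a local SparkSession (the RDD API prints it as text):

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.master("local[*]").appName("dag-demo").getOrCreate()

      # Each transformation adds a vertex; edges always point from parent to
      # child and never back, which is what keeps the graph acyclic.
      rdd = (spark.sparkContext.parallelize(range(10))
             .map(lambda x: (x % 2, x))
             .reduceByKey(lambda a, b: a + b))
      print(rdd.toDebugString().decode())  # prints the lineage as indented text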

  • @mahendrareddych334
    @mahendrareddych334 3 months ago

    Bro, you are explaining superbly, but why don't you explain in English? Not everyone knows Hindi. I don't know Hindi, but I watch your videos to understand the concepts; I'm not getting them fully because they are explained in Hindi.

  • @ajaysinghjadoun9799
    @ajaysinghjadoun9799 1 year ago

    Please make a video on window functions.

  • @snehalkathale98
    @snehalkathale98 3 months ago

    Where do I get the CSV file?

  • @asif50786
    @asif50786 1 year ago

    How many more videos are coming on Apache Spark?

    • @manish_kumar_1
      @manish_kumar_1  1 year ago

      Around 20-30. It's just the beginning of Spark.

  • @shrikantpandey6401
    @shrikantpandey6401 1 year ago

    Could you provide the notebook link? It would be good for hands-on practice.

    • @manish_kumar_1
      @manish_kumar_1  1 year ago +2

      I don't provide notebooks or PDFs. Take notes and type every line of code yourself. This will give you confidence.

    • @sankuM
      @sankuM 1 year ago

      @@manish_kumar_1 This is indeed a really great point! However, if possible, do share your own reference material for our benefit! Thanks! This series is really helpful. I have 4+ years of experience in DE but never tried to go into Spark internals; now, while interviewing for a switch, I'm definitely going to utilize all this! Keep 'em coming!! 🙌🏻👏🏻

  • @navjotsingh-hl1jg
    @navjotsingh-hl1jg 1 year ago

    Brother, please give us the file for this lecture.

  • @vsbnr5992
    @vsbnr5992 1 year ago +1

    NameError: name 'flight_data_repartition' is not defined. What should I do in this case? Even though I imported functions and types from pyspark, I am stuck here, please help.

    • @manish_kumar_1
      @manish_kumar_1  1 year ago

      Seems like your df is not defined.

    • @vsbnr5992
      @vsbnr5992 1 year ago

      @@manish_kumar_1 OK, it's working now, thanks.

  • @khurshidhasankhan4700
    @khurshidhasankhan4700 7 months ago

    Sir, how are two jobs getting created when reading the CSV? We are calling only one action, read.
    If possible, please clarify.

    • @manish_kumar_1
      @manish_kumar_1  7 months ago

      You must have used inferSchema as well. That's why it's showing up.
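
      A minimal sketch of the difference, assuming a local SparkSession (the path is hypothetical):

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.master("local[*]").appName("infer-demo").getOrCreate()

      # Without inferSchema every column is read as string, so the read is
      # planned lazily and typically shows no separate type-inference job.
      df1 = spark.read.option("header", "true").csv("/tmp/flight_data.csv")

      # With inferSchema=True, Spark scans the data up front to guess column
      # types, which shows up as extra job(s) in the Spark UI.
      df2 = (spark.read
             .option("header", "true")
             .option("inferSchema", "true")
             .csv("/tmp/flight_data.csv"))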

    • @khurshidhasankhan4700
      @khurshidhasankhan4700 7 months ago

      @@manish_kumar_1 Thank you, sir. Can you please share the list of actions, i.e. how many actions there are in Spark? If possible, please share, sir.

  • @DevendraYadav-yz2so
    @DevendraYadav-yz2so 9 months ago

    I have watched up to Lecture 7. How do I run the code you are showing with Databricks and PySpark?

    • @manish_kumar_1
      @manish_kumar_1  9 months ago

      You have to watch the practical and fundamentals playlists together. I explained this in the very first video.

  • @akhilgupta2460
    @akhilgupta2460 1 year ago

    Hi Manish bhai, could you provide the flight data file?

    • @manish_kumar_1
      @manish_kumar_1  1 year ago

      I explained it in one of the videos. Please follow all the videos in sequence.

  • @Tushar0797
    @Tushar0797 1 year ago

    Brother, please clear up the doubt about how that extra job got created.

    • @manish_kumar_1
      @manish_kumar_1  1 year ago

      Go to the SQL tab and see how many jobs were skipped. And share your code and a screenshot of the SQL tab with me on LinkedIn or Instagram.

    • @rohitgade2382
      @rohitgade2382 10 months ago

      @@manish_kumar_1 He's talking about your own video, you idiot 😂

  • @shivakrishna1743
    @shivakrishna1743 1 year ago

    Where can I get the flight_data.csv file? Please help.

  • @amlansharma5429
    @amlansharma5429 1 year ago

    us_india_data = us_flight_data.filter((col("ORIGIN_COUNTRY_NAME") == 'India') |
        (col("ORIGIN_COUNTRY_NAME") == 'Singapore'))
    It is showing an error: NameError: name 'col' is not defined
    How do I define col?

    • @AliKhanLuckky
      @AliKhanLuckky 1 year ago

      You have to import col; it's a function, so do the functions import, I think.

    • @manish_kumar_1
      @manish_kumar_1  1 year ago

      Correct: "from pyspark.sql.functions import *"

  • @3mixmusic564
    @3mixmusic564 9 months ago

    Guru, the "i" button never appeared, neither here nor there 😂😂😂

    • @manish_kumar_1
      @manish_kumar_1  9 months ago +1

      That was a big mistake 😂

  • @Tanc369
    @Tanc369 3 months ago

    Where will I find the CSV, sir?

    • @manish_kumar_1
      @manish_kumar_1  3 months ago

      There are 2 playlists; watch them in parallel. In the practical one you'll find the data in the description; copy it and save it as a CSV.

  • @aishwaryamane5732
    @aishwaryamane5732 4 months ago

    Hi sir, in which video of the series have you explained schema? @manish_kumar_1