PySpark Tutorial

  • Published on Dec 31, 2024

Comments • 549

  • @stingfiretube
    @stingfiretube 11 months ago +96

    This man is singlehandedly responsible for spawning data scientists in the industry.

  • @MSuriyaPrakaashJL
    @MSuriyaPrakaashJL 1 year ago +27

    I am happy that I completed this video in one sitting

  • @yitezeng1035
    @yitezeng1035 2 years ago +15

    I have to say, it is nice and clear. The pace is really good as well. There are many tutorials online that are either too fast or too slow.

  • @shritishaw7510
    @shritishaw7510 3 years ago +90

    Sir Krish Naik is an amazing tutor, learned a lot about statistics and data science from his channel

  • @alireza2295
    @alireza2295 3 months ago +2

    This video provides an excellent starting point for the journey: clear, concise, and incredibly efficient. Great job!

  • @candicerusser9095
    @candicerusser9095 3 years ago +30

    Uploaded at the right time. I was looking for this course. Thank you so much.

  • @anikinskywalker7127
    @anikinskywalker7127 3 years ago +364

    Why are u uploading the good stuff during my exams bro

  • @oiwelder
    @oiwelder 2 years ago +13

    0:52:44 - complementing PySpark GroupBy and Aggregate Functions:
    from pyspark.sql.functions import sum, max, min

    df3 = df3.groupBy(
        "departaments"
    ).agg(
        sum("salary").alias("sum_salary"),
        max("salary").alias("max_salary"),
        min("salary").alias("min_salary"),
    )

  • @nagarjunp23
    @nagarjunp23 3 years ago +29

    You guys are literally reading everyone's mind. Just yesterday I searched for pyspark tutorial and today it's here. Thank you so much. ❤️

  • @baneous18
    @baneous18 1 year ago +7

    42:17 Here 'Missing values' is only replacing nulls in the 'Name' column, not anywhere else. Even if I specify the column names as 'age' or 'experience', it's not replacing the null values in those columns.

    • @Star.22lofd
      @Star.22lofd 1 year ago

      Lemme know if you get the answer

    • @WhoForgot2Flush
      @WhoForgot2Flush 3 months ago

      Because they are not strings. If you cast the other columns to strings it will work as you expect, but I wouldn't do that; just keep them as ints.

  • @ygproduction8568
    @ygproduction8568 3 years ago +104

    Dear Mr Beau, thank you so much for amazing courses on this channel.
    I am really grateful how such invaluable courses are available for free.

    • @sunny10528
      @sunny10528 2 years ago +5

      Please thank Mr Krish Naik

  • @mohandev7385
    @mohandev7385 3 years ago +24

    I didn't expect krish.... Amazingly explained

  • @dataisfun4964
    @dataisfun4964 2 years ago +8

    Hi Krish Naik,
    All I can say is: just beautiful. I followed from start to finish, and you were amazing. I was more interested in the transformation and cleaning aspect, and you did it justice. I realized some lines of code didn't work like yours, but thanks to Google for the rescue.
    This is a great resource for an introduction to PySpark; keep up the good work.

  • @lakshyapratapsigh3518
    @lakshyapratapsigh3518 3 years ago +14

    VERY MUCH HAPPY TO SEE MY FAVORITE TEACHER COLLABORATING WITH FREECODECAMP

  • @MiguelPerez-nv2yw
    @MiguelPerez-nv2yw 2 years ago +6

    I just love how he says
    “Very very simple guys”
    And it turns out to be simple xD

  • @arturo.gonzalex
    @arturo.gonzalex 2 years ago +45

    IMPORTANT NOTICE:
    The na.fill() method now works only on subsets with matching datatypes; e.g. if the value is a string and the subset contains a non-string column, the non-string column is simply ignored.
    So it is now impossible to replace NaN values across columns of different datatypes in one call.
    The other important question is: how come the values in his csv file are treated as strings if he set inferSchema=True?

    • @kinghezzy
      @kinghezzy 2 years ago +2

      This observation is true.

    • @aadilrashidnajar9468
      @aadilrashidnajar9468 2 years ago

      Indeed, I observed the same issue. Don't set inferSchema=True while reading the csv; then .na.fill() will work fine.

    • @sathishp3180
      @sathishp3180 1 year ago +2

      Yes, I found the same.
      Fill won't work if the data type of the fill value is different from the column being filled. So it's preferable to fill the nulls in each column using a dictionary, as below:
      df_pyspark.na.fill({'Name' : 'Missing Names', 'age' : 0, 'Experience' : 0}).show()

    • @aruna5472
      @aruna5472 1 year ago +1

      Correct. Even if we give values using a dictionary like @Sathish P, if those data types are not string it will ignore the value. Once again, we need to read the csv without inferSchema=True; maybe the instructor missed saying that 'Missing values' applies only to string columns (look at 43:03, all strings ;-) ). But this is good material to follow, and I appreciate the help!

    • @gunjankum
      @gunjankum 1 year ago

      Yes, I found the same thing.
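
    A minimal, runnable sketch of the type-matched behaviour this thread describes (the column names mirror the tutorial's dataset and are assumptions):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("FillDemo").getOrCreate()
    df = spark.createDataFrame(
        [("Krish", 31, 10), ("Sunny", None, 8), (None, 29, None)],
        ["Name", "age", "Experience"],
    )

    # A string fill value only touches the string column; the int columns are ignored
    df.na.fill("Missing Values").show()

    # Per-column values via a dict sidestep the type-matching issue
    df.na.fill({"Name": "Missing Names", "age": 0, "Experience": 0}).show()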

  • @farees96
    @farees96 2 years ago +1

    Thanks!

  • @vivekadithyamohankumar6134
    @vivekadithyamohankumar6134 3 years ago +30

    I ran into an issue importing pyspark (ImportError) in my notebook even after installing it within the environment. After doing some research, I found that the kernel used by the notebook would be the default kernel, even if the notebook resides within a virtual env. We need to create a new kernel within the virtual env and select that kernel in the notebook.
    Steps:
    1. Activate the env by executing "source bin/activate" inside the environment directory
    2. From within the environment, execute "pip install ipykernel" to install IPyKernel
    3. Create a new kernel by executing "ipython kernel install --user --name=projectname"
    4. Launch jupyter notebook
    5. In the notebook, go to Kernel > Change kernel and pick the new kernel you created.
    Hope this helps! :)

    • @yashdhaga7047
      @yashdhaga7047 3 months ago

      Thank you so much!

  • @arjitsrivastav555
    @arjitsrivastav555 1 month ago

    Krish Naik has pretty much nailed it in this video. Loved it👏

  • @IvanSedov-i7f
    @IvanSedov-i7f 2 years ago +13

    A wonderful video and a wonderful manner of presenting the material. Thank you very much!

  • @JackSparrow-bj5ul
    @JackSparrow-bj5ul 9 months ago

    Thank you so much @Krish Naik for bringing this amazing content. The tutorial has really helped me clear up a few concepts, and the hands-on explanation is really thoughtful. Hats off to the FCC team. Looking forward to your channel @Krish.

  • @SameelJabir
    @SameelJabir 2 years ago +7

    Such an amazing explanation.
    For a beginner, the 1:50 hours are really worth it...
    You nailed it with very simple examples in a highly professional way...
    Huge hats off

  • @SporteeGamer
    @SporteeGamer 3 years ago +8

    Thank you so much for giving us these types of courses for free

  • @MöbiusuiböM
    @MöbiusuiböM 6 months ago +1

    15:20 - lesson 2
    31:35 - lesson 3

  • @TheBarkali
    @TheBarkali 2 years ago +2

    Dear Krish, this is just W.O.N.D.E.R.F.U.L 😉.
    Thanks so much, and thanks to professor Hayth.... who showed me the link to your training. Cheers to both of U guys

  • @lavanyaballem5085
    @lavanyaballem5085 1 year ago

    Such an amazing explanation! You nailed it, Krish Naik.

  • @alanhenry9850
    @alanhenry9850 3 years ago +9

    At last, Krish Naik sir on freeCodeCamp 😍

  • @yashbhawsar0872
    @yashbhawsar0872 3 years ago +8

    @Krish Naik Sir just to clarify: at 26:33, I think the Name column min/max is decided by lexicographic order, not by index number.

    • @shankiiz
      @shankiiz 1 year ago

      yep, you are right!

  • @sharanphadke4954
    @sharanphadke4954 3 years ago +34

    Biggest crossover: Krish Naik sir teaching for freeCodeCamp

  • @ujjawalhanda4748
    @ujjawalhanda4748 2 years ago +9

    There is an update in na.fill(): an integer value inside fill will replace nulls only in columns having integer data types, and the same applies for string values.

    • @harshaleo4373
      @harshaleo4373 2 years ago +1

      Yeah. If we are trying to fill with a string, it is filling only the Name column nulls.

    • @austinchettiar6784
      @austinchettiar6784 2 years ago +3

      @@harshaleo4373 so what's the exact keyword to replace all null values?
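
    There is no single keyword for this once the column types differ; one hedged workaround is to chain one type-matched fill per data type (df_pyspark is the tutorial's DataFrame):

    # Each fill only touches columns whose type matches its value
    df_filled = df_pyspark.na.fill("Missing Values").na.fill(0)
    df_filled.show()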

  • @ludovicgardy
    @ludovicgardy 1 year ago +1

    Really great, complete and straightforward course. Thank you for this, amazing job.

  • @ccuny1
    @ccuny1 3 years ago +5

    Yet another excellent offering. Thank you so much.

  • @cherishpotluri957
    @cherishpotluri957 3 years ago +7

    Krish Naik on FCC🤯🔥🔥

  • @bhatt_nikhil
    @bhatt_nikhil 10 months ago

    Really good compilation to get started with PySpark.

  • @tradeking3078
    @tradeking3078 3 years ago +11

    At 26:37, the min and max values from a column of string data type are not based on the index where the values are placed, but on the ASCII values of the words, i.e. the order of the characters within them:
    '0' < '9' < 'A' < 'Z' < 'a' < 'z'.
    Min is the word that sorts first and Max the one that sorts last; if two identical characters are found, it moves to the next character and compares again, and so on...
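
    A small sketch to verify the character-ordering behaviour described above (the names are made up):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("MinMaxDemo").getOrCreate()
    df = spark.createDataFrame([("Zara",), ("adam",), ("Ben",)], ["Name"])

    # Strings compare character by character ('B' < 'Z' < 'a' in ASCII),
    # so min is "Ben" and max is "adam", regardless of row position
    df.select(F.min("Name"), F.max("Name")).show()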

  • @graenathan
    @graenathan 2 years ago

    Thanks

  • @carlosrobertomoralessanche3632
    @carlosrobertomoralessanche3632 2 years ago +1

    You dropped this king 👑

  • @akashk2824
    @akashk2824 3 years ago +4

    Thank you so much sir, 100 % satisfied with your tutorial. Loved it.

  • @khangnguyendac7184
    @khangnguyendac7184 1 year ago +1

    42:15 PySpark has now updated na.fill(): it can only fill values whose type matches the column type. For example, in the video the professor could replace all 4 columns only because all 4 column types were string, matching "Missing value". This is explained at 43:02.

    • @adekunleshittu569
      @adekunleshittu569 7 months ago

      You have to loop through the columns

  • @sivakumarrajabather1140
    @sivakumarrajabather1140 9 months ago

    The session is really great and awesome. Excellent presentation. Thank you.

  • @RossittoS
    @RossittoS 3 years ago +2

    Great content! Thanks! Regards from Brazil!!!

  • @zesky6654
    @zesky6654 5 months ago

    42:11 - Note: the na.fill function only replaces values of the same type as the replacement, so the code on the screen will only replace the NULL values in the 'Name' column.

  • @simple_bihari_babua
    @simple_bihari_babua 1 year ago +1

    This feels like it starts in the middle. Was there a previous video that explained the installation and other setup?

  • @aliyusifov5481
    @aliyusifov5481 2 years ago +4

    Thank you so much for an amazing tutorial session! Easy to follow

  • @siddhantbhagat7216
    @siddhantbhagat7216 2 years ago +5

    I am very happy to see krish sir on this channel.

  • @nagarajannethi
    @nagarajannethi 3 years ago +5

    🥺🥺🙌🙌❣️❣️❤️❤️❤️ This is what we need

  • @konstantingorskiy5716
    @konstantingorskiy5716 2 years ago +4

    Used this video to prepare for the tech interview, hope it will help)))

    • @michasikorski6671
      @michasikorski6671 2 years ago +1

      Is this enough to say that you know Spark/Databricks?

  • @Pg11001
    @Pg11001 1 year ago

    At 42:23 a function called 'fill' is used, and it only replaces string-type values with other strings. So if you see only one or two places being replaced, go up a cell in your Python notebook (.ipynb) and set 'inferSchema=False' at read time, so that integer-type data that is NULL is caught even when it is not defined as integer.
    Thanks for the video.

  • @tech-n-data
    @tech-n-data 9 months ago +1

    42:11 As of 3/9/24, na.fill or fillna will not fill integer columns with a string.
    51:31 also: df_pyspark.filter('Salary<=15000')
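
    For reference, a hedged sketch of the two equivalent filter forms used around 51:31 (the data here is made up):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("FilterDemo").getOrCreate()
    df = spark.createDataFrame(
        [("Krish", 30000), ("Sunny", 25000), ("Paul", 15000)],
        ["Name", "Salary"],
    )

    df.filter("Salary <= 15000").show()           # SQL-string condition
    df.filter(F.col("Salary") <= 15000).show()    # Column-expression condition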

  • @dipakkuchhadiya9333
    @dipakkuchhadiya9333 3 years ago +4

    I like it 👌🏻
    We request you to make a video on blockchain programming.

  • @raghavsrivastava2910
    @raghavsrivastava2910 3 years ago +2

    Surprised to see Krish Naik sir here ❤️

  • @saiajaygundepalli
    @saiajaygundepalli 3 years ago +1

    Krish naik sir is teaching wow👍👍

  • @innovationscode9909
    @innovationscode9909 3 years ago

    Massive. This is a GREAT piece. Well done. Keep going

  • @johanrodriguez241
    @johanrodriguez241 3 years ago +5

    Finished! But I still want to see the power of this tool.

  • @AlexFosterAI
    @AlexFosterAI 9 hours ago

    This is so helpful... thank you so much. Would really appreciate one on LakeSail's PySail at some point in the future if possible! It's basically Spark but built on Rust, much faster with significantly reduced costs... it's growing quite fast so far.

  • @ronakronu
    @ronakronu 3 years ago +1

    nice to meet you krish sir😍

  • @barzhikevil6873
    @barzhikevil6873 3 years ago +4

    For the filling exercise at around minute 42:00, I cannot do it with integer-type data; I had to use string data like you did. But then in the next exercise, the one at minute 44:00, the function won't run unless you use integer data for the columns you are trying to fill.

    • @Richard-DE
      @Richard-DE 3 years ago +1

      @@caferacerkid you can try reading with/without inferSchema=True and check the schema; you will see the difference. Try reading again before the Imputer step.
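
    For numeric columns, the Imputer mentioned above can fill nulls with a statistic instead of a literal value; a minimal sketch (df_pyspark is the tutorial's DataFrame, and the column names are assumptions):

    from pyspark.ml.feature import Imputer

    # Replace nulls in numeric columns with the column mean
    imputer = Imputer(
        inputCols=["age", "Experience"],
        outputCols=["age_imputed", "Experience_imputed"],
    ).setStrategy("mean")

    imputer.fit(df_pyspark).transform(df_pyspark).show()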

  • @hariharan199229
    @hariharan199229 2 years ago +1

    Thanks a ton for this wonderful Masterpiece. It helped me a lot!

  • @DonnieDidi1982
    @DonnieDidi1982 3 years ago +3

    I was very much looking for this. Great work, thank you!

  • @thecaptain2000
    @thecaptain2000 1 year ago +1

    In your example, df_pyspark.na.fill('missing value').show() replaces null values with "missing value" just in the "Name" column.

  • @DuongTran-zh6td
    @DuongTran-zh6td 2 years ago +1

    thank you from Vietnam

  • @arulmouzhiezhilarasan8518
    @arulmouzhiezhilarasan8518 3 years ago +2

    Impeccable Teaching! Thanks!

  • @renadhc68
    @renadhc68 1 year ago

    Brilliant project based tutorial

  • @jorge1869
    @jorge1869 3 years ago +8

    The full installation of PySpark was omitted in this course.

  • @critiquessanscomplaisance8353
    @critiquessanscomplaisance8353 2 years ago +2

    This being free is literally charity! Thanks a lot!!!

  • @estelle9819
    @estelle9819 1 year ago +1

    Thank you so much, this is incredibly helpful.

  • @Dr.indole
    @Dr.indole 1 year ago

    This video is pretty much amazing 😂

  • @mariaakpoduado
    @mariaakpoduado 2 years ago +2

    what an amazing tutorial!

  • @Uboom123
    @Uboom123 3 years ago +22

    Hey Krish, thanks for the simple training on PySpark. Can you add a sample video on merging data frames and adding rows to a data frame?

  • @bhanu242629
    @bhanu242629 5 months ago

    Excellent explanation Bro... :)

  • @brown_bread
    @brown_bread 3 years ago +16

    One can do slicing in PySpark, though not exactly the way it is done in Pandas.
    E.g.
    Syntax: df_pys.collect()[2:6]
    Output :
    [Row(Name='C', Age=42),
    Row(Name='A2', Age=43),
    Row(Name='B2', Age=15),
    Row(Name='C2', Age=78)]

    • @programming_duck3122
      @programming_duck3122 2 years ago

      Thank you, really useful

    • @rajatbhatheja356
      @rajatbhatheja356 2 years ago

      However, one thing: take precaution while using collect. collect is an action and will execute your DAG.
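
    A hedged alternative when the DataFrame is large: bound the rows before they reach the driver (df_pys as in the comment above):

    # take(n) caps what is pulled to the driver, then slice locally
    rows = df_pys.take(6)[2:]

    # or keep the result distributed as a small DataFrame
    df_small = df_pys.limit(6)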

  • @venkatkondragunta9704
    @venkatkondragunta9704 2 years ago

    Hey Krish, Thank you so much for your efforts.. this is really helpful..

  • @porvitor
    @porvitor 2 years ago

    Thank you so much for an amazing tutorial session!🚀🚀🚀

  • @ammadniazi2906
    @ammadniazi2906 1 year ago +1

    Where are you setting up the environment variables for Spark and Hadoop?

  • @RaviKiran_Me
    @RaviKiran_Me 1 year ago

    At 1:01:09, the maximum salary you found is the maximum salary of each person among the departments he/she works in, not the maximum total salary of each person.

  • @ChaeWookKim-vd7uy
    @ChaeWookKim-vd7uy 3 years ago +1

    I love this pyspark course!

  • @doreyedahmed
    @doreyedahmed 2 years ago

    Thank you so much, very nice explanation.
    If you use PySpark, it's considered that we are dealing with Apache Spark.

  • @Jschmuck8987
    @Jschmuck8987 1 year ago

    Great video. Pretty much simple.

  • @simileoluwaaluko7582
    @simileoluwaaluko7582 2 years ago

    Great man. Great! 👍🏼👍🏼👍🏼👍🏼

  • @larsybarz
    @larsybarz 3 months ago

    Thanks so much man. This is awesome

  • @bansal02
    @bansal02 1 year ago

    Really thankful for the video.

  • @sanjaygstark
    @sanjaygstark 3 years ago +3

    It's quite impressive 💫✨

  • @francescos7361
    @francescos7361 1 year ago

    PySpark is code I like, as a coder.

  • @PallabM-bi5uo
    @PallabM-bi5uo 2 years ago +5

    Hi, thanks for this tutorial. If my dataset has 20 columns, why is the describe output not showing in a nice table like the above? It comes out all distorted. Is there a way to get a nice tabular format like above for a large dataset?
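
    One hedged workaround for wide DataFrames: show() accepts truncate and vertical flags (df here is a placeholder for your DataFrame):

    # Print full column contents instead of truncating at 20 characters
    df.describe().show(truncate=False)

    # Or print one row per block, which reads better with many columns
    df.describe().show(vertical=True)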

  • @sushilkamble8379
    @sushilkamble8379 3 years ago +1

    10:00 | Whoever is getting the "Exception: Java gateway process exited before sending the driver its port number" error: install Java SE 8 (Oracle) and the error will be solved.

    • @kazekagetech988
      @kazekagetech988 3 years ago

      Did you solve it, bro? I'm facing it now.

    • @vitazamb3375
      @vitazamb3375 2 years ago

      Me too. Did you manage to solve this problem?
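
    A sketch of the usual fix when Java 8 is installed but the notebook cannot find it (the path below is a placeholder; adjust it to your system):

    import os

    # Point PySpark at the Java 8 installation before creating the session
    os.environ["JAVA_HOME"] = r"C:\Program Files\Java\jdk1.8.0_301"  # placeholder path

    from pyspark.sql import SparkSession
    spark = SparkSession.builder.appName("Practice").getOrCreate()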

  • @spoorthydevineni822
    @spoorthydevineni822 1 year ago +1

    extraordinary content

  • @HariEaswaran98
    @HariEaswaran98 3 years ago

    Thanks!

  • @soundcollective2240
    @soundcollective2240 3 years ago

    This is pretty much a very useful video ;)
    thanks

  • @sukurcf
    @sukurcf 1 year ago

    26:34 I don't think it's based on index. I just tried changing the indices of the min and max string values; it looks like it's checking the lexicographic order.

  • @praveenkumare2157
    @praveenkumare2157 3 years ago +2

    At last I found a precious one

  • @saurabhdakshprajapati1499
    @saurabhdakshprajapati1499 7 months ago +1

    Good tutorial, thanks

  • @crazynikhil3811
    @crazynikhil3811 3 years ago +1

    Indians are the best teachers in the world. Thank you :)

  • @javierpatino4142
    @javierpatino4142 1 year ago +1

    Good video brother.

  • @Poori1810
    @Poori1810 2 years ago

    This is a great view on coding. Can you add some interview questions?

  • @anassrtimi3015
    @anassrtimi3015 2 years ago +1

    Thank you for this course

  • @ARJUNKUMAR-cr1gq
    @ARJUNKUMAR-cr1gq 3 years ago +1

    Welcome here sir🙏🙏

  • @aymenlamzouri3732
    @aymenlamzouri3732 2 years ago +2

    Very nice video. One question: how do you get the help window that displays the inputs of the functions you are using?

    • @kinghezzy
      @kinghezzy 2 years ago

      The Tab key, or you can use Shift + Tab, to see the documentation

  • @Nari_Nizar
    @Nari_Nizar 3 years ago +1

    At 1:09:00 when you try to add the Independent feature, I get the below error:

    Py4JJavaError                             Traceback (most recent call last)
    <ipython-input-...> in <module>
          1 output = featureassembler.transform(trainning)
    ----> 2 output.show()

    C:\ProgramData\Anaconda3\lib\site-packages\pyspark\sql\dataframe.py in show(self, n, truncate, vertical)
        492
        493         if isinstance(truncate, bool) and truncate:
    --> 494             print(self._jdf.showString(n, 20, vertical))
        495         else:
        496             try:
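
    For context, the assembler step at 1:09:00 looks roughly like the sketch below (column names are assumptions). A Py4JJavaError at output.show() is often a null slipping into an input column, so dropping or imputing nulls first is worth trying:

    from pyspark.ml.feature import VectorAssembler

    # VectorAssembler errors on nulls by default, so clean them first
    trainning = trainning.na.drop()

    featureassembler = VectorAssembler(
        inputCols=["age", "Experience"],
        outputCol="Independent Features",
    )
    output = featureassembler.transform(trainning)
    output.show()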

  • @redrum4486
    @redrum4486 2 years ago

    So PySpark is basically like normal Python for crazy large datasets, cool!