Thank You Sir! You SAVED my mini Project😊
What an explanation! It is very clear and informative. Thank you so much, I am really learning by doing it.
Thank you so much for creating a real-time Spark project! It really helps me a lot.
Thank you so much for this video. Can you please provide the code in the comments or description?
Nice explanation. Please do more PySpark videos.
Great job and nice explanation!
The project is awesome. Just a quick suggestion: if you can limit the "okay" after every sentence, it will be even more helpful. 😅😅
Yeah I am working on this
No harm! still it needs OKAY!
Great Video
Awesome 😎
Really enjoying your work
After searching around so much, I finally found the right video. Thanks 🙏
Do follow the latest playlist.
Thank you 🙏 you are doing very well
Best video
Thank you so much.
Astonishing
This was really helpful 👍
You give great content
Thank you very much.
Thank you
Thank you so much, sir.
Thank you so much!!
7:30, commenting at this timestamp. I have a 🧐 doubt: where have we defined the SparkSession? How is the spark variable/object working without defining SparkSession()? I'm new to PySpark. Can you please explain?
In Databricks you are not required to define it; the spark session is created internally for you.
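To add a bit of detail for anyone running this outside Databricks: the notebook pre-creates the spark object, but in a local script you build it yourself. A minimal sketch (the app name and file path below are just placeholders, not from the video):

from pyspark.sql import SparkSession

# Databricks notebooks inject a ready-made `spark` object; locally you
# create it explicitly before reading any data.
spark = SparkSession.builder \
    .appName("playstore-analysis") \
    .getOrCreate()

df = spark.read.csv("googleplaystore.csv", header=True)  # example path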
Thank you for your tutorial.
I have a question: which tool is used in the video tutorial, by the way?
Thanks😊
Databricks
@@learnbydoingit Thanks, sir.
Why create a temp view? You can do the same on the DataFrame with PySpark, right?
Yes, both are possible. If you prefer SQL, create a view and query it.
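For context, a small sketch of both styles, assuming a DataFrame named df from the Play Store data; the Category column is only an example name:

from pyspark.sql import functions as F

# Option 1: register a temp view and analyze with SQL
df.createOrReplaceTempView("Apps")
spark.sql("SELECT Category, count(*) AS cnt FROM Apps GROUP BY Category").show()

# Option 2: the same aggregation with the DataFrame API, no view needed
df.groupBy("Category").agg(F.count("*").alias("cnt")).show()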
Please provide an end-to-end GCP project, either a migration or something else.
Bro, can you give some suggestions on the real project issues we face during development?
Why am I getting null values in the Installs column for all rows even though it has values?
Need to debug; what's the code? It may be a data type issue.
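One common cause, offered only as a guess: if the raw Installs values contain characters like commas and a trailing "+", casting the string straight to an integer returns null for every row. A sketch of cleaning before casting (column name assumed):

from pyspark.sql import functions as F

# "1,000,000+" cannot be cast to int directly, so cast() silently yields null.
# Strip everything that is not a digit first, then cast.
df = df.withColumn(
    "Installs",
    F.regexp_replace("Installs", "[^0-9]", "").cast("int")
)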
This is a really helpful and amazing video, but everything should be in PySpark code.
Will make it
Okay 👍🏻
Which algorithm is used in this project?
I have one doubt: can I clean the data in Jupyter notebooks and then upload the file to PySpark?
Because I'm not that familiar with PySpark commands.
No. We use PySpark for larger-scale data processing, so you should learn it.
Thank you for the project. Sir, can you please share the dataset for it? I want to practice along with you.
Added the Excel file in the description.
@@learnbydoingit Thank you sir🙏🙏
In cmd 11 I'm getting NameError: name 'IntegerType' is not defined, and in cmd 13 AttributeError: 'DataFrame' object has no attribute 'createOrReplaceTempview'. Can you help me?
Check the spelling.
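For reference, the first error usually means the type was never imported and the second comes from a lowercase "v" in the method name; a quick sketch (the cast column is just an example):

from pyspark.sql.types import IntegerType  # fixes: NameError: name 'IntegerType' is not defined

df = df.withColumn("Installs", df["Installs"].cast(IntegerType()))  # example cast

# The method name is camelCase with a capital V:
df.createOrReplaceTempView("Apps")  # not createOrReplaceTempview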
Can you make a video about how to deploy and automate PySpark projects?
Why did you create a view or temp table and then start doing the analysis?
Just to use SQL queries for the analysis; we can do it without that as well.
@@learnbydoingit I heard about "write once, read many". So if I create views first, like you did, does that mean I can write many scripts on top of them and query the table fast?
Bro, I thought deployment or a job run/schedule would also be covered. I was waiting and the video got over.
Scheduling is easy; I will upload that.
It would be better if you share the code.
Hi Sir,
The dataset you provided in the link is in .xlsx format, but you used its location as a .csv. How is that possible?
Is it in xlsx format? Let me check.
Added the CSV file, can you check?
@@learnbydoingit Let me check again.
Thank you for uploading the CSV file today. ❤
I'm confused about how people were doing the hands-on with the xlsx file.
Thanks for the clear explanation. Can you provide the Excel sheet used in this session?
Please get it from Telegram.
Telegram link not working
@@alwaysbehappy1337 t.me/+Cb98j1_fnZs3OTA1
You missed the last question, "top paid rating apps".
Please do try to solve that one yourself.
@@learnbydoingit trying
None of the Telegram links are working, please fix them ASAP! Thank you.
I don't know what the problem is; others are able to join. Looks like a Telegram update issue.
@@learnbydoingit I saw others in the comments section facing the same issue, just like me; I thought maybe it was a link issue.
Can you tell me the name of the channel? I'll search for it and join!
@@Darklord-uk6yi DataEngineers
Could you please share which file is used in these videos?
Available on Telegram.
If it is possible, can you make a video on this use case?
Take any sample data and solve this using ADF, Databricks, and PySpark:
I own a multi-specialty hospital chain with locations all across the world. My hospital is famous for
vaccinations. Patients who come to my hospital (across the globe) will be given a user card with which
they can access any of my hospitals in any location.
Current Status:
We maintain customer data in country-wise databases due to local policies. Now, with legal approval to build a centralized data platform, we need our data engineering team to collate data from the individual databases into a single source of truth containing cleaned, standardized data. The business wants to generate a simple Power BI report for top executives summarizing the till-date vaccination metrics. This report will be published and generated daily for the next 18 months. The 3 metrics mentioned below are required for the phase 1 release.
Deliverables for assessment:
Python code that does the below:
Data cleansing/exception handling
Data merging into single source of truth
Data transformations and aggregations
Code should have unit testing
Metrics needed:
Total vaccination count by country and vaccination type
% vaccination in each country (You can assume values for total population)
% vaccination contribution by country (Sum of percentages add up to 100)
Expected output format:
Metric 1: CountryName, VaccinationType, No. of vaccinations
Metric 2: CountryName, % Vaccinated
Metric 3: CountryName, % Contribution
NOTE: The end goal is to create data that can be consumed directly by the Power BI report.
The scope is 3 countries; we will get data from each country. Initially you will receive a bulk-load file for each country; after that, you will receive daily incremental files for each country.
Thanks for sharing, I will do that 😃
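Not a full solution, just a minimal sketch of the three metrics, assuming the country files have already been merged into one DataFrame called vaccinations (columns country and vaccination_type) and a small population DataFrame (country, population); all names here are assumptions:

from pyspark.sql import functions as F

# Metric 1: total vaccinations by country and vaccination type
metric1 = (vaccinations
           .groupBy("country", "vaccination_type")
           .agg(F.count("*").alias("no_of_vaccinations")))

# Per-country totals, reused by metrics 2 and 3
totals = vaccinations.groupBy("country").agg(F.count("*").alias("vaccinated"))

# Metric 2: % vaccinated in each country (population values are assumed)
metric2 = (totals.join(population, "country")
           .withColumn("pct_vaccinated",
                       F.round(100 * F.col("vaccinated") / F.col("population"), 2)))

# Metric 3: % contribution by country (the shares sum to 100)
grand_total = totals.agg(F.sum("vaccinated")).first()[0]
metric3 = totals.withColumn("pct_contribution",
                            F.round(100 * F.col("vaccinated") / F.lit(grand_total), 2))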
Sir, please make one video on the whole flow of an ADE project. No need to explain it practically; I just want to learn the whole flow from data ingestion till Power BI. I am confused about how we connect to Databricks and then how we connect to Power BI. I didn't find any video like this; every video is short and to the point. Please explain what the previous and next step is in each video.
Okay I will upload that..
@@learnbydoingit Thank you... please upload it ASAP 🙏
Yes, I am also looking for it. If you find any such video, please share its link.
Bro, I joined the Telegram channel but was not able to find the dataset.
It's there in the Files section.
ok
Hi, can you add the dataset that was used in this session?
Please join the Telegram channel.
@@learnbydoingit Not working
Can we please get the dataset?
Available on Telegram.
Tried to join but it's not letting me @@learnbydoingit
What should be the name of this project?
Telegram link not working
Please do check the latest video's link.
OK
CSV files are always read in as the string datatype.
Yes
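A quick sketch of the two usual ways around that, assuming the Play Store CSV from the video; the file name and the Reviews/Rating columns are only examples:

# Option 1: let Spark infer the types while reading
df = spark.read.csv("googleplaystore.csv", header=True, inferSchema=True)

# Option 2: read as strings and cast the columns you need explicitly
from pyspark.sql.types import IntegerType, FloatType
df = (df
      .withColumn("Reviews", df["Reviews"].cast(IntegerType()))
      .withColumn("Rating", df["Rating"].cast(FloatType())))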
Query for Top 10 Installs:
%sql
WITH total_installs AS (
  SELECT App, SUM(Installs) AS total_install
  FROM Apps
  GROUP BY 1),
top_installs AS (
  SELECT App, row_number() OVER (ORDER BY total_install DESC) AS rnk
  FROM total_installs
)
SELECT App
FROM top_installs
WHERE rnk < 11;
SELECT App,sum(Installs) as total_installs
FROM apps
GROUP BY App
ORDER BY total_installs DESC
LIMIT 10
I think there is no need to use a window function here because LIMIT can do the job smoothly.
@@datawhiz_soumya Your query will fail in case of a tie in total installs; you will never get the correct top-10 list when there is a tie. That's why I used a window function.
@@RSquare2605 Okay, I got your point. Actually, I had not considered this scenario, but if we account for ties here, don't you think DENSE_RANK() would be more appropriate than row_number()? Say 3 apps have the same number of installs; then we should display all three of them, right? Instead of just the first one, since row_number assigns a unique value to every row.
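For what it's worth, a PySpark sketch of the dense_rank variant being discussed, assuming the same apps DataFrame; with ties at the same install count, the tied apps share a rank and all survive the top-10 filter:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

totals = df.groupBy("App").agg(F.sum("Installs").alias("total_installs"))

w = Window.orderBy(F.col("total_installs").desc())
top10 = (totals
         .withColumn("rnk", F.dense_rank().over(w))
         .filter("rnk <= 10"))
top10.show()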
Hi bro, I like your content. Do you also provide support for data engineering jobs?
Please do connect over Telegram.
Can you please share a sample resume?
Do check in the channel; an Azure data engineer resume is there.
@@learnbydoingit Thank you sir!
Please do it in PyCharm.
PySpark in PyCharm??
Bro, I have one question: if I want to put a project on my resume, how do I do it with the project name, description, and responsibilities?
Could you please share one or two projects with documentation?
It's a humble request, bro.
Sure I will do that
I don't have much idea about this, so could you please share it ASAP, bro?
If you don't mind.
@@Mehtre108 Which role are you preparing for?
@@learnbydoingit azure data engineer
@@Mehtre108 Do connect via the link mentioned in the description.
Code?
Is there any dataset link? Also, you are not explaining it properly.
Hey, do you have any dataset link that you'd like to upload here? I'm looking for the same.
Stop saying "in this particular".
Hi,
Could you please create a video on combining the 3 CSV data files below into one data frame dynamically?
File name: Class_01.csv
StudentID Student Name Gender Subject B Subject C Subject D
1 Balbinder Male 91 56 65
2 Sushma Female 90 60 70
3 Simon Male 75 67 89
4 Banita Female 52 65 73
5 Anita Female 78 92 57
File name: Class_02.csv
StudentID Student Name Gender Subject A Subject B Subject C Subject E
1 Richard Male 50 55 64 66
2 Sam Male 44 67 84 72
3 Rohan Male 67 54 75 96
4 Reshma Female 64 83 46 78
5 Kamal Male 78 89 91 90
File name: Class_03.csv
StudentID Student Name Gender Subject A Subject D Subject E
1 Mohan Male 70 39 45
2 Sohan Male 56 73 80
3 shyam Male 60 50 55
4 Radha Female 75 80 72
5 Kirthi Female 60 50 55
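Not a full video, but a minimal PySpark sketch of the dynamic combine, assuming Spark 3.1+ and that the three files sit in the working directory; the paths and the class_file column are illustrative. Subjects missing from a file simply come through as null:

from functools import reduce
from pyspark.sql import functions as F

paths = ["Class_01.csv", "Class_02.csv", "Class_03.csv"]  # adjust to your storage location

# Read each file, tag it with its source, then union by column name so the
# differing Subject columns line up and missing ones become null.
frames = [
    spark.read.csv(p, header=True, inferSchema=True).withColumn("class_file", F.lit(p))
    for p in paths
]
combined = reduce(lambda a, b: a.unionByName(b, allowMissingColumns=True), frames)
combined.show()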