Great content! I didn't know about the "twinx" method. It would be nice if you could make tutorials about advanced data visualization in Python.
twinx is pretty cool! I didn't learn about it until recently. An advanced Python data viz video is a great idea. Thanks for watching!
Can you name a famous real-life project which contributes to the field of economic and statistical analysis?
@@purbitamallick7596 thanks for commenting. Are you asking for a specific project or paper that has been published in the past, or an idea for future research?
@@robmulla I am asking for a specific project
@@purbitamallick7596 www.tandfonline.com/doi/abs/10.1080/00220485.2017.1320607 This article was published: "The authors provide step-by-step instructions on how to use FRED to compute the price elasticity of demand for motor vehicle fuels and gasoline." Hopefully that's what you were looking for.
An inspiring tour de force with a very satisfying outcome. A hundred years ago the data didn't exist; 25 years ago the data wasn't in electronic form in one place; 15 years ago it would have taken a team weeks if not months to pull this all together and then format graphs; yesterday I had no idea it could now all be accomplished in much less than an hour. Now I'll spend the next month or two going through it bit by bit, learning how to do this with my data. Thanks!
Thanks so much! Data has always existed though! 😀
I've done basic data analysis manually in Excel for a long time, but these tools are insane. Your way of explaining things step by step is going to help me get some of these more modern tools into my toolkit and supercharge my productivity and marketability as I look for my next job.
Can I get a data analysis job knowing only some Excel? Please let me know.
Hello, I was working through this, but the country names are not directly visible to me. They somehow appear as codes which cannot be deciphered. Could you help me?
Rob, you are a legend! This is worthwhile content! Thank you for everything you do. I am learning Python for data analytics and your tutorials are great! They also work at the beginner level and cover different aspects and ways of using pandas and other libraries. Thank you!
So glad you've found them helpful. 🙏
I rarely, if ever, comment, but this was a fantastic breakdown and explanation. Thank you!!
Hello, I would appreciate some help if you come across this comment. Have the datasets in FRED been updated? While I was working, all of the state unemployment values came out null, and when I cleaned it to remove the nulls, it dropped all the data columns; just the headers remained.
Also, the dataset started from 1760 and not 1960 as mentioned in the video.
Thanks!
Wow! This is awesome. I am contemplating doing a PhD in Economics and this is a great introduction to what I should expect. Thanks for your efforts.
Thanks for the feedback. I don't have a PhD in economics, but I'm glad you found this helpful!
It's great to see how other people work. There's always something to learn. In my case I'm learning everything that you showed in this video. Thank you so much for doing it.
Glad you found this video helpful Wilson! 😀 Please consider sharing it with others you think might also learn from it.
I've watched A LOT of data science tutorials and these are extremely well done. Thanks for the great content!
Hey! Thanks a ton for that feedback. I really appreciate it!
Dude you're so clean at this. Unbelievable
Glad you liked it! Share with a friend to spread the word 😊
The way you explain the entire walkthrough is just brilliant. You have a skill that many possess, but your way is really unique, like a statistical anomaly!
Another AWESOME video, man!!! Can't get enough of these, keep it up!
By the way, if anyone else is having an error with pd.set_option('max_columns', 500), I solved it by changing it to pd.set_option('display.max_columns', 500). I think this may be related to what version of Pandas you have installed...
Yes, good catch. That fixes it with newer pandas. Thanks for watching!
Thank you so much. I was worried I could not follow along with the lesson when the error popped up.
I am getting an error -- no module named kaggle_secrets... I searched about it but failed to resolve it and run the code... please help.
Thank you Rob. This was definitely an interesting walkthrough utilizing Fredapi for data exploration. I took this course to get an idea of what i can do with pandas for data exploration and you definitely delivered. Once again thanks.
Watching the video at a time when I should be watching some entertainment videos... It's bed time! BTW, this video of yours is going to take a couple of hours of mine tomorrow... I have to go through it... line by line, code by code... A couple of new things, which require some practice!
YouTube is the best free university ever!! Amazing.
You can learn a lot on YouTube for sure.
This is great! Seems like I learn some new pandas tricks in every one of your videos. Thanks dude!
Never stop learning. I love it!
For whatever reason I was having issues with setting the max_columns to 500. I had to use:
pd.set_option("display.max_columns", 500)
Just putting it out there in case someone else gets an error about "Pandas matched multiple keys".
Thank you
Same
You legend, thanks :)
Thank you, I was so confused 😅
thanks!
I just want to thank you for what you do. Some people are special in the way they convey information and teach and you just have it. You are going to help me succeed in this field! Keep it up man.
Just making a dataframe with the entire unemployment of the US by state would have taken me at least 5 hours or even more, so this is a huge help for me and my future projects... you are awesome, please keep up the good content. Greetings from Colombia.
nah man. it just takes time. at most it would've taken you an hour or two just to learn the syntax to import a clean dataset. that's it. once you understand why things happen, your understanding exponentially grows. I started in python about 2 years ago & now pandas is basically second nature for quick imports and data cleaning
Wow. Great video. Awesome to see the iterative process you use (and hard to take notes on it, so thanks for the links).
Glad you found it helpful and thanks for watching!
Great video. At about 40:00, when you loop over the states to plot each of them, you introduce an index that you increment manually. This can be avoided with the built-in function enumerate().
Good point! I actually made a short about enumerate, and I don't use it myself when I should. 😂
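For anyone following along, here is a minimal sketch of the enumerate() approach (the dummy DataFrame, column names, and grid size below are just illustrative, not the video's actual data):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Dummy stand-in for the per-state unemployment DataFrame built in the video
dates = pd.date_range("2000-01-01", periods=120, freq="MS")
uemp_states = pd.DataFrame(
    np.random.rand(120, 6) * 10,
    index=dates,
    columns=["CA", "TX", "NY", "FL", "OH", "WA"],
)

fig, axs = plt.subplots(2, 3, figsize=(12, 6), sharex=True)

# enumerate() hands back (index, column) pairs, so no manual counter is needed
for i, state in enumerate(uemp_states.columns):
    uemp_states[state].plot(ax=axs.flatten()[i], title=state)

plt.tight_layout()
plt.show()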
Wow, great content. At 43:35, voila, such neat and aesthetic images. I fell in love with Python. I was just scrolling YouTube, wow.
Glad you liked the video!
Thank you. I do use a lot of financial data at my work. This is going to help a lot. The visualization techniques are great to adopt!
Glad it was helpful! Matplotlib can be extremely powerful for plotting.
Great content and great pace... I particularly like what you did from "plot with plotly" onward, the data wrangling. Thanks for sharing.
Thanks Gisele 🙏 Plotly is awesome!
this is the data channel that I need
You are the viewer that I need! Thanks so much for watching and giving feedback. Please spread the word to anyone else you think might enjoy my content!
Excellent video. Thank you. Before the last cell I added
uemp_states = uemp_states.reindex(sorted(uemp_states.columns), axis=1)
so that the states would show up in alphabetical order.
Nice work! Thanks for sharing, that's a good trick.
Oooh, amazing skills, such an inspiration for my studies.
Thanks for the content!
Happy to hear that!
Such amazing content. Please make more videos like this with different types of APIs. I love watching your videos 😊❤️
Thanks so much for the feedback! 🙏 Share the video with others you think might enjoy it.
Splendid 😂, I love the way you walk through solving the questions.
Glad you liked it! I'm trying my best.
Fascinating stuff man, thank you for this!
Glad you found it helpful
Absolutely love it. Thank you Rob!
One of my favorite YouTubers. Love your work, Rob 😍😍
Thank you for everything.
Thanks @Rob Mulla, this video is very informative and motivating.
That's a great tutorial. Thank you. Hope you get 100k subscribers.
Glad you found it helpful! 100k is my goal for 2023 😀
Awesome content, very helpful for Python learners!
Yeah, it's so cool. I appreciate it. Hoping next time you make a video on inter-market analysis (such as bond rates of many countries, stock exchanges, commodity prices) to find out how they interact, so we can predict something in the future. Thanks again ❤❤❤
Excellent video, lucky to learn from a tutor like you. As a beginner I learned to use various data structures, and within a project too. Keep doing great work!
The Participation Rate dataset seems to have been updated, so you have to include:
partiRate_df = partiRate_df.loc[partiRate_df['title'].str.contains('Labor Force Participation Rate for')]
Only then do you get a shape of 51x15.
Glad it was helpful! Not sure about that situation.
Oh Man. Great Tutorial. Thank you.
Glad it was helpful Chizzle! If you can, share it with others you think might learn from it too.
Great Video! I learned a lot of tips and tricks from this analysis!
Great work sir, keep it up 🥳
Keep watching!
Gee, this is the greatest content ever.
Great content and incredibly insightful and transparent
Brilliant content and explanation thanks. Definitely subscribed for more.
Thanks so much!
just commenting for the algo! thanks again for the great vids rob!
Just found this channel. Subscribed
Thanks for supporting! Tell your friends. 😊
Thank you. Another great tutorial. I enjoyed watching you fix the matplotlib plot the most. Couldn't that have been easier with plotly?
Glad it was helpful! I typically like using matplotlib if the plot is static and plotly or bokeh if the plot is interactive. I'm not sure if plotly would've been easier, but if you want to give it a shot and share the kaggle notebook I'd love to see it!
Great video. Thanks for this.
I appreciate that.
awesome explanation, thanks
Nice data analysis video! Normally the data contains not only the states but also other metrics, e.g. age ranges, etc. However, if you want to plot the states only (there are 51), do you set gridsize = (11, 5) and leave the last four cells empty (since we iterate 51 times), or is there a clever way to tell Matplotlib to display a 10 x 5 grid of states and place the 51st on its own right below the grid on the same plot?
Good question. I believe it's possible, but I typically just use an evenly spaced grid. I'd need a drawing to see exactly what you mean.
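For what it's worth, one common trick (a sketch with made-up data, not from the video) is to build the 11 x 5 grid and then hide the four unused cells:

import numpy as np
import matplotlib.pyplot as plt

n_series = 51                       # e.g. 50 states + DC
nrows, ncols = 11, 5                # 55 cells, so 4 are left over

fig, axs = plt.subplots(nrows, ncols, figsize=(15, 20), sharex=True)
axs = axs.flatten()

x = np.linspace(0, 10, 100)
for i in range(n_series):
    axs[i].plot(x, np.sin(x + i / 5))
    axs[i].set_title(f"Series {i}")

# Hide the unused cells at the end of the grid
for ax in axs[n_series:]:
    ax.set_visible(False)

plt.tight_layout()
plt.show()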
I just stumbled upon your video and channel. Thank you for this one. I am on a Windows machine and 'pip install fredapi > /dev/null' doesn't seem to work; instead 'pip install fredapi' works, with all the output. Also, 'pd.set_option('max_columns', 500)' doesn't seem to work; instead 'pd.set_option('display.max_columns', 500)' works in my Jupyter notebook. Any suggestions as to what's wrong?
Hey Tridib. Glad you enjoyed the video. The > /dev/null will only work on a Linux-based machine; it just suppresses the output, so it's fine to exclude it. I believe the max_columns setting has changed in the latest version of pandas, so the way you are doing it is correct. Hope that helps!
Thanks so much Rob. Do you have a video on the computer + software setup that you use to make these videos? Seeing your PiP with other windows plus the live coding is amazingly useful. Thanks again and hope all's well.
Thanks for watching Peter. I've talked about my setup on stream before, but maybe it would be a good idea to make an official video about it. I use conda for environments and pip for Python packages. I run Ubuntu with JupyterLab and VS Code as my main IDEs, but I also love Vim.
Incredible video thanks for the inspiration
Thanks so much for the positive feedback.
Thank you! This was very informative!
Glad it was helpful!
Having a hard time installing the FredAPI - any thoughts as to why? Kaggle won't connect to it over the internet.
Is the issue that you can't even install the Python API? It doesn't seem like that should happen. I'm not sure if they changed something recently, but I will let you know.
The data in the unemployment dataframe seems to have changed quite a bit... I'm doing some successful troubleshooting, but it's taking me a while... Can you do a video on how to change the different .drop functions to work with the updated data? Thanks!
Thanks for letting me know. I’m surprised to hear it changed. Any chance you know what the format difference is? It should be the same for older dates. Let me know if you happen to find a solution and hopefully I’ll have time soon to look into it.
Had a similar problem, and came to this solution:
# Concat as in the video, but don't perform the drop yet
uemp_results = pd.concat(all_results, axis=1)
# then iterate over all column titles and add them to a list if they are longer than 4 characters
cols_to_drop = []
for i in uemp_results:
    if len(i) > 4:
        cols_to_drop.append(i)
# Then drop those columns.
uemp_results = uemp_results.drop(columns=cols_to_drop)
@@pizzpie09 Hell Yea! Thank you
you are absolutely awesome bro👍
Thank you so much 😀
Great video as always 😃. I'm wondering, at 32:06, if we could use the rename() method and pass it the dict you just created, like uemp_states.rename(columns=id_to_state).
Thanks! I think rename can take a dictionary. That's a good point.
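A quick sketch of that rename() idea (the two FRED-style series ids and the values below are just illustrative):

import pandas as pd

# Hypothetical mapping from FRED-style series ids to state names
id_to_state = {"CAUR": "California", "TXUR": "Texas"}

df = pd.DataFrame({"CAUR": [4.1, 4.0], "TXUR": [3.9, 3.8]})

# rename() accepts a dict; columns missing from the mapping are left unchanged
df = df.rename(columns=id_to_state)
print(df.columns.tolist())  # ['California', 'Texas']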
This is very useful, thank you very much!!
How do I label the left and right y axis?
ax.set_ylabel()?
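A minimal sketch, assuming a twinx() secondary axis like the one in the video (the data and label text are made up):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax2 = ax.twinx()                              # secondary y-axis, same x-axis

ax.plot([1, 2, 3], [4.0, 4.5, 5.0], color="tab:blue")
ax2.plot([1, 2, 3], [62.0, 61.5, 61.0], color="tab:red")

ax.set_ylabel("Unemployment rate (%)")        # left y-axis label
ax2.set_ylabel("Participation rate (%)")      # right y-axis label
plt.show()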
9:40 If limit=1000 and sort_order='popularity', do you get the top 1000 sorted by popularity or the first 1000 sorted by popularity?
That's a great question. I believe it's the former (get top 1000 then sort) but I'd have to read the docs/code to know for sure. Let me know if you find out.
@@robmulla You are correct. I finally had a chance to play with this. As long as you include order_by='popularity', you get the N most popular datasets. If you set limit=100, you get a minimum popularity of 24. If you then leave out the order_by, you get a different list, with 66 differences and a minimum popularity of 1.
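In case it helps anyone reproduce that test, here is a sketch of the search call (assuming fredapi's search() takes limit/order_by/sort_order as used in the video; replace the placeholder key with your own):

from fredapi import Fred

fred = Fred(api_key="your_32_character_api_key")  # placeholder, use your real key

# With order_by='popularity' the limit appears to keep the N most popular matches;
# without it, the same limit returns a differently ordered (and different) set.
results = fred.search(
    "unemployment",
    limit=100,
    order_by="popularity",
    sort_order="desc",
)
print(results["popularity"].min())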
I've tried to code along with your guide from your YouTube channel in a Jupyter Notebook, but it showed an error like this: "ModuleNotFoundError: No module named 'fredapi'". How can I fix it? Thanks Rob.
Just use this code: !pip install fredapi
Very good content. Download !!!
Glad you liked it. Thanks for watching. 🙂
Using this and data like this to work on a QoL assessment across different countries! It really exposed me to new data and gave me a good headstart. I've made a few choropleth maps for the first time! Granted, not with my own geojson coords, but with the prebuilt countries & states. Still exciting though! Thanks!
Wow, that sounds like a really cool project. You should share it here or on twitter when you are done. I'd love to see what you did. Thanks for watching!
Hi, thanks for the great video. I have a question about the first column with dates. It's currently there as the index and has no column title. I wonder how I can use that column to filter for rows that are after the year 1978. I tried reset_index and calling ignore_index=True in pd.concat, but both resulted in an error. I appreciate any tips you can provide.
I think that is a Series and not a DataFrame. reset_index without ignoring the index would turn it into a DataFrame. Check my intro to pandas video where I cover it in detail. Good luck!
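A small sketch of both options (the toy series below stands in for a fred.get_series() result, which comes back with the dates in a DatetimeIndex):

import numpy as np
import pandas as pd

# Toy series with dates in the index, like a fred.get_series() result
dates = pd.to_datetime([f"{year}-01-01" for year in range(1970, 1990)])
s = pd.Series(np.arange(20), index=dates, name="UNRATE")

# Option 1: filter directly on the DatetimeIndex
after_1978 = s[s.index.year > 1978]

# Option 2: reset_index() turns the Series into a DataFrame with a date column
df = s.reset_index().rename(columns={"index": "date"})
after_1978_df = df[df["date"].dt.year > 1978]

print(len(after_1978), len(after_1978_df))  # both 11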
Nice video. I am curious how come you didn't use the pandas SQL commands?
Excellent, Congratulations
Thank you so much 😀
Great content!!!!
Any videos or content on how to set up the Kaggle workspace, like the links to functions and calling out variables used above?
Thanks.
Glad you enjoyed the video. Are you asking if there are any videos about how to setup a workspace locally similar to a kaggle notebook? If so I do discuss it in my jupyter notebook tutorial. Sorry if that doesn't answer your question.
@@robmulla Hi, in a Kaggle notebook. I tried to follow along, but I didn't see the function help available for fred as you show; also, in a Kaggle notebook, the variable input could not be reused in the next cell. Any shortcuts and setup tips would definitely help. Thanks.
Thanks again for this video. Do you know if Fred has some time limitation when it comes to downloading the data? Sometimes it gives a run time error when downloading a lot of data at once.
Also here’s a one-liner version of getting the data and concatenating.
all_results = pd.concat([fred.get_series(myid).to_frame(myid) for myid in unemp_df.index],axis=1)
Love Python’s simplicity!
Thanks
It could have a timeout if you hit it too many times. You may need to make it a loop and add a sleep for a second every iteration. Good luck.
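A sketch of that loop-with-a-sleep idea (the series ids and the one-second pause are just illustrative; FRED's actual rate limits aren't stated here):

import time
import pandas as pd
from fredapi import Fred

fred = Fred(api_key="your_32_character_api_key")  # placeholder, use your real key
series_ids = ["CAUR", "TXUR", "NYUR"]             # illustrative ids

all_results = []
for myid in series_ids:
    try:
        all_results.append(fred.get_series(myid).to_frame(myid))
    except Exception as err:      # e.g. a rate-limit related runtime error
        print(f"Skipping {myid}: {err}")
    time.sleep(1)                 # pause between requests to avoid hammering the API

uemp_results = pd.concat(all_results, axis=1)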
Thank you very much for this tutorial, we learned a lot!! But I have a question: is it "OK" to scrape the website, I mean in a legal way? I want to put something similar on my resume, but I wonder if it is okay with this method.
I’m not able to give legal advice, but I haven’t had any issues with scraping sites yet.
Just a little bit of constructive criticism: when something doesn't work, like using the strip method in the string manipulation, and you end up doing things in a different, better way, please explain a bit about why it didn't work.
Hi, thanks for this great video.
Thank YOU for watching mehdi.
Great work!
2:40 What is the ! for?
What is the > for?
What is > /dev/null ?
The ! allows you to run bash commands in the notebook.
>/dev/null just makes it so it doesn't print the output. You can also use the -q flag.
@@robmulla Thanks, much appreciated
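For reference, the quiet-install variant mentioned above looks like this in a notebook cell (the -q flag works on Windows too, unlike the /dev/null redirect):

# '!' hands the line to the shell; -q quiets pip's normal output
!pip install -q fredapi

# The video's redirect does the same thing, but only on Linux/macOS:
# !pip install fredapi > /dev/null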
I am also getting this error, did you manage to solve it?
I can't see the doc information when I press Shift+Tab or hover over the fred object. How can I fix this?
Hey Rob! A bit unrelated, and I'm sure you get this question a ton, but how do you start out with Kaggle? I did the introductory and intermediate ML tutorials on Kaggle, but EDAs, non-tabular data, feature engineering, etc. seem incredibly unapproachable, and I'm not sure how I should go about learning them. In general, how should I go about improving my Kaggle skills? And how do Kaggle skills translate to real world ML jobs?
Great vid !!!
I appreciate that!
This is fascinating! When do you decide which tool to use? Your work here is like 10 minutes in Excel.
Enjoyed it. Thank you.
Thanks for sharing. I only had problems with the state names, because for me they appeared as numbers (0, 1, 2, 3...), and I couldn't manipulate them from the "title.replace" function onward. Thanks again.
That's interesting. Maybe the FREDapi changed? I'll have to look into it. There must be some mapping. Can you share a notebook with the issue?
You could have used enumerate to index the loop while counting the states. But either way, I loved it. It's been almost 12 years since I worked on financial data and I am now eager to get back. Drop me a line privately and maybe we can collaborate.
You are correct. I'm a noob when it comes to using enumerate sometimes, and I even have a video about it!
@@robmulla It's fun, maybe I will share a colab notebook with deep learning and shat that with you. I hate making presentations. And tutorials. I hate autocorrect. It always changes scientific or engineering terms. Sorry for writing colander instead of colab. I edited it as soon as I noticed.
Share and shat seem to be the same on iOS. Makes life hell.
Thank you. I particularly enjoyed the last big matrix plot you made, very useful. A little curious, and as a continuation of what you did: would it be possible to share y scales, both primary and secondary y axes, throughout all the plots you made? I noticed you used the option sharex=True, but one can't use e.g. sharey on the secondary y axis as far as I understand. Any quick solution to achieve that? Relevant if one wants to easily compare across the states at the same time (my reason for asking).
Hi Medallion. Is this the Visual Studio or Jupyter notebook application you're using?
Thanks for the question George. In this video I'm working in a Kaggle notebook. I have a different video on Jupyter notebooks that discusses the differences between this and a Jupyter notebook. Also, I have a link to this notebook and code in the video description.
4:14 it should be "pd.set_option('display.max_columns', 500)"
Are you using a Jupyter notebook, sir?
Yes I am! I have a whole video about it.
what's the difference between the filter variable in fred.search() and df.query()?
One is filtering after the data is pulled from FRED. The filter in search I believe is run on the server side.
@@robmulla thank you sir
Very nice, what about comparing state unemployment to national rate? And maybe some forecast?
Thanks. Good ideas. I’ll see what I can do.
Very helpful.
Thank you for watching!
Many Thanks!!
I ran it all on Kaggle perfectly until I tried plotting with plotly [ px.line(uemp_states) ]. I got this ValueError [ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().]. Any idea why, and what should I try?
Thanks for the feedback. The notebook on kaggle actually needs to be updated for a few API changes. Someone put a comment on the notebook with the changes they made to get it working. I'll try to update it with those changes.
I updated the notebook and the latest version fixed the issues with the data causing it to not run end-to-end. It looks like they may have removed April 2020 from the unemployment dataset. Not sure why. Good luck.
@@robmulla works well! Thank you and keep the great content coming!
!pip install fredapi is showing an error:
ERROR: Could not find a version that satisfies the requirement fredapi (from versions: none)
ERROR: No matching distribution found for fredapi
Thanks for an awesome video! How do you comment multiple lines in one go?
Thanks for the feedback. I'm not sure what you mean by "in one go", but you can add multiline comments with triple quotes """. Or if you mean splitting lines of code, you can add a backslash like this \
Hope that helps.
@@robmulla Thank you very much for the information! I finally got an answer from Kaggle: Ctrl + /
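Putting the answers from this thread side by side in one small sketch (Ctrl + / toggles '#' comments on the selected lines in Jupyter/Kaggle):

"""
A triple-quoted string like this is often used as a multi-line,
comment-style note inside a notebook cell.
"""

# A backslash continues one logical line of code across physical lines
total = 1 + 2 + \
        3 + 4

# Ctrl + / in Jupyter/Kaggle toggles '#' comments on the selected lines
print(total)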
May I know what IDE are you using, please? Thank you.
I'm using a Jupyter notebook. Check out my video tutorial on it; I go into a lot of detail.
When I try to install the FRED API on Kaggle it just doesn't work. Any ideas?
This is an awesome video. Would be really cool to see the trend of stock market with rising interest rates.
Thanks Jackson. That's a great idea to look at stocks vs interest rate. I'd need to check if that's available through FRED or maybe I'd need to pull it from somewhere else.
GOAT
GOAT = Guy On API for Time-series?
Hello Rob,
when I run the code: fred.search('S&P')
then I am getting a ValueError. I've tried many ways but can't solve the error. Please help:
ValueError: Bad Request. The value for variable api_key is not a 32 character alpha-numeric lower-case string.
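That error usually means the Fred object was created without a valid key. A minimal sketch of the setup (the key below is only a placeholder; paste the 32-character key from your FRED account):

from fredapi import Fred

# Replace the placeholder with the real 32-character key from your FRED account
fred = Fred(api_key="abcdefabcdefabcdefabcdefabcdefab")

sp_search = fred.search("S&P")
print(sp_search.head())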
Hey man, love your tuts. Question: I have already researched and tried all kinds of code to display the full data of columns and rows, and nothing works. I'm working in VS Code (Jupyter notebook), and I get this error: "AttributeError: 'DataFrame' object has no attribute 'set_option'". I'm annoyed.
Hi there, when I do fred.search('S&P'), I get the error: 'xml.etree.ElementTree.Element' object has no attribute 'getchildren'. How do I solve this?
I had the same one. If anyone could solve it...
I love your videos
Thanks for watching!
Hi Rob, I was not aware of your video before I extracted FRED data. Note that S&P 500, NASDAQ, DJIA, and copper prices are unreliable in the 1980s and until much later. I would appreciate any other data source.
Oh. I didn’t realize that FRED data was unreliable prior to the 1980s. You could maybe try yahoo finance? I’m curious, why are you interested in such old financial data?
@@robmulla For dynamic modeling
Hey, I tried running the same code on Google Colaboratory and it shows an error after about every 3-4 blocks of code. Is there any way I can prevent this -- any packages I need to install? Or is it just advisable to download Python for DS? I'm also looking for safe websites where I can download it from!
Hey Varun. Sorry to hear you are having problems running it. Did you create your own API key from the website and change the code to take your key? It won’t work unless you do that.