You beautiful man, saved my grade tonight...
Glad I could help!
This is so amazing. Thanks, Daniel!
These are great, Daniel!
Thanks! I'll try to get back and add more videos in this series soon...
Do you have a video about posting data using libcurl please?
How about this? th-cam.com/video/9KqnXsSxqGA/w-d-xo.html
It was great! Do you have any videos for curl multi perform?
I'm afraid not. I got a little side-tracked from my video tutorials project, but I have for example this one planned since a long time back: bagder.github.io/libcurl-video-tutorials/to-multi/ (though I don't know when I'll get around to making that one)
Otherwise maybe reading up in the Everything curl book is a decent start? everything.curl.dev/libcurl/drive/multi
After working with memory-safe languages with automatic garbage collection for a long time, this is giving me chills. How does curl decide each chunk's size? In the case of a simple text response (e.g. HTML).
libcurl has an internal receive buffer of a certain size (and that size can be changed with the API). That buffer is used when libcurl receives data off the network, and as soon as any amount of data has arrived there, it calls the callback. So there's no added latency caused by libcurl. In a high-speed network situation, the receive buffer will be full in every callback.
Thank you for the tutorial! I have a question. It is not strictly related to curl but the answer can be useful to others as well.
Following your tutorials, I could receive data from a JSON API. I receive a lot of text, which is good, but it is only text on display or in memory. I am specifically interested in certain values in that text and I would like to put them into variables, integers in my case. The received text looks like this: {"timestamp": 1576700596, "position": {"longitude": "-103.8977", "latitude": "46.5570"}, "message": "success"}
How can I deal with this and get the values in usable variable form?
Thank you for your help!
I meant double instead of integer. I actually have an idea how to do this but it seems a bit complicated.
I'm glad you've managed to get your data downloaded. Yes, that step "only" gets it put into a local buffer and you then need to do something useful with the data. In your case you have JSON, so you want a JSON parser library that you can feed this data and then extract specific JSON properties from.
I have plans for a separate episode where I show how to do something like that; i.e. pass the downloaded data into another separate library that can handle the content. libcurl just delivers the "raw" data; it doesn't do anything with it. We need to add that logic in our applications.
@DanielStenberg Thanks for this very knowledgeable video tutorial. That is a good idea, to "pass the downloaded data into another separate library that can handle the content". Could you please make that episode as well? It would be very helpful. Please.
Nice basics of libcurl. Can you make a tutorial about web scraping using libcurl? With examples of scraping the Amazon website for details such as prices, pictures, etc.?
While I think that could be fun, I don't see myself doing that kind of presentation or video any time soon. I'm simply already loaded up with work on more on-target curl-related stuff.
@DanielStenberg I'm just so glad you haven't recommended using Python for web scraping lol 😂
What does it mean if we receive 0 bytes in the callback?
That would be a good sign that there's no more data coming from that transfer.