Phillip in the Cloud
United States
Joined Sep 25, 2006
Code and puns, possibly impressions. That is all.
I currently stream mostly about the ibis project. Check it out at docs.ibis-project.org!
GitHub: github.com/cpcloud
Caching Ibis Expressions for Lightning Fast Analytics
In this video I show how to use the cache method for ibis table expressions, new in 5.0, to avoid repeating computations that are expensive to run but cheap to store during interactive analysis.
Check out the ibis project at ibis-project.org
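To make that concrete, here is a minimal sketch of the cache() workflow (ibis 5.0+ with the DuckDB backend); the file and column names are made up:

    import ibis
    from ibis import _

    ibis.options.interactive = True
    con = ibis.duckdb.connect()

    # Hypothetical dataset; any expensive-to-compute expression works the same way.
    t = con.read_parquet("big_events.parquet")
    expensive = (
        t.filter(_.status == "ok")
         .group_by("user_id")
         .aggregate(total=_.amount.sum())
    )

    cached = expensive.cache()                   # computed once and materialized by the backend
    cached.order_by(_.total.desc()).limit(10)    # later queries reuse the cached result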
Views: 388
Why create an intermediary/broker in the form of ibis when we can do just as well with DuckDB directly?
Hi Phillip, is there a video on installing ibis?
Oh man, biggest respect. Having to figure out all the commonalities and special cases for different DBs, map everything in code, and maintain that code as the different products evolve… I can imagine how challenging it is ❤ Amazing work
this was a great video, thanks for creating it!
Good night! I'm creating the convert function for SQL Server. My question is how to define a variable return type for dtype; I tried dt.Any but it doesn't work, e.g.:

    class Convert(Value):
        arg: Value[dt.Any, ds.Any]
        data_type: Value[dt.String, ds.Any]
        style: Value[dt.Integer, ds.Any] | None = None

        dtype = dt.Any
        shape = rlz.shape_like('arg')
You need to import ibis.expr.datatypes as dt!
@@cpcloud Ha, thank you! I had made the import; since the convert function's output can vary, I solved it like this:

    class Convert(Value):
        arg: Value[dt.Any, ds.Any]
        data_type: Value[dt.String, ds.Any]
        style: Value[dt.Integer, ds.Any] | None = None

        shape = rlz.shape_like('arg')

        @property
        def dtype(self) -> dt.DataType:
            key = self.data_type.value
            return _from_type_dtype[key.upper()]
Nice. How did you manage to configure ipython to produce tables in that pretty way?
We're using a library called rich for this!
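For anyone wondering how to turn that on themselves, a tiny sketch (ibis 5.0+; the example dataset ships with ibis, and rich does the rendering under the hood):

    import ibis

    # Interactive mode makes expressions print as rich-rendered tables in IPython.
    ibis.options.interactive = True

    t = ibis.examples.penguins.fetch()   # small built-in example dataset
    t.head()                             # prints a nicely formatted table instead of a plain repr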
Are you using Jupyter Notebook in your terminal? How does it work? Or is it vscode?
It's just regular IPython in the terminal, no jupyter, no vscode
is this a python repl? what are you running python in the video?
I use IPython and it's my daily driver, I love it!
@@cpcloud as in ipython notebooks?
No, you can pip install ipython. Then run ipython
How does it compare to Polars?
Good question! Perhaps a Polars comparison is in order!
Stay tuned, a blog post about this is on the way!
@@cpcloud awesome, looking forward to it
Great video. Subbed! Curious what you use for your terminal to get the nice colors?
I'm using Alacritty and starship!
2.8 times slower on execution, but you shouldn't forget load times and memory usage (though you looked at that already).
What load times are you referring to?
df = t.execute(), where pandas loaded the DB into RAM. Or am I misunderstanding what pandas/ibis did there? Around 14:00.
Pandas loads the entire dataset into memory. DuckDB operates in a streaming fashion. If you have enough RAM, it might behave similarly with respect to memory as pandas. If you're RAM-limited, then it will work, whereas pandas will simply fail.
Well yes, but loading it into RAM takes additional time, so pandas is even slower. Sure, if you execute many operations in pandas the initial overhead of loading the data into memory gets smaller per operation, but not every script does that many operations.
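To make the trade-off concrete, a small illustration reusing a table expression t from the video (column names are made up): only the final execute() materializes rows in pandas, so keeping the heavy work in DuckDB keeps the pandas footprint small.

    from ibis import _

    agg = t.group_by("user_id").aggregate(total=_.amount.sum())

    small_df = agg.execute()   # only the aggregated rows are converted to a pandas DataFrame
    full_df = t.execute()      # by contrast, this pulls the entire table into RAM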
Hi Phillip, nice tutorial. It would help a lot if you could make a tutorial on reading an S3 file and caching it with ibis, including all the cache-related config. I have tried to read S3 files and cache them using fsspec, but I don't know the proper way to do it with ibis and DuckDB. TIA
Great idea! We'll consider a cloud storage blog post soon on ibis-project.org/posts
@@cpcloud Thanks, please let me know once you've posted it, that would help a lot. Thanks again.
Hi @Phillip, is there any way to read large parquet files from S3 and cache them, e.g. hold only the last 2 hours of data from those files as a cache, or some similar approach? TIA
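Until a post lands, a hedged sketch of one way to do this with the DuckDB backend. The bucket path is a placeholder, AWS credentials are assumed to be available to DuckDB's httpfs extension, and note that cache() does not expire entries on a timer: the filter limits what gets cached, and you would re-run it when you want fresher data.

    import ibis

    con = ibis.duckdb.connect()
    con.raw_sql("INSTALL httpfs; LOAD httpfs;")   # enables s3:// paths in DuckDB

    t = con.read_parquet("s3://my-bucket/events/*.parquet")
    recent = t.filter(t.event_time > ibis.now() - ibis.interval(hours=2))
    cached = recent.cache()                       # materialize just the last-2-hours slice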
What REPL are you using to get those completions?
I am using IPython!
Hi, this is a really good package. I have been following your videos, but I wasn't able to install `ibis-bigquery` in Anaconda, so I wanted to nest/implode (the reverse of unnest) the data before trying the unnest() feature. I know how to implode the data in Polars, but I am not sure how to do it in ibis or ibis-polars.
The implode functionality is a method called collect(): ibis-project.org/reference/expression-generic#ibis.expr.types.generic.Value.collect
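A small sketch of collect() as the implode counterpart of unnest(); the data here is made up:

    import ibis

    t = ibis.memtable({"g": ["a", "a", "b"], "x": [1, 2, 3]})
    imploded = t.group_by("g").aggregate(xs=t.x.collect())    # one array of x values per group
    exploded = imploded.select("g", x=imploded.xs.unnest())   # unnest undoes the collect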
This video was very useful in helping me wrangle a dataset by mixing selectors -> pivot_longer -> pivot_wider. Doing it with pandas would have required me to break it up into multiple dataframes.
Glad you like it!
Unnest() doesn't work when you have an array of structs. Which version of ibis are you using?
I don't remember! But we just released 7.0, give it a try. Unnest works with even more backends now!
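For reference, a hedged sketch of unnesting an array-of-structs column with a recent ibis and the DuckDB backend; the data is built with pyarrow so the nested type is explicit, and the column names are made up:

    import pyarrow as pa
    import ibis

    tbl = pa.table({
        "id": [1, 2],
        "events": [
            [{"kind": "a", "n": 1}],
            [{"kind": "b", "n": 2}, {"kind": "c", "n": 3}],
        ],
    })
    t = ibis.memtable(tbl)
    flat = t.select("id", event=t.events.unnest())                      # one row per struct
    wide = flat.select("id", kind=flat.event.kind, n=flat.event["n"])   # pull struct fields out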
My guy didn't even know the nlargest/nsmallest functions. It's kinda clear he doesn't know pandas that well, and it hurts the video's credibility. Try doing a little homework! Not coming back to this channel unless the dude shows he knows what he is talking about; there are too many other actually informed people on YouTube to waste a 25-min video with this dude.
Feel free to do a video to correct my mistakes and link it here!
Very useful… More videos on pyarrow + duckdb + polars would be nice, thank you.
More to come!
awesome
Thanks for watching!
Excellent. Thanks for sharing.
Glad you enjoyed it!
Reminds me of Slick for Scala.
Nice, hadn't heard of that. Thanks!
@@cpcloud I forgot to say, excellent video though. Subscribed!
great content! If possible, would you be able to do a demo using MSSQL?
For sure! It'll look pretty similar to other things, which is the whole point!
I'm looking to use ClickHouse as persistent storage and Polars with ConnectorX as an interface. I'd like to use the pandas-style API of ibis, but how can that fit in here?
How about using ibis with the ClickHouse backend and forgoing Polars and ConnectorX altogether?
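A minimal sketch of that suggestion, assuming a reachable ClickHouse server; host, credentials, and table name are placeholders:

    import ibis

    con = ibis.clickhouse.connect(
        host="localhost", user="default", password="", database="default"
    )
    trips = con.table("trips")
    summary = trips.group_by("payment_type").aggregate(n=trips.count())
    summary.execute()   # returns a pandas DataFrame, no Polars or ConnectorX required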
I have a suggestion: it would be best to move the line being recorded in the video up a little, three to five lines will do. When I pause the video, the player's progress bar covers exactly the code being discussed.
Great suggestion!
You can also learn DuckDB from CS Peaks: th-cam.com/video/bzess7_pKoc/w-d-xo.html
Where can we find the Suba (sp?) library that you mention at the start of the video?
Where did you post the links to the projects to which you referred in the video? I'm watching this in Firefox on my phone and don't see the links in the video description. Did the live stream have a chat window?
Siuba: github.com/machow/siuba
You mentioned near the start of this video that you are using IPython as your shell.
Yep, IPython is great!
If I recall correctly DuckDB 0.8 moves NULL values to the bottom of a query result.
That sounds right!
What shell do you use to display that "counter" prompt and the tables? It resembles a Jupyter notebook, but in a terminal.
Regular IPython! Nothing super fancy!
I am unsuccessfully trying to apply this technique to a BigQuery table. Should it work with BQ?
You should be able to use a bigquery table as part of the join, but it won't work as the primary execution engine that ties everything together: only DuckDB is supported for that. Please open an issue on the ibis issue tracker on GitHub if you're still having trouble!
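One hedged way to wire that up (not necessarily what the video does): materialize the BigQuery table locally and let DuckDB drive the join. The project, dataset, file, and column names are all placeholders.

    import ibis

    bq = ibis.bigquery.connect(project_id="my-project", dataset_id="analytics")
    remote = bq.table("events").to_pyarrow()       # pull the remote rows into memory

    con = ibis.duckdb.connect()
    events = con.create_table("events", remote)    # register the rows with DuckDB
    dims = con.read_parquet("dims.parquet")
    joined = events.join(dims, "event_id")         # DuckDB executes the join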
Hey Phillip, thanks a lot for taking the time to create this video. Very helpful. Looking forward to seeing more interesting stuff on ibis. Thanks, Manoj
Thanks for watching!
Does this approach also work using the 'mssql' backend ?
You can certainly use MSSQL with ibis, but the examples are designed to work with duckdb because it's easy to set up and get started with.
@@cpcloud thanks
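For reference, a hedged sketch of connecting with the mssql backend; host, credentials, database, and table names are placeholders, and the appropriate SQL Server driver needs to be installed:

    import ibis

    con = ibis.mssql.connect(
        host="localhost",
        user="sa",
        password="...",
        database="mydb",
    )
    t = con.table("sales")
    t.group_by("region").aggregate(total=t.amount.sum())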
Thanks for this tutorial! Curious how to display your camera to screencast?
Not sure what you mean!
What ide/text editor setup is this?
Neovim with a bunch of plugins!
This work is similar to the Python Polars example.
Totally! That's the idea, we're reproducing their example with ibis.
what about duckdb?
Good question! What do you mean?
Impressive results. You may want to zoom in on the bottom left of the terminal window next time, for ease of watching this on a phone.
Thanks for the tip!
Interesting how struct.unpack returns a tuple; that's not immediately obvious to me! In any case, I understand 'b' is 8-bit, 'H' is 16-bit, and 'i' is 32-bit, and upper/lowercase just sets unsigned/signed. But what about 24-bit? For example, many ADCs output 24-bit signed data, so if I save a file with 24-bit data I've captured, can struct.unpack automatically treat every 3 bytes as a signed 32-bit integer (sign-extending and adding an extra byte automatically)? Also, given that struct.unpack requires a byte array, is it possible to pass it an array of c_ubytes? I have a CAN-bus structure that has a data array element:

    class TPCANMsg(Structure):
        _fields_ = [
            ("ID", c_uint),                  # 11/29-bit message identifier
            ("MSGTYPE", TPCANMessageType),   # Type of the message
            ("LEN", c_ubyte),                # Data Length Code of the message (0..8)
            ("DATA", c_ubyte * 8),           # Data of the message (DATA[0]..DATA[7])
        ]

It'd be nice to do something like struct.unpack('>h', msgval.DATA[0:4]), but that wouldn't work since DATA isn't a bytes object that can be sliced and passed in directly.
The struct unpack I am showing here isn't related to the standard library struct module!
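Since the question itself is about the standard library rather than the video, a short sketch: struct has no 24-bit format code, but int.from_bytes handles arbitrary widths and signedness, and a ctypes array can be converted to bytes before unpacking. The msgval name refers to the commenter's TPCANMsg instance above.

    import struct

    raw = b"\xff\x00\x01"                              # hypothetical 3-byte big-endian sample
    value = int.from_bytes(raw, "big", signed=True)    # sign-extends, no padding byte needed

    # A ctypes array can be turned into bytes before slicing/unpacking:
    # data = bytes(msgval.DATA)
    # first_word = struct.unpack(">H", data[0:2])[0]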
Video intro starts at 3:30
Thanks Tim! We'll edit the video!
Edited!
That's an incredible ibis video, thank you so much!
Glad you like it!