Excellent video, dude. I'm excited to try both of these out!
thanks so much!!! your videos are a continuous source of inspiration for me...
Hey dreamsofcode, will you do a similar video on using Go web apps with Cockroach and Yugabyte as well?
Being able to use SQLx is AWESOME!!
Agree!
What an excellent video! We’re delighted to see that you had such a great experience with CockroachDB.
thank you! nice work creating a killer product!
This is really nice, you can start with vanilla postgres and scale it later if needed with these!
exactly! pretty powerful capability.
At first when you said use both, I was like, that's a crazy idea. Then I thought about it more, and it's actually a good idea, especially during the exploratory stage or with a side project.
right? I think most would dismiss it right off the bat, but concrete real-world data to support your ultimate choice seems like it would help everyone sleep better at night, knowing the right decision was made.
Great breakdown!
thanks Bogdan!!
Oracle has had Flashback as its time travel functionality for ages, so it isn't a groundbreaking new feature. But it is nice to see other DBs implementing something similar.
oh interesting I didn't know about this, thanks for pointing it out!
the way you present information is just perfect! i love learning from your videos and they're easy to follow
thank you so much for the kind words!
When making your selection, pay close attention to which features you want to use. Neither of them has 100% feature parity with standard Postgres. YugabyteDB tends to have more parity; CockroachDB, for example, does not support triggers.
great point! yeah it seems like YugabyteDB is the leader in terms of parity with Postgres features. I didn't realize CockroachDB doesn't support triggers!
@@codetothemoon To be fair, I should mention that during the AWS Summit Berlin 2024, they explained to me that there are good reasons why CockroachDB doesn’t support triggers. Triggers don’t really perform or scale well in large distributed systems. However, they do plan to support this feature in the future.
That being said, I personally wouldn’t put too much business logic in the database anyway. It’s often better to handle it in the service layer, where you can use asynchronous message queues, which scale much better.
Thanks for this video. Would love to see a review on Neon DB as well!
thanks for watching! I'd like to check out Neon as well!
I like a comparison between these databases and the MariaDB cluster at my old job.
nice, glad you liked it!!
Very good video! SurrealDB has been on my radar for some time and I would like to make a real project using it.
thank you, glad you liked it! SurrealDB seems great so far in my brief usage. Definitely worth keeping an eye on.
CockroachDB is not open source, so if all else is equal, I'd pick the FOSS one
if all else is equal I'd agree, but I think for most projects other differences between these two databases may come into play as well
Completely. Would even rather use a FOSS db that had fewer features than anything closed.
Would you please do a video on ORMs? I'm currently trying to use Ormlite, but now I'm wondering if it's actually usable at the moment or if I should switch to SeaORM
Thanks for the suggestion, I will put this on my video todo list! I'd love to do a deep dive on SeaORM at some point.
You want to try Diesel ORM first. I went through a comprehensive evaluation of all the Rust ORMs and eventually settled on Diesel because of its outstanding performance, its auto-migration feature, and its massive testing performance thanks to its support for parallel integration tests. When you see your CI time drop tenfold, you know you have a clear winner. I drafted an end-to-end example that walks you through the entire Diesel adoption process, including integration testing, which is currently under review in PR #4169 (Added custom array example with documentation) in the Diesel GitHub repo, so if Diesel interests you, that PR and the linked documentation are a good starting point.
All the other ORMs and Postgres crates I have tested fell short, some more than others, but in my experience a good feature set, good performance, and great testing are what I am looking for, as I had so many DB schemas to migrate and then things simply have to work.
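For anyone curious what the basics look like, here is a minimal sketch of a Diesel query (assuming Diesel 2.x with the postgres feature; the `entries` table and its columns are made up for illustration, not from the video):

```rust
// Minimal Diesel sketch against a hypothetical `entries` table.
use diesel::pg::PgConnection;
use diesel::prelude::*;

diesel::table! {
    entries (id) {
        id -> Int4,
        note -> Text,
    }
}

#[derive(Queryable, Debug)]
struct Entry {
    id: i32,
    note: String,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let database_url = std::env::var("DATABASE_URL")?;
    let mut conn = PgConnection::establish(&database_url)?;

    // Queries are checked at compile time against the table! definition,
    // so no live database connection is needed during the build.
    let results: Vec<Entry> = entries::table
        .filter(entries::note.like("%postgres%"))
        .load(&mut conn)?;

    println!("{results:?}");
    Ok(())
}
```

(In a real project the table! block is generated for you by the Diesel CLI rather than written by hand.)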
I've been looking into Postgres with Citus as a way to get scalable Postgres, even in self-hosted environments. Might be a good video idea, if you have the fortitude to set it up!
hadn't actually heard of Citus, thanks for putting it on my radar!
@@codetothemoon Citus is now owned by Microsoft, which seems to be increasing investment in the development of Citus and the marketing of the hosted cloud Citus service.
TIL I don't need to load the UUID extension explicitly in recent versions of PG :D
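For example, something like this just works on Postgres 13+, where gen_random_uuid() is built in (table name made up):

```rust
// Sketch only: creates a hypothetical table whose primary key defaults to a
// UUID generated by Postgres itself, no CREATE EXTENSION required on PG 13+.
use sqlx::PgPool;

async fn create_sessions_table(pool: &PgPool) -> Result<(), sqlx::Error> {
    sqlx::query(
        "CREATE TABLE IF NOT EXISTS sessions (
             id   UUID PRIMARY KEY DEFAULT gen_random_uuid(),
             note TEXT NOT NULL
         )",
    )
    .execute(pool)
    .await?;
    Ok(())
}
```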
Glad you learned that, but certainly hoping that wasn't your only takeaway! 😎
What's the app for the SQL schema drawings?
eraser.io !! Great tool
Neon is another Postgres DB that I am rooting for
it looks interesting, I've been curious about it!
For me, at least, Neon makes the entire process of spinning up a database and using it extremely simple. I also get to check my tables and optimize them as needed from their dashboard.
I haven't used it much beyond that though
For basic queries and simple DB designs, the SQLx crate works quite nicely. But if you need performant or more complex queries, then I would look elsewhere.
There are issues with enums, flattening, prefixes, type serialization, performance ... not to mention each new version breaks code and the CLI (expected, but annoying).
thanks for the tip! do these issues occur only when leveraging SQLx's automatic deserialization, or will they also occur when just running a hardcoded query and sifting through the `PgRow` results?
SQLx is a nightmare to work with, and so is SeaORM, which, I believe, is built atop SQLx. They have had multiple issues open for the things you mentioned, for years already, and somehow cannot fix them. I don't know, but SQLx failed to pass the first round during my initial testing. The biggest problem I would add to the list is that they do not support custom array types in Postgres, which have been around forever. I mean, they try, they really do, but somehow SQLx and SeaORM are just not up to common production standards, and that isn't great either.
@@marvin_hansen but with all that said, you should help us out and list some alternatives that are better and will work reliably 99% of the time.
So the CLI is actually broken? Is that why my migration isn't creating any tables...
@@codetothemoon SQLx uses serde and a few traits for serialization and deserialization, same as everything in Rust. So you can get around some of the limitations by implementing them yourself, it's just a pain to do so (you will use PgRow here, and probably a custom macro). You can get around flattening and prefixes with this. But enums, types, and performance are baked into SQLx, so you would need to create a fork that suits your needs and keep it updated.
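For anyone who wants to go that route, a rough sketch of implementing FromRow by hand over a PgRow (the struct and column names here are hypothetical):

```rust
// Manual row mapping with SQLx: skips the derive entirely and reads the
// columns yourself, which is one way around enum/serialization pain points.
use sqlx::postgres::PgRow;
use sqlx::{FromRow, Row};

struct Account {
    id: i64,
    status: String, // read the enum column as plain text and convert it later
}

impl<'r> FromRow<'r, PgRow> for Account {
    fn from_row(row: &'r PgRow) -> Result<Self, sqlx::Error> {
        Ok(Account {
            id: row.try_get("id")?,
            status: row.try_get("status")?,
        })
    }
}
```

Then `sqlx::query_as::<_, Account>("SELECT id, status::text AS status FROM accounts")` works at runtime, with the enum cast to text on the SQL side.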
this was a fun watch
thanks, really happy you liked it!
Topical! I'm trialling Yugabyte for a side project, which I leant towards because it's slightly more open and because of the neat tablespace location tricks. I'd like to use sqlx, but I'm struggling to use the cluster-aware Yugabyte drivers with it. Always-on SPAs called from serverless functions, backed by such global databases, should be within reach soon with this stack.
hey bro I am a total noob at Rust and also with strongly typed languages, can you make a series of videos for guys like me who come from a JavaScript background and want to start coding in Rust? thanks for the video!
Great idea! I’ve put it on my video idea list!
Isn't the usage of the raw functions `sqlx::query/sqlx::query_as` discouraged? The primary usage pattern of `sqlx` is via the `sqlx::query!()/sqlx::query_as!()` macros, which validate SQL at compile time and also make sure all the types match. Do I understand correctly that you use the raw functions here because the macros just don't support these next-gen SQL DBs?
UPD: aha, you used the macro syntax for INSERT, but didn't do that for SELECT. Anyway, I thought the macro syntax wasn't supported with the next-gen SQL DBs
I believe the decision to use the macros or functions is completely up to the developer. The former gets the compile time validation, the latter does not. Different situations and use cases might favor one over the other. But both can be used with these next-gen databases 😎
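Here's a rough sketch of both styles side by side (the table and columns are made up, not the ones from the video):

```rust
use sqlx::PgPool;

#[derive(sqlx::FromRow)]
struct Entry {
    id: i64,
    note: String,
}

async fn demo(pool: &PgPool) -> Result<(), sqlx::Error> {
    // Macro form: the SQL and result types are checked at compile time,
    // which needs a reachable database (or prepared offline data) at build time.
    let checked = sqlx::query_as!(Entry, "SELECT id, note FROM entries")
        .fetch_all(pool)
        .await?;

    // Function form: no compile-time validation, just a plain runtime query.
    let unchecked: Vec<Entry> = sqlx::query_as("SELECT id, note FROM entries")
        .fetch_all(pool)
        .await?;

    println!("{} vs {}", checked.len(), unchecked.len());
    Ok(())
}
```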
Makes sense, since you mentioned they are Postgres-compatible, there should be no problem for sqlx to handle them. Pretty neat!
surrealdb gonna win 🎉🎉🎉
maybe! really looking forward to their hosted offering
@@codetothemoon yeah same here, you can also look at their public beta of version 2.0, which they released today
They need to work on their Golang API
@@rhysmuir It's FOSS, I'm sure they'd love to receive PRs for it!
Cockroach 🪳 already won brother, they are Deno - Surreal is Bun.
Neon is also interesting
agree, looking forward to checking it out!
what about migrations with SQLx? I like the idea, but I don't like to write migrations)
i actually cover this in the video! totally optional though, you can use SQLx without using its migrations
I dislike the need to have a running DB for sqlx to validate queries. I'll stick to diesel (which has better performance too).
the compile time type checking is opt-in - you don't have to use it. to build queries without it, just use sqlx::query or sqlx::query_as (the non-macro versions)
@@codetothemoon True, but I use Rust to have compile-time checks, which either require a DB connection during the build or a schema definition like with Diesel. I would prefer writing SQL queries instead of Diesel queries, though. Maybe in the future things will get better.
I like your content by the way :) Thank you for your work.
Hopefully surrealdb gets the pricing right.
agree!
Yugabyte has a better license
good point! In this video I was comparing them more in the context of using the cloud services, but the details of the license will definitely come into play if you are setting up your own cluster!
onlyfans model be watching bros videos
Yes the models appear to really love SQL 🤣
@@codetothemoon well, you could remove them and report them?
Either he has the secret sauce or the bots are swarming
I feel like an intro is missing. Jumping straight to the point is good, but not even telling us what the video is about is a bit too much.
I personally despise intros when I'm watching TH-cam videos, so I like to get right to the point 😎
unbelievable idea!
thank you!
is it just me or does the audio and video seem just slightly out of sync?
it's definitely possible, as the audio and video are recorded separately and synced later. but I didn't notice this myself in editing. thanks for pointing it out, I'll have another look!
Thanks for a great video!
I have an idea on how to be able to use query_as!(): th-cam.com/video/QdGiOMInegM/w-d-xo.html
I think it would probably be because you store a DateTime in your struct and your DB seems to store a NaiveTime, right?
So since you already have the tz_offset, set entry.dt to a NaiveDateTime (that you know is UTC) and store the tz_offset,
or in Postgres I would have the field as a timezone-aware column.
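If it helps, a minimal sketch of the timezone-aware variant (assuming SQLx with the chrono feature and a hypothetical `entries` table with a TIMESTAMPTZ column, not the exact schema from the video):

```rust
use chrono::{DateTime, Utc};
use sqlx::PgPool;

struct Entry {
    id: i64,
    // A TIMESTAMPTZ column maps to DateTime<Utc>, so query_as! can type-check it.
    created_at: DateTime<Utc>,
}

async fn load_entries(pool: &PgPool) -> Result<Vec<Entry>, sqlx::Error> {
    sqlx::query_as!(
        Entry,
        "SELECT id, created_at FROM entries ORDER BY created_at DESC"
    )
    .fetch_all(pool)
    .await
}
```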