Get my Fundamentals of Database Engineering udemy course to learn more; the link redirects to Udemy with the coupon applied: database.husseinnasser.com
Hey, can you tell us how SQL indexes work on long text? We have text indexes in NoSQL (Mongo) which work well on long text and statements, but how can long text and statements be searched efficiently with SQL indexes?
Already bought it and enjoying it 😀
9:08 "5080"
A good example of how difficult it is to get good benchmarks. ID=5000 was fetched, but why was fetching 5080 so fast while fetching 7080 was slow again? Because PostgreSQL stores rows in pages, which are 8KB blocks of disk space. Hussein made rows of two integers and three(?) characters, so one 8KB block can hold about 1000 rows. When the database fetched the page that 5000 was in, that page was cached by the operating system (not the database), which inadvertently and instantly cached about 1000 rows around id=5000.
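If you want to verify this yourself, here is a small sketch (assuming the table from the video is called employees): the hidden ctid column exposes each row's (page, tuple) location, so ids that land on the same 8KB page show the same page number.

-- ctid is (page_number, tuple_index); neighbouring ids usually share a page,
-- so fetching one row warms the cache for roughly 1000 of its neighbours.
SELECT ctid, id FROM employees WHERE id IN (5000, 5080, 7080);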
14:04 "The primary key is usually stored with every single index"
I have never heard of that behavior. Primary keys are always indexed, but as far as I am aware they are never automatically added to every index you create. The index contains tuple and row information to enable a lookup, but not the PK.
I get the feeling you're seeing the ID rather quickly because the pages were already in cache from the previous queries.
And about LIKE not using an index: that's a good topic for a separate video: trigram indexes.
Thanks Vinny, this is very valuable! And yes, me pulling id 5000 and then 5080 also came back quickly because the OS cached the page.. Neat how databases work
Yeah, InnoDB I believe works this way: it stores the primary key alongside every index you create on other columns
@@hnasr It seems you are correct about InnoDB adding the PK to the index... wow...weird design choice. But then my opinion of MySQL has never been very high :-)
I was about to comment on the same thing explaining why the query with id = 5080 was faster than the one with id = 7080.
Also, about the page size: I think it's not fixed in every system. For example, I just checked on my Mac ($ getconf PAGESIZE) and it's 4KB.
What do you mean by page here? Also, if the primary key (or more generally, a reference to the record) is not stored with the indexed row, for example the name column here (which I assume is stored in a separate b-tree structure), how does the database find the actual record when I say "Select * from table where name='name';"?
Thanks
I was just wasting time on YouTube and suddenly your video popped up on my screen. Good information, well summarized. Thanks man! Take my like and keep uploading videos!!!
Thank you 😊 glad you enjoyed the content !
I have learned more from this video than I learned in a complete university semester
May God protect you ❣
Started today, My 15th video in a row. Thanks a lot man, getting all this knowledge for free is a blessing for us.
nice! thanks for commenting and take some rest and pick up some other time :)
all the best
Good stuff. I'm a DBA of about 20 years. I remember using PostgreSQL before the SQL interface was added. :D Anywho, you mentioned the primary key being stored with the data. It's actually the opposite. The data is stored or "clustered" with the primary key. It's the only index that exists with the data. All others are index lookups that reference the location of the data. Great explanation.
didn't know postgres primary index is clustered! thanks
Just finished my first proper PostgreSQL view that takes about 10 seconds, and I see Hussein Nasser upload this video. Coincidence? I think not...
Me: Man I'm really interested in (insert subject here). I wonder if there's a video on this.
*Hussein has entered the chat*
Man, I absolutely love your attitude and style of teaching. Deeply grateful for your content... thank you sir!
Normally I would never say anything about anyone's accent, but wow bro, your accent is perfect!
Thank you for creating so much valuable content on youtube like this and please keep doing it!
**Highlights**:
+ [00:00:00] **Introduction to database indexing**
* What is an index and why it is useful
* How indexes are built and stored
* Examples of index types: B-tree and LSM tree
+ [00:03:00] **Querying with and without indexes**
* How to use explain analyze to measure query performance
* How to compare the execution time and cost of different queries
* How to avoid full table scans and use index scans instead
+ [00:12:08] **Creating an index on a column**
* How to create a B-tree index on a name column
* How to use the index to speed up queries on the name column
* How to avoid going to the heap and use inline queries
+ [00:16:30] **Querying with expressions and wildcards**
* How expressions and wildcards prevent the use of indexes
* How to avoid using LIKE with percentage signs
* How to use hints to force the use of indexes
You are a gift from God for us backend developers.
Thank you so much! This was clear, concise and very helpful! More postgres tutorials plz!
Great video!
You talked about using a multicolumn index as a way to save time by not having to go to disk. It would be interesting to have a video showing the tradeoffs of this approach. It seems Postgres does not recommend using a multicolumn index except when really necessary.
You're orders of magnitude better than the TA for my DB course! Thank you very much for the explanation 😊
in this quick video I jumped into Database Flow world!!! really appreciate your work, bro
This was extremely helpful. One of those topics that is never really explained in detail!
Hi,
As far as I know, PostgreSQL includes the primary key along with the secondary index.
Now, I have a table - tbl_questions that has:
1. id - primary key
2. question_set_id - secondary index
I am using the query:
EXPLAIN ANALYZE SELECT * FROM tbl_questions WHERE question_set_id = 3 AND id > 50 LIMIT 10;
This query is doing an Index scan on question_set_id and then filtering out records where id > 50
Here's the output:
Limit (cost=0.14..7.89 rows=1 width=582) (actual time=0.009..0.009 rows=0 loops=1)
-> Index Scan using tbl_questions_question_set_id_idx on tbl_questions (cost=0.14..7.89 rows=1 width=582) (actual time=0.008..0.008 rows=0 loops=1)
Index Cond: (question_set_id = 3)
Filter: (id > 50)
Planning Time: 0.073 ms
Execution Time: 0.021 ms
(6 rows)
My question is, if the id is stored along with question_set_id, then why is the condition not like Index Cond: (question_set_id = 3) AND (id > 50)
I have tried switching the position for id and question_set_id in the query but still the same result.
However, when I created a composite index like below, it was working as expected:
1. id - primary key
2. question_set_id, id- secondary index
Here's the query and output:
EXPLAIN ANALYZE SELECT * FROM tbl_questions WHERE question_set_id = 5 AND id > 10 LIMIT 10;
Index Scan using tbl_questions_question_set_id_idx on tbl_questions (cost=0.14..8.01 rows=1 width=582) (actual time=0.009..0.009 rows=0 loops=1)
Index Cond: ((question_set_id = 5) AND (id > 10))
Planning Time: 0.074 ms
Execution Time: 0.021 ms
(5 rows)
It will be very helpful if you can clear this out or let me know if I am doing anything wrong.
Thanks
That's because only the InnoDB engine adds the primary key to every secondary index. I think you are not using InnoDB
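For PostgreSQL, here is a minimal sketch of the composite index the commenter ended up with (table and column names taken from the comment above; the index name is just illustrative). With both columns in one B-tree, the planner can use both predicates as index conditions instead of filtering rows after the scan.

CREATE INDEX tbl_questions_set_id_id_idx ON tbl_questions (question_set_id, id);
-- Both conditions should now appear under "Index Cond" in the plan.
EXPLAIN ANALYZE SELECT * FROM tbl_questions WHERE question_set_id = 5 AND id > 10 LIMIT 10;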
Love you my guy, most valuable tech content creator on YouTube.
Your explaining skills are just excellent.
I'd watched this video 6 months ago and understood nothing; now I'm watching it again while working on a project that needs indexing
Finally I understand
Thanks Hussein
But wait a little bit, how do you design your YouTube thumbnails?
Hello Hussein! Great video as always. This left me with some questions. How are we supposed to implement search functionality if "like" is not a good idea? Should we create as many indexes as possible, or should we create indexes on the most used fields? Thanks again.
Great question! This is something I didn't touch upon in the video: you can actually create an index based on the LIKE predicate. Some databases also support full text search capabilities in an efficient manner. And finally there are databases specialized in text-based search
Would you be able to make a video regarding that?
". How are we supposed to implement a search functionality if "like" is not a good idea?"
There is nothing inherently wrong with LIKE. Hussain's exaplne uses a BTREE index and that index type cannot search for wildcards at the beginning. Other index types such as Trigram indexes can do that.
Fulltext mostly won't help if you are really looking for substrins because they generally don't implement that.Searching is a whole different subject,but genereally speaking PostgreSQL's fulltext with Trigram indexes and a little bit for manual labour is more than sufficient. No need to jump to Lucene and the like unless you are doing very spcific work or at a large scale.
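A minimal pg_trgm sketch (assuming the employees table and name column from the video; the index name is illustrative). The GIN-over-trigrams index lets LIKE/ILIKE with a leading wildcard use an index instead of a sequential scan:

CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- gin_trgm_ops indexes the 3-character substrings of name.
CREATE INDEX employees_name_trgm_idx ON employees USING gin (name gin_trgm_ops);
-- No fixed prefix, yet this can now be answered with a bitmap index scan.
EXPLAIN ANALYZE SELECT id, name FROM employees WHERE name LIKE '%zs%';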
When you have multiple columns in the where clause, sometimes the index will not be hit. Looking forward to a second part of this video explaining best practices when there are multiple columns in the where clause: what kind of index and in what order should it be made? A separate index on each column or a combined index, and how will this impact write time?
Also, if we write a LIKE query as 'Zs%', will the index be hit?
Thank you so much, Hussein! This explanation is so incredible! After all, I'm asking if you can explain when an index is bad for the database (sparse tables, how much it can add to database size, etc.); that would be great, and I will be grateful!
You are a star, really appreciate it. Just one query, sir:
- Why slow: select name from employees where id=5000;
- Why fast: select id from employees where id=5000;
In both cases the primary key id is indexed and the query scans the index using the where clause on id. Is that because name is not present on the index? Correct me if I am wrong.
" Is that because name is not present on the index? "
Yes. If you select a field that is not present in the index the database must fetch the value for that field from the table (the heap) and that takes time.
BUT: if you just add the name to the index then the index becomes as large as the table, which can also have negative consequences.
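For reference, a covering-index sketch of "adding the name to the index" (PostgreSQL 11+; table and column names assumed from the video, index name illustrative). INCLUDE puts the extra column only in the leaf pages, so the query can become an index-only scan without the payload column being part of the key:

CREATE INDEX employees_id_incl_name_idx ON employees (id) INCLUDE (name);
-- Can now be answered from the index alone (provided the visibility map is up to date).
EXPLAIN ANALYZE SELECT name FROM employees WHERE id = 5000;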
@@vinny142 thanks sir for the answer
Nasser, your videos are really informative and they help me picture the topic. More power to you, God bless
appreciate you dear, thanks for your comment!
Thank you for the video, it was really helpful to watch. I am working with databases, and since you already spoke about ACID, indexing and pooling, another topic I'd be very interested in is views: how and when they are computed, their benefits over regular queries, and also materialized views, which I think are a great Postgres feature.
thank you for this! I love these videos that actually show how theory works in a concrete example!
That was a great video! Can you explain how exactly the b-tree looks when an index is created? I mean, what does a b-tree node contain? (Key, value, pointer to row id, etc.?)
Good idea for a video ☝️
@@hnasr I would really appreciate if you can do one. I have been trying to understand it from here: use-the-index-luke.com/sql/anatomy. Will wait for the video!
This was SO GOOD, congratulations, highest quality class
Thank you so much! very helpful :)
this channel deserve Million of subscriber
Hey Hussein, I hope this comment finds you. I had a question related to Fulltext Search (for MSSQL but I understand you have a preference for PostGres). I have an issue where if there is a search involving multiple CONTAINS calls, any subsequent search on one of the columns takes a really long time to execute. Oddly enough, after around 10 mins or so, it then becomes rapid. This is on a table with around 71k rows so not massively huge.
I wondered if it was something to do with indexes but it would be great to see a video on Fulltext. I can send you the query exactly how I have it
There are two types of inverted indexes you can use for fulltext search: GIN and GiST.
Each one has pros and cons.
GIN: it's good for lookups but not so good with inserts, deletes or updates.
GiST: it's the opposite (good for updates, not lookups).
If you use GIN on a table that changes often it will be slow, as it takes time to build/rebuild the index.
Whenever you run a query while the index is building, it won't use the index and will do a full table scan.
But once the index finishes building it will be used.
So that could be a reason for your problem:
using GIN on a table that changes so often causes the index to rebuild each time, so you can't use it until it finishes building.
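For context, a minimal PostgreSQL full-text sketch (the articles table here is hypothetical, just to show the shape of a GIN full-text setup):

CREATE TABLE articles (id serial PRIMARY KEY, body text);
-- GIN over the tsvector: fast lookups, more expensive to keep updated.
CREATE INDEX articles_body_fts_idx ON articles USING gin (to_tsvector('english', body));
-- The query must repeat the same expression for the index to be usable.
SELECT id FROM articles WHERE to_tsvector('english', body) @@ to_tsquery('english', 'index & scan');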
@@mohamedmohamedy3085 interesting, was not aware of gin/gist. Will look into that a bit more. I gave up with that search implementation and decided to stick with the current Azure cog search we had on the project 😂
One more important thing: if you change rows a lot or add rows etc., an index is a no-go.
Waiting for the next indexing video.
Man, your videos are great. Simple and exactly to the point.
good info... thanks...
very interested in how:
- how a database decides in which page to save a record
- how variable size rows are stored (a page can have 1 row... or 10 rows)
- what if a record is bigger than the page size?
keep up----
Which one is better: having different indexes for different columns, or having one index containing multiple columns? And can you give examples of which case calls for which option?
Thanks for making indexing simple for a layman.... It's complicated, but this video made it a baby job
Thanks for this video. Please also make a video on everything about SQL Query optimisation. I really need it soon, thanks!
Sure thing!
This video is really helpful. I have a few questions:
1. Can an insert query become faster with indexing?
2. After creating an index, if I insert some new rows, will the index cover those new rows? Do I need to refresh the table's data for the new rows or recreate the index?
Inserts on tables with indexes are “slower” than when no indexes exist. Reason being the additional work required to update indexes.
That being said it depends on rows inserted, nulls for instance are mostly not indexed so no cost there.
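A rough way to see that write overhead yourself, just a sketch (table name and row counts are arbitrary; absolute timings depend entirely on your hardware and settings):

\timing on
CREATE TABLE t (id int, name text);
-- Baseline: bulk insert into a table with no indexes.
INSERT INTO t SELECT g, md5(g::text) FROM generate_series(1, 1000000) g;
CREATE INDEX t_name_idx ON t (name);
-- Same volume again: now every row also has to be added to the B-tree, so this runs slower.
INSERT INTO t SELECT g, md5(g::text) FROM generate_series(1000001, 2000000) g;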
What can be the best way to execute my query faster if I have millions of records in a table and I need to perform select, delete and insert operations one by one in a single API call?
Currently I am using Postgres (in Python) without indexing and it's taking more than 3 minutes to finish the API call.
wow, exactly what i am looking for! awesome, Hussein.
Amazing video, glad to watch this and understand at first glance. I am surely gonna watch more videos to enhance my technical skill.
Thanks, that was a really clear intro. Can I make a suggestion? I know this is not supposed to be a formal education channel and you are reflecting on different backend topics, but can you have series on the same topic in one place? Even if you just touch on a topic, it would be more beneficial if we could see the different aspects related to it together..
Again, thanks for the intro
Clearly explained, your voice resembles Harsha Bhogle in 1.5x speed.
This is awesome, I'm looking to get better with PostgreSQL. I am wondering if you can do RLS policies with Postgres.
Yes, you can read about that in the manual: www.postgresql.org/docs/13/ddl-rowsecurity.html
Love this video. Really fascinating learning about the intuition behind indexes!
Thank you for clarifying concept database indexing.
I appreciate your insights, but I'm still a bit puzzled about a few things.
While I understand that traditional relational databases have robust implementations of indexing, I'm curious to know why one might opt for search engines, like Elasticsearch, over them. Specifically, how does Elasticsearch indexing differ from that of relational databases?
Moreover, are there specific challenges or limitations associated with relational database indexing that Elasticsearch indexing can address more effectively?
I'd greatly appreciate any further insights you can provide on this topic.
Thank you in advance for your time and assistance.
Please create a video on sql joins time complexity analysis, as you have done for index scan, index only and table scan
Informative with crystal clear explanation. Thank you.
LIKE '%za%' is slow; however, I feel LIKE 'za%' would benefit from the index.
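That matches how PostgreSQL behaves, with one caveat: a plain B-tree only serves left-anchored LIKE when the collation is C; otherwise you need the text_pattern_ops operator class. A small sketch against the video's employees table (index name illustrative):

CREATE INDEX employees_name_pattern_idx ON employees (name text_pattern_ops);
-- 'za%' has a fixed prefix, so it can be answered with an index range scan;
-- '%za%' still cannot, because there is nothing to seek to in the B-tree.
EXPLAIN ANALYZE SELECT id FROM employees WHERE name LIKE 'za%';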
Superb simple explanation ❤❤
The best channel on youtube!
Great video.
What would be the best way to deal with more expressive queries like the example you ran into at the end with 'LIKE'? Perhaps you have an index key that encodes multiple data points, and you want to run a regex across each index key to quickly get back a set of ids that satisfy your search. Is there a better method for approaching this type of requirement?
Thanks! You can create an index in postgres with gin extension that allows for expressive queries such as LIKE
niallburkley.com/blog/index-columns-for-like-in-postgres/
@@hnasr Thanks for the info. Great stuff!
select * from employee where name like '%zs%' won't use the index, whereas select * from employee where name like 'zs%' will use it.
Thanks for the cool video.
I have a question: what is the effect of adding too many indexes to a table?
It depends. Too many indexes on a table can actually slow down writes because of the need to update all those indexes and structures; it becomes an overhead.
The trick is to index exactly what we need and include the columns we need so we get inline index-only scans, which are the best.
Good question
Would be nice to know if the index helps in case the expression is "like 'Za%'". Intuitively it should be able to find the rows starting with 'Za' and take advantage of the index, what do you think?
That's a bloody good video mate. What's the RAM speed you're running there?
Great video, thank you!
12:00 wouldn't it be much slower if you searched for something else like '%wr%' because 'zs' results have been cached as a result from the query you ran before the LIKE-query? I mean, the "= 'zs'" query took about 3.2 seconds, the LIKE + wildcard query only 1.x seconds?
Probably because of caching. EXPLAIN ANALYZE will run the SELECT query and tell what is going on.
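One tip that can settle this kind of question: adding BUFFERS to EXPLAIN shows whether the pages came from cache or from disk, which usually explains surprising timing differences (sketch against the video's employees table):

-- "shared hit" = pages found in PostgreSQL's buffer cache,
-- "read" = pages that had to come from the OS/disk.
EXPLAIN (ANALYZE, BUFFERS) SELECT id, name FROM employees WHERE name LIKE '%wr%';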
perfect explanation! thanks Hussein...
16:34 quick video
Thanks. It is a really amazing lesson I have learnt from this discussion.
thanks. very nice one. However, could you make another one on table partitioning?
Yes! I am planning to. Practical video on partitioning
@@hnasr That's going to be a long one :-)
this was a nice demonstration of indexing, thanks!
So interesting! Could u do one explaining how spatial indexes work on geographic data?
Thank you Hussein for this video, was waiting for this one.
My pleasure
It's a great and very helpful video on indexing in general. Please may I get a link if you have uploaded a video on multicolumn indexing? How is it done specifically in PostgreSQL? Thank you
Really great video. It wasn't on nonclustered vs clustered, but a really great explanation nevertheless
Amazing video, thanx Hussein
Really good video, well explained and just the right degree of details that I was looking for!
Great video and teaching skills. Inspired me to buy your udemy course on databases. Can't wait to learn more.
Thanks for your time! Great intro to indexes
Really great job bro, thanks for all this information.
Thank you so much for such a detailed explanation
So how can we make matching expressions fast? And will a document-oriented database face the same problem and search through all documents in a collection to match, say, a name, or will it be faster? (I mean, do we have an option to optimize queries like this?)
Awesome explanation
What is the best thing to use instead of a LIKE query if similar functionality needs to be implemented?
Full-text search, Elasticsearch, some other tool, or is there another way?
Knowing that LIKE is slow AF, makes me wonder what kind of dark magic lies beneath the major search engines that are able to serve a result in less than .2 of a second. It's mind-boggling imo.
Really good video, excellent explanation
My university teacher told me that he/she knows nothing. So he/she closed the door and played your videos on the projector
😮 really
don’t you know your teacher is he or she?
@@harshu2651 no I don't know
Thank you for this informative video❤. You're worth subscribing❤
You are an amazing teacher. Thank you!
When creating the index, does this create a transaction and lock down inserts? I would think because the b-tree is being stored in another place that it doesn't have to do this, but if it doesn't lock the table down, as new records are inserted while the b-tree is being built they won't get put into it until later.
tmanley1985 great question! Lots of databases tackle this differently. I believe Postgres blocks inserts/updates/deletes during the create index operation but allows reads. In Postgres 12 I believe they introduced a new feature to allow writes during create index, however that may take longer..
It's fascinating reading about this stuff and seeing how each database performs in certain situations.. that is what makes engineers pick and choose which database to select
Awesome question
By default PostgreSQL will take a lock during CREATE INDEX that blocks writes but still allows reads. But there is a CONCURRENTLY option (add the CONCURRENTLY keyword to CREATE INDEX) that lets writes continue while the index is built, at the cost of a slower build.
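A minimal comparison, assuming the employees table from the video (index names illustrative):

-- Regular build: blocks INSERT/UPDATE/DELETE on the table until the index is ready.
CREATE INDEX employees_name_idx ON employees (name);
-- Concurrent build: writes keep flowing, but the build takes longer
-- and cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY employees_name_idx2 ON employees (name);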
7:15 For science... Epic
Video starts at 2:30
Great video. I have a question. How does the DB also get the name quickly when it only has the ID? If the name is not in the index, how is the read from disk done?
The index tells you exactly where to look in the table. That's why it's faster. It's like a letter index, if you know you're looking by the letter C, you would go straight to page 3.
Good content. I was doing that LIKE search for a long time though xD. But what if I throw an index on every field of the database?
Very helpful. Thank You.
Subscribed to the channel.
Great explanation. Thanks
it is worth noting that mysql is using index on `LIKE %%` query
That was a really good explanation!
The worst-case scenario is O(n), and it has to scan everything, but I think if the requested record was at the beginning of the table, it should be faster. Is this how it works?
Correct, assuming a full table scan and if we are lucky the row was at the beginning. I am not sure however that all databases scan top to bottom.
That is why a database sometimes prefers a seq scan over an index scan: if it knows it could find the row faster with a sequential scan, or if the index scan is going to be slower because of the scattered nature of the reads.
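A quick way to watch the planner make that choice (a sketch against the video's employees table; the exact plans depend on table statistics and settings):

-- Matches almost every row: the planner usually picks a sequential scan,
-- since chasing the index for nearly all rows means scattered random reads.
EXPLAIN SELECT * FROM employees WHERE id > 100;
-- Highly selective: the index scan wins.
EXPLAIN SELECT * FROM employees WHERE id = 5000;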
so indexes don't work all the time 😐 ... sad ... but nice video !! 👍👍
The video is good, but I was also expecting clustered vs non-clustered indexes
clearly explained thank you man
This was simply wow 👏🏻
Explain explain plan and optimisation of queries...
Excellent video, thanks!
@Hussein Nasser Can you also discuss multi-column index? Thanks!
Well done! Congrats 👏
thanks!! you helped me so much!
Glad I could help!