PostgresOpen 2019: Optimizing Query Performance

Comments • 9

  • @karianpour
@karianpour 4 years ago +6

    You still have more room for performance improvement; I suggest the following index:
    CREATE INDEX idx_org_closed_lastitem_id
        ON conversations (organization_uid, last_item_at DESC, id ASC)
        WHERE deleted_at = 'epoch' AND closed = false;
    Explanation:
    - As you always have the "deleted_at = 'epoch' and closed = false" criteria in your query, it is better to put them in the WHERE clause of the index (a partial index); that makes the index smaller, which means faster discovery of the records.
    - The ORDER BY field directions should match the index fields, so I made it "id asc" to match, or you can change the query to match "id desc".
    - Actually I did not get the point of the "deleted_at = 'epoch'" criterion, but you get the idea of how to implement it.
    - About "location_id = ANY('{1}')": if you are sure you only ever search for one value, you can also add this field to the index columns, just after the "organization_uid" field (see the variant below).
    Furthermore, for the infinite scroll I hope you do not use the OFFSET clause to fetch the next page; there is a better way to do it (keyset pagination, sketched below).
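    A minimal keyset-pagination sketch, assuming the columns from the index above; the parameters $2 and $3 stand for the last_item_at and id of the last row on the previous page:
    -- next page: seek past the last row already shown
    SELECT id, last_item_at
      FROM conversations
     WHERE organization_uid = $1
       AND deleted_at = 'epoch' AND closed = false
       AND (last_item_at < $2 OR (last_item_at = $2 AND id > $3))
     ORDER BY last_item_at DESC, id ASC
     LIMIT 50;
    Because the sort mixes DESC and ASC, the seek predicate cannot use a single row comparison like (last_item_at, id) < ($2, $3); it has to spell out the two cases.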
    Also, as you update conversation_read_statuses on every message read, this table gets a lot of updates, and as you know Postgres does not update a tuple in place; rather, it creates a new one, and later the VACUUM process reaps the dead tuples. There is some concern here if Postgres cannot VACUUM as fast as you update the table (one way to check is sketched below).
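    A quick way to see whether autovacuum is keeping up is to watch the dead-tuple counters for that table:
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
      FROM pg_stat_user_tables
     WHERE relname = 'conversation_read_statuses';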
    And in the end, I wonder why it took you so long to figure all that out!

    • @RajeshKumar-fc9vk
      @RajeshKumar-fc9vk 6 months ago

      Sir... I have a doubt: how do you EXPLAIN ANALYSE dynamic queries?

  • @bobslave7063
    @bobslave7063 4 years ago

    Thank you very much. It was very useful!

  • @565paolo
    @565paolo 1 year ago

    When it takes him this long to realize that he could have simply run EXPLAIN ANALYZE and seen what was preventing the query from running faster, this talk starts to feel like a troll. How could you not Google anything during all this time and learn that it exists?
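    For reference, a minimal sketch of what that would look like (the query shape here is a hypothetical stand-in, not the exact query from the talk):
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT id
      FROM conversations
     WHERE organization_uid = 'some-uid'   -- hypothetical value
       AND closed = false
     ORDER BY last_item_at DESC
     LIMIT 50;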

  • @serg472
    @serg472 3 years ago +14

    That was hard to listen to; I only felt sorry for the company that apparently keeps a whole team of such incompetent people on the payroll. It was not so much that you don't know how to optimize queries; it's that your general approach to solving a problem is very unprofessional. To fix something, first you need to replicate the problem and the environment, then you need to narrow it down (in this case, find exactly what part of the query is slow), then you need to understand what is causing it and why, then you need to fix it, then you need to test it on realistic data, and only then deploy. You were just blindly changing random stuff and deploying, without measuring or testing, and without understanding what problem you were fixing in the first place. You don't need to wait 3 days for your customers to tell you that it went from 6 to 3 minutes; you must know all this yourself before deploying.

    • @fortunatmutunda4194
      @fortunatmutunda4194 3 years ago +1

      I guess that's what learning is for.

    • @roffel6876
      @roffel6876 3 years ago +2

      @@fortunatmutunda4194 True, but there are resources out there and people who know their business. They could probably have contracted someone to come in and fix all of their stuff in 3 hours. And then there is that wonderful use-the-index-luke website. People can study that for a day or two and know all they need to know.

    • @jewie1011
      @jewie1011 3 years ago +1

      I agree. They should've just hired a professional.

    • @chungwu8329
      @chungwu8329 1 year ago +1

      I was waiting for the speaker to run EXPLAIN ANALYSE at some point...