DP-203: 45 - Azure Synapse Dedicated SQL Pool - extra features

  • Published on Oct 21, 2024

Comments • 22

  • @ryleyalexander8097
    @ryleyalexander8097 26 days ago +1

    I would like to say that watching your videos has really advanced my skills and knowledge as a business intelligence analyst. I'm still a junior and only started working 2 years ago, and I aim to be a data engineer, so my company, which is a bank, wants me to get the DP-203. When I first started studying, I was shocked by how simple you make the content.
    Something that would previously have been Greek to me has now given me a lot of technical knowledge to further understand the ETL process and explain code.
    I have my exam on the 7th of October, will let you know how it goes! :D
    And as promised in my previous comment, I've subscribed and become a member!

    • @TybulOnAzure
      @TybulOnAzure  26 days ago +1

      I'm super happy that you find my course useful.
      About the exam - don't worry about it as you'll pass it with flying colors!
      And thanks for becoming a member - I really appreciate it.

    • @ryleyalexander8097
      @ryleyalexander8097 15 days ago +1

      @@TybulOnAzure thanks Tybul!
      I passed today with a score of 753!

    • @TybulOnAzure
      @TybulOnAzure  15 days ago +1

      @ryleyalexander8097 congratulations 🎉

  • @jacklarrytairo5562
    @jacklarrytairo5562 3 months ago +2

    Hello Tybul, I really liked the initial explanation of workload management - it was a great complement for understanding this topic very well. I was also reading about partitions in other Azure services, such as Stream Analytics, and at the data warehouse level; it would be great to see them soon so we can see the differences, because those are partitions at the physical level. Thank you very much.

    • @TybulOnAzure
      @TybulOnAzure  3 months ago +1

      I plan to cover partitioning in streaming

  • @vishwajeetbhangare380
    @vishwajeetbhangare380 3 months ago

    Hello Tybul, if you could reduce the gap between videos, that would be great. It would help me prepare for and pass DP-203 as early as possible. Your lectures are very lucid and to the point. Thank you😊

    • @TybulOnAzure
      @TybulOnAzure  3 months ago

      Hi, you can get early access to new videos by becoming a "Data engineer" member of my channel. Just follow this link: th-cam.com/channels/LnXq-Fr-6rAsCitq9nYiGg.htmljoin

    • @vishwajeetbhangare380
      @vishwajeetbhangare380 2 months ago

      Does it include all the videos? Or is it just that I will get access to each video a week earlier than anybody else?

    • @TybulOnAzure
      @TybulOnAzure  2 months ago

      Week earlier

  • @kvsharish
    @kvsharish 3 months ago

    Hi Tybul, I have a couple of questions:
    1) Is partitioning also associated with workload management, or only with distributions?
    2) Is there any relationship between partitioning and workload management?
    3) Is the partitioning you mentioned the same as horizontal and vertical partitioning?
    4) Is the sliding window here the same as the one used in Stream Analytics?
    Thanks for sharing your knowledge!
    Cheers!!

    • @TybulOnAzure
      @TybulOnAzure  3 months ago +2

      Hi, let me answer your questions:
      1) Workload management works regardless of whether you partitioned your data or not.
      2) No.
      3) Horizontal partitioning is about splitting the data by rows (so one partition has e.g. the first million rows, the next one has another million, etc.), while vertical partitioning is about splitting the data by columns (so one partition has e.g. the first 10 columns, another one has the next 10 columns, etc.). The partitioning I was talking about is an example of horizontal partitioning (just like those 60 distributions) - every partition has the same set of columns (all of them), but different rows.
      4) Kind of - in my case it meant that my table always stored data from a specific number of months, hence it is sliding. In a future episode I'll explain how it works in the case of Stream Analytics.
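
      To make the horizontal-partitioning and sliding-window points above concrete, here is a minimal T-SQL sketch for a dedicated SQL pool; the table, column, and archive-table names (dbo.FactSales, SaleDate, dbo.FactSales_Archive) are hypothetical illustrations, not taken from the video:

        -- Fact table: hash-distributed across the 60 distributions (rows split by hash of CustomerKey),
        -- with monthly range partitions layered on top. Both are horizontal splits: every
        -- partition/distribution keeps all columns but only a slice of the rows.
        CREATE TABLE dbo.FactSales
        (
            SaleDate    DATE          NOT NULL,
            CustomerKey INT           NOT NULL,
            ProductKey  INT           NOT NULL,
            Amount      DECIMAL(18,2) NOT NULL
        )
        WITH
        (
            DISTRIBUTION = HASH(CustomerKey),
            CLUSTERED COLUMNSTORE INDEX,
            PARTITION ( SaleDate RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01') )
        );

        -- Sliding window: when a new month arrives, switch the oldest partition out to an
        -- identically defined, empty archive table (a metadata-only operation), so the fact
        -- table always holds a fixed number of months.
        ALTER TABLE dbo.FactSales SWITCH PARTITION 1 TO dbo.FactSales_Archive PARTITION 1;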

    • @kvsharish
      @kvsharish 3 months ago

      @@TybulOnAzure Thank you for replying so clearly :)

  • @ArunBaby-d4d
    @ArunBaby-d4d 17 days ago

    Hello Tybul, thanks for those videos; they're really helpful as I am preparing for DP-203.
    I had a doubt about the minimum number of rows a table should contain before we create partitions; I saw different answers for this in many forums. From your explanation, what I understood is that it's 60 million rows, right? See the full context of the question below.
    You plan to create a fact table named Table1 that will contain a clustered columnstore index.
    You need to optimize data compression and query performance for Table1.
    What is the minimum number of rows that Table1 should contain before you create partitions?

    • @TybulOnAzure
      @TybulOnAzure  17 days ago

      I'm unable to discuss this as it seems to align with content from exam braindumps, which goes against Microsoft's policy.

  • @fazzujana7765
    @fazzujana7765 2 months ago

    Hi Tybul, thank you for your efforts. I can see 45 episodes on DP-203 - are there any episodes yet to come, or is this the end of the DP-203 series?

    • @TybulOnAzure
      @TybulOnAzure  2 months ago

      Hi, there will be 53-54 episodes in total.

    • @fazzujana7765
      @fazzujana7765 2 months ago

      @@TybulOnAzure thank you and looking forward to it.

  • @swathi8273
    @swathi8273 2 months ago

    Hi Tybul, what is the best way to fix data skew?

    • @TybulOnAzure
      @TybulOnAzure  a month ago

      Are you asking in the context of this episode?

  • @nagaharshavardhan8778
    @nagaharshavardhan8778 3 months ago

    Sir, is this course enough for cracking a data engineer interview?

    • @TybulOnAzure
      @TybulOnAzure  3 months ago

      It depends on the level: it is certainly enough for a junior data engineer role, might be enough for a data engineer position, and is probably not detailed enough for a senior data engineer position.