5: Netflix + YouTube | Systems Design Interview Questions With Ex-Google SWE

Comments • 104

  • @laserbam
    @laserbam 6 months ago +21

    Thanks for doing this series! A few days ago, I signed my L5 offer at Google, so your system design videos (and slide decks) came in clutch

    • @jordanhasnolife5163
      @jordanhasnolife5163  6 months ago +8

      Hell yes dude, extremely proud of you, keep killing it!!

  • @Sfroment
    @Sfroment 5 days ago +1

    You really are doing the best system design videos I’ve ever seen ! Keep up the great work

  • @allenxxx184
    @allenxxx184 3 months ago +8

    Your channel deserves at least 1M subscribers. Most high-quality system design video!!!

  • @idiot7leon
    @idiot7leon 2 months ago +6

    Brief Outline
    00:01:04 Problem Requirements
    00:01:46 Capacity Estimates
    00:02:52 Video Streaming Intro
    00:04:00 Video Chunking
    00:05:40 Chunking Advantages
    00:07:09 Database Tables - Subscribers
    00:09:39 Database Tables - User Videos, Users, Video Comments
    00:11:33 Database Tables - Video Chunks
    00:12:45 Database Choices
    00:14:45 Video Uploads
    00:15:57 Video Uploading - Broker
    00:16:46 Video Uploading - Broker
    00:18:51 Video Uploading - Chunks
    00:20:27 Video Uploading - Chunk Storage
    00:22:32 Video Uploading - Aggregation
    00:26:41 Video Uploading - Streaming Datamodels
    00:28:37 Video Uploading - Flink
    00:31:15 Video Uploading - Flink Continued
    00:33:53 Video Uploading - Search
    00:34:59 Search Index - Partitioning
    00:37:17 Search Index - Partitioning Continued
    00:38:57 Search Index Uploads
    00:40:21 Final Diagram - Netflix/YouTube
    Thanks, Jordan~

  • @sauravsingh5663
    @sauravsingh5663 3 months ago +4

    This is exactly what I was looking for. Love how you uncover the right level of detail where it is necessary.
    Great work !!

    • @dosya6601
      @dosya6601 7 days ago

      +

  • @MithunSasidharan1989
    @MithunSasidharan1989 6 months ago +8

    Thank you for continuing to do this. It's a goldmine for engineers preparing for interviews : )

  • @wensongliu5058
    @wensongliu5058 months ago +1

    Much appreciation to you, Jordan. This video covers so many detailed components and processes going back and forth. I've already watched it many times and it's really helpful!

  • @rahulnath9655
    @rahulnath9655 6 months ago +3

    This one is so dense and detailed, thanks man. I feel like I really understand these systems now.

  • @ky747r0
    @ky747r0 2 months ago +1

    42:36 Jordan man it's been a long way... from your super wobbly handwriting in the 1st concepts video to this super beautiful amazing handwriting. And as always, quality content!!!

  • @Luzkan
    @Luzkan 4 months ago +2

    Congrats on 21k Jordan! It's the 5th video for me so far and I'm amazed every single time by the level of detail you manage to get into. How long, on average, do you think about the whole system before starting the video itself (let's say without refining it into something presentable, just mapping the thoughts out)?
    14:39 / 41:40 - (In my design channel_id is the same thing as user_id.) I'm wondering why you suggest sharding on channel_id + video_id rather than just video_id? I don't see how having comments from other videos of a given user (channel) close by is helpful. 🤔
    24:49 - What happens if RabbitMQ dies after a successful upload to S3, just after the messages with metadata have been put on the queue (I know there are options for durable queues and persistent messages, but is that the way to go)?
    Btw, do you know how Discord handled causal dependencies (relationships between messages, like message-to-message replies) with Cassandra?

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      Hey! I'm basically remaking all of these videos right now, so I don't have to think about them for too long. I mainly just re-watch my old video on it and then try to decide if what I did last time was stupid haha.
      14:39 - yup, typo on my part, nice catch.
      24:49 - Ideally we would have multiple replicas of RabbitMQ so that if the leader dies a follower can take over and we can proceed as normal.
      I do not know the answer regarding Discord! Maybe version vectors, maybe they always write to the same leader for a given parent comment id, maybe quorums!
      I'd have to look into it.
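
      As a side note on the durable-queue option mentioned in the question, here is a minimal pika sketch (my assumption, not the video's setup) of the two durability knobs RabbitMQ exposes - a durable queue plus persistent messages - which complements replication rather than replacing it. The queue name and payload are hypothetical.

        # Hypothetical sketch: durable queue + persistent messages in RabbitMQ,
        # so enqueued chunk metadata survives a broker restart.
        import json
        import pika

        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = conn.channel()
        channel.queue_declare(queue="chunk_metadata", durable=True)  # queue definition survives restarts

        channel.basic_publish(
            exchange="",
            routing_key="chunk_metadata",
            body=json.dumps({"video_id": "v1", "chunk_id": "c1", "s3_path": "s3://bucket/v1/c1"}),
            properties=pika.BasicProperties(delivery_mode=2),  # 2 = persist the message to disk
        )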

  • @kword1337
    @kword1337 6 months ago +1

    Thanks for another banger dude! For complicated stuff like video aggregation, are you getting your ideas from white papers? That level of design seems beyond Designing Data-Intensive Applications?

    • @jordanhasnolife5163
      @jordanhasnolife5163  6 months ago

      Well I don't feel like DDIA is ever super opinionated on how to design things in particular.
      That being said, real time aggregation using stream processing seems to be something used across many systems and it also handles pretty much all failure scenarios for us, hence the reason I keep abusing it haha

  • @alberdgdj1
    @alberdgdj1 6 months ago +1

    Hi Jordan, thanks for your videos, they are of huge value. I wonder if you could do a video about calculating Big O complexity with some exercises, that would be really helpful. Thanks mate!

    • @jordanhasnolife5163
      @jordanhasnolife5163  6 months ago +2

      I appreciate that! I can do this, however it realistically would be a while before I get to it, just due to the fact that I'm mainly trying to focus on systems design. That being said, there are many good resources on the internet for how to calculate this type of thing!

  • @dmitrigekhtman1082
    @dmitrigekhtman1082 4 months ago +3

    The upload and processing pipeline could include lots of different jobs with complicated interdependencies, with the S3 upload stage as one of the first steps. Possibly, a general-purpose workflow orchestration framework (something like Temporal, maybe?) could help coordinate all of it.

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago

      Agreed, and I imagine that IRL they do probably have something like this!

    • @user-of5je3sp9n
      @user-of5je3sp9n 4 months ago +1

      You should do a video on workflow orchestration :D

  • @Randomguu
    @Randomguu 5 months ago +2

    Wonderful series, cannot stop watching. Just one question on something that's bugging me - I heard this suggestion in a few of the other videos as well: how do you decide that a SQL DB will be better when we have a read-heavy system? I understand the B-tree vs LSM-tree point, but NoSQL scales better and hence will have less locking and contention on a single SQL node (even if we have master-slave replication for reads, it still scales poorly, no?). I think LSM vs B-tree is merely a theoretical discussion rather than having practical application here.

    • @jordanhasnolife5163
      @jordanhasnolife5163  5 months ago

      You say "NoSQL" scales better - what makes you say this? That's really only the case when we're running a bunch of distributed joins, which we aren't doing in any of this reads

  • @siddharthgupta6162
    @siddharthgupta6162 5 months ago +1

    Thanks for the video, Jordan. Awesome content as always.
    Is there any difference between streaming vs chunking? I read somewhere that streaming is an error-prone process so one should prefer chunking over it - but there was no explanation on it.
    Any thoughts on this?

    • @jordanhasnolife5163
      @jordanhasnolife5163  5 months ago

      Yeah to tell you the truth no clue - sounds like some guy spewing some bs as per usual with 99% of systems design videos lol

    • @siddharthgupta6162
      @siddharthgupta6162 5 months ago

      @@jordanhasnolife5163 lol sounds about right

  • @nirajvora9314
    @nirajvora9314 6 months ago +2

    Don't stop making videos bro. Your content is unique and effective.

  • @user-wj1wy6ph5q
    @user-wj1wy6ph5q 5 months ago +2

    Awesome design 🙇

  • @xiaoyinqi7296
    @xiaoyinqi7296 3 months ago +1

    Thanks for the video, Jordan, very impressive.
    I want to understand the reason for using Flink here. I know Flink is a stream processing tool; I believe we want to confirm whether the transcoding of all the chunks is done. My thought is to use a chunk DB table to mark each chunk's status.

    • @jordanhasnolife5163
      @jordanhasnolife5163  3 months ago +1

      You can definitely use a chunk db. However, note that this means:
      1) You need to make an additional network request to the chunk db every time
      2) That request can fail, how do you ensure that we eventually write it there?
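
      For illustration, a minimal sketch (my assumption, not the exact Flink job from the video) of the stateful aggregation the stream processor performs instead of a chunk table: keep per-video state of completed chunk messages and emit a "video ready" event once the expected total is reached. All names below are hypothetical.

        # Hypothetical sketch of per-video chunk aggregation; plain Python dicts
        # stand in for the keyed state a Flink process function would hold.
        from collections import defaultdict

        completed = defaultdict(set)   # video_id -> ids of chunks finished transcoding
        expected = {}                  # video_id -> total chunk count from the metadata stream

        def on_chunk_done(video_id, chunk_id):
            completed[video_id].add(chunk_id)
            maybe_emit_ready(video_id)

        def on_total_chunks(video_id, total):
            expected[video_id] = total
            maybe_emit_ready(video_id)

        def maybe_emit_ready(video_id):
            if video_id in expected and len(completed[video_id]) == expected[video_id]:
                print(f"video {video_id} fully transcoded")  # emit the downstream completion event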

  • @college7290
    @college7290 5 months ago +1

    Real treasure! Thank you. What resources did you use to learn these concepts? I know your knowledge is not out of books, but based on years of hard work and experience. How can I start learning these concepts myself? What can I do to be as knowledgeable as you in the next 5-10 years?

    • @jordanhasnolife5163
      @jordanhasnolife5163  5 months ago +1

      Just reading haha, I'm nothing special! You'd be surprised how much you can learn by looking at "Uber system design" from reputable sources (their site and not YouTubers)

  • @dinar.mingaliev
    @dinar.mingaliev 6 months ago +1

    Hi Jordan, thank you so much for keeping us educated and sharing your ideas on system design. Short question: don't we also need to add a chunk processor, so that once a user uploads a video into temporary S3 or a DFS, the service splits it into chunks?
    And meanwhile one more question: if we have single-leader replication + partitions in Cassandra, will it work correctly with comment editing?
    And also we need a service to create a user feed :)

    • @dinar.mingaliev
      @dinar.mingaliev 6 months ago +1

      Also, I guess insert, update, and delete operations on a single row are atomic, isolated, and durable in Cassandra, and assuming that the same user edits their own comments there should not be a problem with eventual consistency. What do you think man? :)

    • @jordanhasnolife5163
      @jordanhasnolife5163  6 months ago

      Thanks!
      I had envisioned the user's client breaking the file into chunks.
      Secondly, I'd agree that edits of comments are no issue if we use single leader replication, but for multi leader replication they definitely could be!
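
      For what it's worth, a tiny sketch of that client-side chunking (the chunk size and the upload call are assumptions, not from the video):

        # Hypothetical sketch: the client splits the source file into fixed-size
        # chunks and uploads each one (e.g., via a presigned S3 URL).
        CHUNK_SIZE = 8 * 1024 * 1024  # 8 MB per chunk, an assumed value

        def iter_chunks(path):
            with open(path, "rb") as f:
                index = 0
                while True:
                    data = f.read(CHUNK_SIZE)
                    if not data:
                        break
                    yield index, data
                    index += 1

        for index, data in iter_chunks("movie.mp4"):
            upload_chunk(index, data)  # hypothetical upload call, e.g. a PUT to a presigned URL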

  • @xRuneGunx
    @xRuneGunx 5 months ago +1

    At 41:31 you mentioned that using Cassandra increases write throughput. However, doesn't Cassandra use a leaderless replication model such that write availability is increased? I was under the impression that multi-leader replication increases write throughput due to its nature of processing writes in parallel. Can you clear up my confusion?
    Thanks for the video

    • @jordanhasnolife5163
      @jordanhasnolife5163  5 months ago

      Yes sorry and good catch here. Cassandra can be run in multiple different configurations: one with quorum consistency, and another where writes just need to hit one node. I'm mainly referring to the latter, which is effectively multi leader replication.
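
      For concreteness, a small sketch with the DataStax Python driver (an illustration, not the video's code) showing the same write issued at QUORUM versus at consistency level ONE, the looser setting referred to above. The keyspace, table, and columns are assumptions.

        # Hypothetical sketch: per-statement consistency levels in Cassandra.
        from datetime import datetime

        from cassandra import ConsistencyLevel
        from cassandra.cluster import Cluster
        from cassandra.query import SimpleStatement

        session = Cluster(["127.0.0.1"]).connect("videos")
        insert = ("INSERT INTO video_comments (video_id, created_at, comment_id, user_id, body) "
                  "VALUES (%s, %s, %s, %s, %s)")

        quorum_write = SimpleStatement(insert, consistency_level=ConsistencyLevel.QUORUM)  # stronger, slower
        fast_write = SimpleStatement(insert, consistency_level=ConsistencyLevel.ONE)       # looser, higher write throughput

        session.execute(fast_write, ("vid123", datetime.utcnow(), "c1", "user9", "great video"))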

  • @meenalgoyal8933
    @meenalgoyal8933 2 months ago +1

    Hey Jordan, I am wondering how the design might change for an audio streaming service like Spotify. I think a lot might remain the same as YouTube, but 2 major things:
    1. Do you think we need to break audio file into chunks? Sure we can benefit from parallel uploading and getting one chunk at a time for streaming but audio files are lighter than video.
    2. What kind of processing might be required for each audio file chunk?

    • @jordanhasnolife5163
      @jordanhasnolife5163  2 months ago

      Hey! I think 99% of it is probably going to be the same. You'd probably have different bit rates for streaming the audio if you have a worse connection, which is the processing involved. Maybe you wouldn't need chunking since as you mentioned the files are much smaller in size.

  • @indraneelghosh6607
    @indraneelghosh6607 4 months ago +1

    Hi Jordan. Had a few questions related to the video upload flow. Could you please explain why you chose RabbitMQ over Kafka for uploading the metadata? Also, there may be times when there is a spike in the number of videos being uploaded, particularly in the case of a YouTube-like system; I would expect video uploading on YouTube to have a rather irregular traffic pattern compared to a streaming platform like Netflix. Any ideas on how to tackle these spikes without manual intervention?

    • @jordanhasnolife5163
      @jordanhasnolife5163  4 months ago +1

      To be honest, I do think that the uploading on YouTube would be more regular than you think. You've got people in every timezone. But yeah, I guess the way you'd do it is just have your consumers that are doing the encoding be part of some Hadoop cluster that is also performing other work in the meantime, and as more jobs come in for uploads you can kill whatever jobs those nodes are currently doing and use them for uploads.
      For your first question, RabbitMQ is going to allow me to use a fan-out design such that I don't need a bunch of different partitions (one per consumer) as I would with Kafka. I don't care about message ordering at all here, so a fan-out is fine.
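
      One way to read that (my sketch, assuming a pika client; the queue name and transcode_chunk are hypothetical) is the competing-consumers pattern: many encoder workers pull from one shared durable queue, so adding a worker never requires adding a partition.

        # Hypothetical sketch: encoding workers consume from one shared RabbitMQ
        # queue; each message is acked only after the chunk is transcoded.
        import pika

        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = conn.channel()
        channel.queue_declare(queue="chunk_encode_jobs", durable=True)
        channel.basic_qos(prefetch_count=1)  # give each worker one unacked job at a time

        def handle_job(ch, method, properties, body):
            transcode_chunk(body)                            # hypothetical encoding step
            ch.basic_ack(delivery_tag=method.delivery_tag)   # ack only after success

        channel.basic_consume(queue="chunk_encode_jobs", on_message_callback=handle_job)
        channel.start_consuming()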

  • @9527-ljc
    @9527-ljc 5 months ago +1

    Thanks, this is great content. For an entry-level SDE, which parts should we focus on more in a system design interview?

    • @jordanhasnolife5163
      @jordanhasnolife5163  5 months ago

      If you're looking for junior roles, I'd honestly just keep grinding leetcode haha.
      Otherwise, I'd say that the whole video is still relevant. Can't hurt to learn!

  • @adithyabhat4770
    @adithyabhat4770 5 months ago +1

    Thanks Jordan!

  • @ravi72munde
    @ravi72munde 5 months ago +1

    For processing chunks, is it possible to use Kafka + Spark so that each Spark job handles a single video but processes its chunks on multiple workers, and at the end marks the job completed when all chunks are processed? That would make keeping state for the video's chunks redundant.

    • @jordanhasnolife5163
      @jordanhasnolife5163  5 months ago +1

      A couple of concerns here that you'd have to address:
      1) how do we know when to trigger the spark job?
      2) You're triggering a lot of spark jobs haha
      In practice, this may work! I think we'd have to try it out.

    • @ravi72munde
      @ravi72munde 5 months ago

      Good point! How about using a Kafka queue to queue the jobs? Each message would just contain the videoID whose chunks are ready to process. A consumer could act as a Spark streaming (master) node: it picks up an available message, fetches all the chunk_ids/file URLs for that video, and distributes the chunks to worker nodes. Once all chunks are processed the master node would know and mark the video as complete. As an advantage, it'll be easy to track which video failed rather than which chunks.

  • @ariali2067
    @ariali2067 3 months ago +1

    Again, sorry, the same question keeps catching me. Is the search index basically a new table, or a secondary index on the existing user-videos table? I had already convinced myself that it's a secondary index on top of the existing tables, but in this video it seems that we are creating a new table (with some denormalized data from the user-videos table) -> if this is the case (creating a new table) -> why do we need (user id, video id) as the partition key here? Why can't we use the term as the partition key, such that for a given search term all the results are on the same node for faster reads? This really bothers me.. I'd really appreciate it if you could help clear up my confusion, thanks again!

    • @jordanhasnolife5163
      @jordanhasnolife5163  3 months ago

      1) new table
      2) typically too much data for a given term, imagine "Donald Trump"
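
      To illustrate the tradeoff (my sketch, not the video's exact schema): with a document-partitioned index, a query for a term fans out to every partition and merges the partial results, but no single node has to hold all the postings for a hot term.

        # Hypothetical sketch of scatter-gather over a document-partitioned index:
        # each partition only indexes the videos it owns, so a search hits all
        # partitions and merges, instead of one node owning a hot term entirely.
        partitions = [
            {"cats": ["v1", "v7"], "trump": ["v2"]},    # partition 0's local index
            {"trump": ["v3", "v9"], "music": ["v4"]},   # partition 1's local index
        ]

        def search(term):
            results = []
            for local_index in partitions:   # fan out to every partition
                results.extend(local_index.get(term, []))
            return results                   # merge (ranking omitted)

        print(search("trump"))  # ['v2', 'v3', 'v9']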

  • @aforty1
    @aforty1 months ago +1

    Liked and commented for the algo! Thank you!

  • @ankitagarwal4022
    @ankitagarwal4022 months ago +1

    @jordanhasnolife5163 Hi Jordan, I have just one question: your processor transforms each chunk into a list of transformed chunks, which depends on the number of encodings * resolutions.
    Let's say, for example, we have 10 encodings and 4 resolutions; that makes 40. So we have to transform 1 chunk into 40 and upload all 40 into S3.
    I assume transforming a chunk is itself a heavy process. Can you suggest some optimization here, so that if our event processing fails we don't have to transform every chunk from the beginning?

    • @jordanhasnolife5163
      @jordanhasnolife5163  months ago

      I'm pretty confused about what you mean here - each resolution/encoding is already processed independently in parallel, so if one fails the rest do not fail. Feel free to elaborate!

    • @ankitagarwal4022
      @ankitagarwal4022 months ago +1

      @@jordanhasnolife5163 What I understand about the flow of data:
      1. First we upload the chunks to S3, let's say (c1, c2, c3, ...)
      2. Then we add the chunk details to the broker (RabbitMQ)
      3. The processor consumes chunk details from the broker, let's say C1, and puts the list of transformed videos (C1R1E1, C1R1E2, C1R1E3, C1R2E1, C1R2E2, C1R2E3) into S3, considering resolutions (R) = 2 and encodings (E) = 3. The processor also puts the list details into Flink.

    • @jordanhasnolife5163
      @jordanhasnolife5163  months ago

      @@ankitagarwal4022 The only transformation of one chunk to another that we're doing right at the start is creating the list of all of the metadata that we will eventually need to create. So that can all go into RabbitMQ, and once it does we can be fairly confident that the chunk will eventually be created downstream, because it will only get removed from RabbitMQ once the consumer puts the completion message in Kafka.
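
      As a small sketch of that fan-out of per-chunk work items (the resolution/encoding lists and message shape are assumptions, not the video's exact format):

        # Hypothetical sketch: expand one uploaded chunk into one work item per
        # (resolution, encoding) pair; each item would be published to the broker
        # and acked only after its completion event lands in Kafka downstream.
        RESOLUTIONS = ["360p", "720p", "1080p"]
        ENCODINGS = ["h264", "vp9"]

        def jobs_for_chunk(video_id, chunk_id):
            return [
                {"video_id": video_id, "chunk_id": chunk_id, "resolution": r, "encoding": e}
                for r in RESOLUTIONS
                for e in ENCODINGS
            ]

        for job in jobs_for_chunk("v1", "c1"):
            print(job)  # in the real flow these would go onto the broker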

  • @roshankumar0911
    @roshankumar0911 6 months ago +2

    I recently cleared my system design round after watching ur videos..it's so compact & precise. Thank you for making such videos. Can you please mention your linkedin id ?

    • @jordanhasnolife5163
      @jordanhasnolife5163  6 months ago

      Glad to hear!! Congrats!
      www.linkedin.com/in/jordan-epstein-69b017177?
      If you don't mind, just don't tag me in stuff so that I don't lose my job haha

    • @roshankumar0911
      @roshankumar0911 6 months ago

      @@jordanhasnolife5163 Sure, thanks :)

  • @rakeshvarma8091
    @rakeshvarma8091 2 months ago +1

    You Are Awesome Bro!!

  • @saurabhmittal6947
    @saurabhmittal6947 10 days ago +1

    hey jordan, I have one question.. how is the client able to uniquely generate the chunk-id and video-id? Because here you are showing that the client uploads to S3 and then sends that data to the upload service, but who is assigning unique ids to all these entities flowing through our system?

    • @jordanhasnolife5163
      @jordanhasnolife5163  8 days ago

      The video id can just be some userId + a hash or something. The chunk ID is also basically a hash and just needs to be unique per video id
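
      A minimal sketch of what that could look like client-side (the exact id format isn't specified in the video, so the scheme below is an assumption):

        # Hypothetical sketch: client-generated ids. The video id combines the
        # user id with a random component; each chunk id is a content hash that
        # only needs to be unique within its video.
        import hashlib
        import uuid

        def make_video_id(user_id: str) -> str:
            return f"{user_id}-{uuid.uuid4().hex}"

        def make_chunk_id(chunk_bytes: bytes) -> str:
            return hashlib.sha256(chunk_bytes).hexdigest()[:16]

        video_id = make_video_id("user42")
        chunk_id = make_chunk_id(b"...raw chunk bytes...")
        print(video_id, chunk_id)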

  • @vigneshraghuraman
    @vigneshraghuraman 7 days ago +1

    Once the chunks are uploaded by the user to S3, how does the upload service know which chunks to put on RabbitMQ? Is this done via S3 notifications to the upload service?

    • @jordanhasnolife5163
      @jordanhasnolife5163  6 days ago

      The client will upload chunks based on which ones are "new". Then they all go into rabbit mq.

  • @truptijoshi2535
    @truptijoshi2535 22 days ago +1

    Hi Jordan, can CDC have a single point of failure? If yes, how do we avoid it? Also, does CDC add extra latency?

    • @jordanhasnolife5163
      @jordanhasnolife5163  22 days ago

      I mean in theory Kafka, but I tend to imply that our Kafka cluster has replicas.
      CDC does make things slower, but I suppose in the cases where I use it I don't actually care (hence why I use it)

  • @weijiachen2850
    @weijiachen2850 months ago +2

    How does this guy know all this as a junior engineer? He should be promoted to staff engineer.

    • @jordanhasnolife5163
      @jordanhasnolife5163  months ago

      Very unclear if I have what it takes for that

  • @niapuchun
    @niapuchun 3 months ago +1

    On the page at the 2:10 mark, shouldn't the last line say 1 million videos?

  • @calvincruzada1016
    @calvincruzada1016 6 months ago +1

    Awesome

  • @JulianA-rm4ry
    @JulianA-rm4ry months ago +1

    Thank you Jordan

    • @JulianA-rm4ry
      @JulianA-rm4ry months ago +1

      Now I'm only 1/2 screwed

  • @vorandrew
    @vorandrew 6 months ago +1

    Chunking stuff question... Why would you want to store chunks anywhere except in a cache? Let's say a video is 50 MB - do you want to permanently save transcodes at 3-4 resolutions x 1-2 formats? A petabyte here, a petabyte there, and we are talking about big numbers... If you can always re-create them, there's no need to store transcodes for a video that was last viewed 3 years ago... cache them with a last-access timeout set to 1 week, for example... At most, maybe you want to store the first chunk for fast access.

    • @jordanhasnolife5163
      @jordanhasnolife5163  6 months ago

      Would appreciate if you could elaborate here! While it's true that we could store the entire video file and never deal with any chunks, assuming we originally upload chunks to S3 when first uploading the file we'll always need at least some chunk metadata in our database to load them

    • @vorandrew
      @vorandrew 5 months ago +1

      @@jordanhasnolife5163 My guess is like this: we receive the file at its original resolution -> chunk it by 2 seconds -> long-term storage. Transcode the first chunk into 144, 240, 360, 480, etc. resolutions (don't store) -> CDN expiration = 1 year since last access (just to have a fast-start experience). Whenever somebody starts to watch a video, we transcode the necessary resolution on the fly from the original chunks in parallel and store it in the CDN with expiration = 1 week. I'm sure the combined transcode speed will be faster than the viewing speed, so viewing will be seamless.
      Regarding metadata - as you said, during upload we can store all the necessary chunking info in some NoSQL DB.

    • @vorandrew
      @vorandrew 5 months ago +1

      Thank you for your videos! ❤ After viewing some, I can see your designs tend to give out disk space like the Fed is printing money 😂

    • @jordanhasnolife5163
      @jordanhasnolife5163  5 months ago

      @@vorandrew Ah I see what you're saying here, I think it's one of those things that we'd have to actually try out and see if the latencies would be low enough. We do care a lot more about lowering read latencies here, so I wonder if this would work in practice but it's an interesting thought!

    • @jordanhasnolife5163
      @jordanhasnolife5163  5 months ago

      @@vorandrew Haha yeah - my personal philosophy here is to use as much disk space as needed, we could always optimize for cost saving measures in the future! At least for the interview I don't know how often it would come up, but it's possible!

  • @rahulrachh3320
    @rahulrachh3320 3 months ago +1

    Video Timestamp: 10:18
    Part 1:
    For the user videos table, we can omit the timestamp since UserId + VideoId make a unique pair; when you fetch the videos from the table you get the timestamp anyway, then sort them and display the videos for a user who uploads videos. Correct me if I am wrong.
    Part 2:
    Also, in the video comments table, VideoId will be unique, so why are we using the timestamp along with it? Does this help in getting output in sorted order?
    Thanks :)
    Edit: Added video timestamp

    • @jordanhasnolife5163
      @jordanhasnolife5163  3 months ago +1

      1: Definitely doable, however it is easier to keep things pre-sorted by timestamp in the metadata database so that you don't have to sort them on the fly for each read.
      2: You answered your own question :). Having a timestamp for comments allows us to easily fetch comments in a pre sorted order, as we can index those comments on timestamp per video.
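
      For concreteness, a sketch of what such a pre-sorted comments table could look like (my guess at the columns, not the exact schema from the video), created through the Python driver:

        # Hypothetical sketch: a Cassandra comments table clustered by timestamp,
        # so reads for a video come back newest-first with no sorting on the fly.
        from cassandra.cluster import Cluster

        session = Cluster(["127.0.0.1"]).connect()
        session.execute("""
            CREATE TABLE IF NOT EXISTS videos.video_comments (
                video_id   text,
                created_at timestamp,
                comment_id text,
                user_id    text,
                body       text,
                PRIMARY KEY ((video_id), created_at, comment_id)
            ) WITH CLUSTERING ORDER BY (created_at DESC, comment_id ASC)
        """)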

    • @rahulrachh3320
      @rahulrachh3320 3 months ago

      @@jordanhasnolife5163 Thank you :) I love this series and System Design 2.0. This got me thinking of starting my own series on System Design topics. Maybe one day for sure :)

    • @rahulrachh3320
      @rahulrachh3320 3 months ago +1

      @@jordanhasnolife5163 Thanks got it. This series and System Design 2.0 are gold. I might even start making videos on similar topics sometime sooner :)

    • @jordanhasnolife5163
      @jordanhasnolife5163  3 months ago +1

      @@rahulrachh3320 Just don't take too many of my viewers away from me it's all I've got ;)

    • @rahulrachh3320
      @rahulrachh3320 3 months ago

      @@jordanhasnolife5163 haha, I'll try not to take the viewers ;)

  • @davidabu3170
    @davidabu3170 3 months ago +1

    You forgot the userId in the table - it is quite important.

  • @imutkarshy
    @imutkarshy 6 months ago +8

    Your obsession with Flink 😅

    • @jordanhasnolife5163
      @jordanhasnolife5163  6 months ago +3

      They should be paying me
      Oh wait it's open source

    • @imutkarshy
      @imutkarshy 6 months ago

      @@jordanhasnolife5163 Wait till they open a company like Confluent from this.

    • @sauravkumarsharma6812
      @sauravkumarsharma6812 2 months ago +1

      @@jordanhasnolife5163😂

  • @zhonglin5985
    @zhonglin5985 2 months ago +1

    At th-cam.com/video/43bB7oSn190/w-d-xo.html, another queue is needed to stream the total chunk count to Flink. This looks a bit redundant to me. Why don't we just include the total chunk count as an extra field in the events that are sent to RabbitMQ?

    • @jordanhasnolife5163
      @jordanhasnolife5163  2 months ago

      Totally doable as well, I considered this approach too. I mainly assumed there'd be a lot of other metadata around and didn't wanna bloat the messages.

  • @ankitagarwal4022
    @ankitagarwal4022 2 months ago +1

    @jordanhasnolife5163 thank you for your video. love your content