I got interviewed today and DLT was asked; now my concept is absolutely clear. Thanks for this amazing stuff 😍😍
Exceptional... I was never aware of DLT... It really made my day, with the feeling that I learned something new today. I always keep watching many of your posted videos. Thanks for your efforts in sharing your knowledge.
Glad that it helps you. Keep learning 😃
Another real-time video from you, sir. Thanks so much for your hard work.
Great work, exactly what I have been looking for. Thanks a lot for the hard work in bringing this tutorial.
This guy literally explains everything in a very simple way 👍
Thank you so much for your great work. I completed this series!
Thanks a lot for these amazing tutorials! I learned a lot from your videos.
Appreciate your efforts, Basant. God bless you ❤😊 Every week I'm waiting for new updates…
Another scenario: why not wait for the external service to wake up so that we can resume processing? This way we can avoid one drawback of the earlier approach, which is as follows:
For one entity we got an error and pushed the event to the DLT, but then we got another message for the same entity, and that one was processed successfully. Now, when the DLT messages are reprocessed, they will update the entity with the older data, which will create data inconsistency.
Waiting for the service to wake up ensures two things:
1. It safeguards the chronology of the events.
2. There is no unnecessary consumption, retry, and publishing to the DLT.
This is my observation; I would like to hear from you on this. Thank you, sir.
Good observation, and I agree with you 🙂
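For anyone curious what that "wait for the service" approach can look like, here is a minimal sketch using Spring Kafka's container pause/resume. The listener id `order-listener`, the `DownstreamHealthClient`, and the polling interval are all hypothetical, and `@EnableScheduling` is assumed on a configuration class:

```java
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical health probe for the external service the consumer depends on.
interface DownstreamHealthClient {
    boolean isUp();
}

@Component
public class ConsumerPauser {

    private final KafkaListenerEndpointRegistry registry;
    private final DownstreamHealthClient healthClient;

    public ConsumerPauser(KafkaListenerEndpointRegistry registry,
                          DownstreamHealthClient healthClient) {
        this.registry = registry;
        this.healthClient = healthClient;
    }

    // Runs every 10s (requires @EnableScheduling): pause consumption while the
    // downstream service is unreachable and resume once it is back, so events
    // are processed in order and nothing is retried or dead-lettered unnecessarily.
    @Scheduled(fixedDelay = 10_000)
    public void pauseOrResume() {
        MessageListenerContainer container =
                registry.getListenerContainer("order-listener"); // id from @KafkaListener(id = ...)
        if (container == null) {
            return;
        }
        if (healthClient.isUp()) {
            if (container.isPauseRequested()) {
                container.resume();
            }
        } else if (!container.isPauseRequested()) {
            container.pause();
        }
    }
}
```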
Please continue the interview series. I've been waiting for so long. @Javatechie
I really appreciate your interest and I will continue, buddy. It's just that I need enough time for the presentation and the pieces of code, so please help me to help you out.
Hi Satya, the data inconsistency scenario you describe arises when consumer-related resources are unavailable. But I believe DLT error topics are usually helpful to investigate/analyse the root cause of failed messages (NPE, ArrayIndexOutOfBoundsException, etc.), not for reprocessing the DLT messages again.
Thanks a lot for the good work! As usual, this video is informative and practical.
Excellent content... As always, thanks a lot, sir 👍🏻
Thank you, sir, for your clear explanation.
Hi brother, nice video! The way you explained everything is exceptional. Keep up the great work! However, I encountered an issue where the message wouldn't go to the DLQ until I rethrew the exception in the catch block; the existing code swallowed the error, so dead-letter publishing never triggered. I'm sharing this in case it helps someone with a similar problem. Also, please consider using docker-compose for Kafka in future videos.
Code I changed: instead of "e.printStackTrace()" in the catch block, I used "throw e;".
Hello, please make a video on Spring Boot hexagonal architecture. A lot of companies are using it for modern development; I've struggled a lot and still don't understand the entire structure.
Okay, sure, I will do that.
Thank you, bro, for your videos providing good knowledge to us. I have some questions that were asked in a recent interview: what are locks in Spring, where have you used the singleton pattern in your project, and what is idempotency? I hope you will provide answers to these questions.
All your doubts are already answered in the Q&A series video.
Great work sir. Thanks again
Thank you, sir, for your clear explanation. I have one question: why are we creating multiple retry topics here when we already have the DLT topic to track failed messages?
Can't we reuse the same topic for retries?
Yes, we can override this behaviour, but you need to check the configuration for it; see the sketch below.
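For reference, a minimal sketch of that override using Spring Kafka's `@RetryableTopic` (topic name, group, attempts, and delay are hypothetical): when every retry attempt uses the same fixed delay, the framework can reuse a single retry topic instead of creating one per attempt.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.kafka.retrytopic.SameIntervalTopicReuseStrategy;
import org.springframework.retry.annotation.Backoff;
import org.springframework.stereotype.Service;

@Service
public class RetryConsumer {

    // With a fixed back-off (same interval for every attempt), Spring Kafka can
    // reuse one "-retry" topic for all attempts instead of one topic per attempt.
    // On spring-kafka 3.x the attribute is sameIntervalTopicReuseStrategy; older
    // 2.x versions expose the equivalent fixedDelayTopicStrategy = FixedDelayStrategy.SINGLE_TOPIC.
    @RetryableTopic(
            attempts = "4",
            backoff = @Backoff(delay = 2000),
            sameIntervalTopicReuseStrategy = SameIntervalTopicReuseStrategy.SINGLE_TOPIC)
    @KafkaListener(topics = "orders", groupId = "order-group")
    public void consume(String message) {
        // processing that may throw and trigger the retry/DLT flow
    }
}
```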
😢😅😮😂
Victory to you, sir ji! 🙏
Hello, thank you for your clear explanation. When I tried the Retry and DLT configurations mentioned here on my local machine with Spring Boot 3.3.2 + Kafka (built-in version 3.7.1), the expected output is not produced. Can you please help me with that?
Naga, can you please connect over javatechie4u@gmail.com?
Sure, I will share the piece of code in the mail.
@basant, please make a video on what a partition key is and how the producer routes data to a partition.
I already explained this; please check out my Kafka playlist.
What do we do with the data in the DLT? Do we address it manually, or are there standard solutions used in real production projects?
You can fix and republish
@Javatechie Thanks for the quick response.
Thanks a lot, sir, from Bangalore ❤🙏
Great work❤
Thanks a lot for the good work!
Hi Basanth, if possible can you please make a video on message delivery semantics (exactly once, at most once, at least once) and how to avoid duplicate messages on the consumer side when the application is running on 2 to 3 pods? Thank you!
It's a good suggestion, thanks. I will plan it.
What happens to the DLT topic when an exception record is written to it? Does the programmer need to manually retry from this topic, or is it taken care of by Kafka?
@javatechie I have the same question.
No, you need to create another consumer/publisher to read records from the DLT and process them through the existing flow; a sketch follows. But before that, you need to identify why each record failed the first time: if it's stale data, you should discard those failed events and reprocess only the others.
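A minimal sketch of that idea (the topic names `orders` and `orders-DLT` and the staleness check are hypothetical): a dedicated listener reads from the DLT, discards stale events, and republishes the rest to the original topic so they pass through the existing flow again.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class DltReprocessor {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public DltReprocessor(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Kafka does not reprocess DLT records on its own; a dedicated consumer has
    // to read them, decide which are still valid, and republish those to the
    // main topic so they go through the normal processing path again.
    @KafkaListener(topics = "orders-DLT", groupId = "dlt-reprocessor")
    public void reprocess(String message) {
        if (isStale(message)) {
            // Stale event: a newer message for the same entity was already
            // processed, so replaying this one would overwrite current state
            // with old data. Discard it.
            return;
        }
        kafkaTemplate.send("orders", message); // back through the existing flow
    }

    private boolean isStale(String message) {
        return false; // hypothetical staleness check, e.g. compare event timestamps/versions
    }
}
```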
@Javatechie How do we process messages in the DLT?
Do we need another consumer to process messages from the DLT?
How does Kafka know whether a message has been processed or not?
Great bro, awesome!
Once a message is pushed to the DLT topic, how can we reprocess those failed events if needed?
You need to pull and republish them (see the DLT reprocessing sketch above).
Please explain duplicate-message handling for the Kafka consumer.
As we know, DB throughput is comparatively low; in such a scenario, how can we balance consumer throughput with what the DB can handle?
Hey guys, I need to implement retries when producing to Kafka, plus the related tests. Do you have any references for accomplishing this?
I don't have a video on it, but the solution is straightforward: you can use Spring Retry directly in your producer code.
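A minimal sketch of that approach, assuming the spring-retry dependency and `@EnableRetry` on a configuration class (the topic name and retry settings are hypothetical):

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

@Service
public class RetryingProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public RetryingProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Retries the send up to 3 times with exponential back-off before giving up.
    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 1000, multiplier = 2))
    public void send(String message) {
        // On spring-kafka 3.x send() returns a CompletableFuture; join() surfaces
        // broker failures as exceptions so @Retryable can react to them.
        kafkaTemplate.send("orders", message).join();
    }

    // Invoked once all retry attempts are exhausted.
    @Recover
    public void recover(Exception e, String message) {
        // e.g. log and persist the message somewhere for manual replay
    }
}
```

Note that the Kafka producer also has its own built-in `retries` setting for transport-level failures; Spring Retry on top of it is useful when you want application-level back-off and a recovery hook.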
@Javatechie Thank you for answering. I made multiple attempts, but I always struggle with the test classes. In the end I stuck with the producer retries configuration suggested by Kafka, but still had no luck with the tests.
Victory to you, sir ji!
Nice, but why are you using the producer and consumer separately? If we use Kafka Streams, it will automatically handle both scenarios.
I'm not getting you, buddy. Could you please add some more inputs?
Do you have an explanation for publisher retries?
Hey, thanks for the video! So we have a topic called, for example, "myTopic", and a DLT set up like this: "myTopic.DLT". It is my understanding that Kafka will just add "-DLT" to the end of your topic name. Is that correct? And if so, is there a way to make it add ".DLT" instead? It was another team that named them, so we have to work around that.
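A note on the suffix: it is Spring Kafka, not the Kafka broker, that appends it. With `@RetryableTopic` the default DLT suffix is "-dlt", while the `DeadLetterPublishingRecoverer` defaults to ".DLT"; both are configurable. A minimal sketch using `@RetryableTopic`'s `dltTopicSuffix` attribute (group id and retry settings are hypothetical):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.retry.annotation.Backoff;
import org.springframework.stereotype.Service;

@Service
public class MyTopicConsumer {

    // dltTopicSuffix overrides the default "-dlt" suffix, so failed records
    // land on "myTopic.DLT" to match the naming chosen by the other team.
    @RetryableTopic(attempts = "3", backoff = @Backoff(delay = 1000),
                    dltTopicSuffix = ".DLT")
    @KafkaListener(topics = "myTopic", groupId = "my-group")
    public void consume(String message) {
        // processing that may throw
    }
}
```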
Great thanks
Is the implementation and configuration the same for the Kafka producer?
No, for the producer it's different.
@Javatechie Can you please suggest/advise how to do it for the producer part?
What's the difference between part 1 and part 2 of the Apache Kafka course? Can someone please let me know?
Thank you
Hi sir, could you please share the CSV file as well in the GitHub link? Thank you in advance.
Sir, if my Kafka broker is down when I push a message, and the retry mechanism keeps retrying for up to 15 minutes, will it work if Kafka comes back up within those 15 minutes?
Yes, the consumer picks it up on its first attempt after reconnecting, because in the consumer properties we have set the offset reset to earliest.
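For context, a minimal sketch of that consumer property (bootstrap server and group id are hypothetical): when the group has no committed offset, "earliest" makes the consumer start from the beginning of the topic, so records produced while it was down are not skipped.

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class ConsumerProps {

    static Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-group");
        // With no committed offset, start from the earliest record instead of
        // only new ones, so messages produced during downtime are not skipped.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }
}
```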
Is there any retry in the producer like in the consumer?
@tejastipre9787 Hello, yes, we can implement Spring Retry on the producer side as well (see the producer sketch above).
Thank you, I did.
But now my problem is: I push a message from the producer and hold execution with a debug breakpoint, then shut down Kafka and release the breakpoint. The producer keeps trying to push the message, and when I start Kafka again the message is produced, but the consumer does not receive these messages and an error appears in the console.
great
Please add this to a playlist.
It is there in the Kafka playlist.
Why don't you use KRaft?
Victory to you, sir ji! 🙏