Nice knowledge sharing, Siva.
Excellent! Very neat with the try catch.
Great, sir 🔥✌️ One of the most important topics in Mule 4.
For Each also iterates one record at a time, so what is the difference between For Each and Batch other than single-threaded vs multi-threaded execution? @sivathankamanee-channel
Hi Siva, I have worked on the example. I am unable to get 2 success files and 1 failure file; I am only getting 1 success file and 1 failure file. Please tell me where I made a mistake.
Hi Raju - thanks for your appreciation. Did you check that the batch aggregator size is set to 3?
yes
In the success scenario I am able to get 2 success files.
Hi Siva - I'm also facing the same issue. Mule version is 4.2.2.
Same issue for me as well on 4.2.2
It worked well on 4.2.0
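For anyone reproducing this, the setup under discussion is roughly the following - a minimal sketch, assuming the File Write sits inside the Batch Aggregator (the config name and file-naming expression are illustrative):

```xml
<batch:step name="writeSuccess">
    <batch:aggregator size="3">
        <!-- Runs once per group of 3 aggregated records, so 6 successful
             records should produce 2 success files. -->
        <file:write config-ref="File_Config"
                    path="#['success_' ++ now() as String {format: 'HHmmssSSS'} ++ '.json']"/>
    </batch:aggregator>
</batch:step>
```

If only one success file appears for 6 successful records, the aggregator size (or the input size) is the first thing to re-check.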
Good one, I do appreciate your contribution
Thanks Yogandhar !!
Sir, suppose that instead of a file we are writing into a database: if the database goes down while writing the records, what happens, and how should we handle the data?
Either a retry mechanism or publishing to some queue for later processing. But if the DB is consistently down for all records, then using maxFailedRecords, handling logic can be added accordingly (depending on the requirement).
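A rough sketch of that idea, assuming a Database insert inside the batch step (config names, queue name, and SQL are all illustrative): retry transient outages with Until Successful, and if the DB stays down, park the record on a VM queue for later processing:

```xml
<batch:step name="writeToDb">
    <try>
        <!-- Retry the insert a few times to ride out a transient DB outage. -->
        <until-successful maxRetries="3" millisBetweenRetries="5000">
            <db:insert config-ref="Database_Config">
                <db:sql><![CDATA[INSERT INTO records (value) VALUES (:value)]]></db:sql>
                <db:input-parameters><![CDATA[#[{ value: payload }]]]></db:input-parameters>
            </db:insert>
        </until-successful>
        <error-handler>
            <!-- Still failing after the retries: publish the record to a VM
                 queue for later processing; the record is then flagged FAILED. -->
            <on-error-propagate type="ANY">
                <vm:publish config-ref="VM_Config" queueName="dbRetryQueue"/>
            </on-error-propagate>
        </error-handler>
    </try>
</batch:step>
```

With maxFailedRecords="-1" on the batch job, the remaining records keep flowing even if every insert fails.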
This is working fine when I pass all integer values as one array, but when I pass a string value in the middle of the array, it stops execution after that string value.
I faced the same issue. I used another Batch Step to capture only the errors using the accept policy ONLY_FAILURES and published the errors into the VM queue. Then it worked fine.
Check maxFailedRecords once; the option above is also good.
["2", "4","abc","8","12","14"] with this input ...abc is also coming in success file ..as well as in error file. ,I am trying to exclude this invalid record from success file...any clue
That's a nice catch. I had the same doubt. You can check my reply on the comment posted by @saitejam1614.
How about processing the failed records in a separate Batch Step that processes only failed records?
That's the idea. Once you have the specific record in the error/failed folder or the VM queue, you can run any process you require. If needed, you can introduce an aggregator and process them collectively. In my view, it's not a great design to postpone failures and process them collectively.
@sivathankamanee-channel Actually, you don't have to provide error handling in Batch Step 1. Set Max Failed Records to -1 for the Batch Job, then introduce Batch Step 2 with the ONLY_FAILURES accept policy.
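A minimal sketch of that layout (job, step, and queue names are illustrative; the number coercion stands in for the real per-record processing):

```xml
<batch:job jobName="recordsJob" maxFailedRecords="-1">
    <batch:process-records>
        <batch:step name="processRecords">
            <!-- No try/error-handler here: a record that fails (e.g. "abc")
                 is simply flagged FAILED and the job moves on. -->
            <set-payload value="#[payload as Number]"/>
        </batch:step>
        <batch:step name="handleFailures" acceptPolicy="ONLY_FAILURES">
            <!-- This step sees only the records that failed earlier. -->
            <vm:publish config-ref="VM_Config" queueName="errorQueue"/>
        </batch:step>
    </batch:process-records>
</batch:job>
```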
@sivathankamanee-channel
Thank you so much for preparing and presenting such a wide range of topics in Mule. This is no easy task as a working professional. Hats off to you.
Batch internally flags the failed records and makes them available in subsequent batch steps. We can access the failed records using the Batch Filter "Accept Policy" attribute. By introducing VM queues we are not only adding more components/complexity, but also adding performance overhead. Also, we are not postponing failure handling: Batch is asynchronous and multi-threaded; it does not wait for all records to be processed in one step, but keeps moving records to the next steps. The whole point of batch processing is to perform bulk (collective) processing, which includes handling any failures.
Flows should be simple by design while leveraging all the features that Mule provides.
@hellothere848 Yes, it can be done this way as well.
Excellent video!
Thanks, Timm !! :)
Hi Sir, for example, if a record fails inside the Batch Aggregator, the Try scope won't handle it, right? And there are accept-policy options to select in a Batch Step, like NO_FAILURES and ONLY_FAILURES. How can we use those in a better way? Please reply, as this is one of my requirements.
Yes, set maxFailedRecords to -1, and a new Batch Step that accepts the failed records can be used for error handling.
I am using For Each and I see that it creates a separate error file for each error record. What am I doing wrong?
Because For Each doesn't use an aggregator (it processes 1 record at a time), while the Batch Aggregator aggregates the records and then creates 1 file at once (for the whole group) - the File Write sits inside the Batch Aggregator in the above use case.
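In code terms, the contrast looks roughly like this (a sketch; config names and paths are illustrative):

```xml
<!-- For Each: the File Write runs once per iteration, so every record
     (including every error record) gets its own file. -->
<foreach>
    <file:write config-ref="File_Config"
                path="#['record_' ++ (vars.counter as String) ++ '.json']"/>
</foreach>

<!-- Batch Aggregator: the File Write runs once per aggregated group,
     so one file covers the whole group of records. -->
<batch:step name="writeStep">
    <batch:aggregator size="3">
        <file:write config-ref="File_Config"
                    path="#['group_' ++ now() as String {format: 'HHmmssSSS'} ++ '.json']"/>
    </batch:aggregator>
</batch:step>
```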
After the On Error Continue, the failed record is passed into the batch aggregator as well.
So all 6 records end up in the success file, and 1 record is written to the failed file.
Any idea how to fix this?
Hi Saiteja - It depends on the use case. The error records are critical for the transaction, and the aggregation is for historical purposes. It looks perfectly OK. What use case are you trying to simulate?
@sivathankamanee-channel
Just wanted to inform you that error records are being aggregated in the batch aggregator as well.
(In your aggregator, you see 3 records the first time and 3 the second time, but I see 3 the first and second time.)
Don't know why this is happening, though.
But using a filter function I removed the error records and continued processing the successful ones.
Maybe a change in version caused this. Anyway, thanks for the quick reply.
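For reference, a filter along these lines would drop the stray error records before the write - a sketch assuming the valid records are numeric strings, as in the sample input ["2", "4", "abc", ...], placed just before the File Write:

```xml
<ee:transform doc:name="Drop non-numeric records">
    <ee:message>
        <ee:set-payload><![CDATA[%dw 2.0
import isNumeric from dw::core::Strings
output application/json
---
// Keep only the purely numeric string records, e.g. drop "abc".
payload filter ((record) -> (record is String) and isNumeric(record))]]></ee:set-payload>
    </ee:message>
</ee:transform>
```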
@saitejam1614 Hi, I was about to comment this. Nice catch!
This is happening because Siva used On Error Continue. By default (even if you don't handle the error in a Batch Step), if a record fails, it is not considered for aggregation. But when you use On Error Continue, the Try scope passes the record on as a success to the next component, which is the aggregator in this case, and hence it gets aggregated. My guess is that whatever payload is returned after the Publish is what gets aggregated with the other records and ends up in one of the success files. Siva showed only one success file when he demonstrated the On Error Continue example, not both. I'm sure that returned payload was lying in the other success file after being aggregated.
If you truly want isolation of the error records, I find On Error Propagate the best candidate here, because the errors will not disturb the rest of the batch job anyway. Error records will be published to the VM queue and not aggregated (and hence not written to a success file).
It's also better to set Max Failed Records to -1 in the Batch Job configuration so that the Batch Job doesn't stop, in case you have more than one batch step.
@sivathankamanee-channel Please correct me if my understanding is wrong.
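Putting that together, a minimal sketch of the On Error Propagate variant (names are illustrative; the number coercion stands in for the real processing that fails on "abc"):

```xml
<batch:job jobName="recordsJob" maxFailedRecords="-1">
    <batch:process-records>
        <batch:step name="processAndWrite">
            <try>
                <set-payload value="#[payload as Number]"/>
                <error-handler>
                    <!-- Publish the bad record, then re-throw: the record is
                         flagged FAILED and is NOT picked up by the aggregator. -->
                    <on-error-propagate type="ANY">
                        <vm:publish config-ref="VM_Config" queueName="errorQueue"/>
                    </on-error-propagate>
                </error-handler>
            </try>
            <batch:aggregator size="3">
                <!-- Only successful records reach this write. -->
                <file:write config-ref="File_Config"
                            path="#['success_' ++ now() as String {format: 'HHmmssSSS'} ++ '.json']"/>
            </batch:aggregator>
        </batch:step>
    </batch:process-records>
</batch:job>
```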
Excellent, and it's very useful.
Hello sir, please upload the sessions in order.
Great work sir
Very Informative Video!
Another great video. Error handling in a batch job is very important.
Thank you !!
Thanks for the video.
Can you make a video on Jenkins CI/CD for on-prem Mule standalone servers?
I saw a video on cloud auto-deployments but not on on-prem.