Can you talk about managing the db2diag log file size (DIAGSIZE) in a future post? I have seen diag files grow to several GB, and they are difficult to open in an editor.
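For reference, DIAGSIZE is an instance-level (DBM CFG) parameter: set to a nonzero number of megabytes, it tells DB2 to write a set of rotating db2diag.N.log files capped at roughly that total size instead of one ever-growing db2diag.log. A minimal sketch, assuming the DB2 command line processor is available and a roughly 1 GB cap is wanted (the value and the need to recycle the instance should be verified against your DB2 level):

    # Enable rotating diagnostic logs capped at about 1 GB total (value is in MB).
    db2 update dbm cfg using DIAGSIZE 1024
    # The new setting takes effect once the instance is recycled.
    db2stop
    db2start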
Don Bryan, in hindsight, most old-school DBAs wouldn't even think of this. Someone established like me would have a script that runs nightly to rename db2diag.log to db2diag.date.log and keeps 30 days' worth of logs before pruning, so the live log never holds more than 24 hours' worth of entries. I know there are DB CFG parameters for this now as well, but I like the format I have and the auto-pruning. The script should be accessible in my GitHub account (mkrafick) if you need something similar. But yeah, I have seen the multi-gig log as well. All you need is one crazy dump and your file system is hosed.
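Not the script from that GitHub account, but a minimal sketch of the nightly rename-and-prune approach described above; the diagnostic directory and the 30-day window are assumptions, so adjust them to your instance:

    #!/bin/sh
    # Rough sketch of a nightly rotate-and-prune job for db2diag.log.
    # DIAGDIR is an assumption; check DIAGPATH in "db2 get dbm cfg" for the real location.
    DIAGDIR=$HOME/sqllib/db2dump
    STAMP=$(date +%Y%m%d)

    # Rename the live log; DB2 simply starts a new db2diag.log on its next write.
    # (Newer releases also offer "db2diag -A" to archive the log in one step.)
    mv "$DIAGDIR/db2diag.log" "$DIAGDIR/db2diag.$STAMP.log"

    # Keep 30 days of archived copies, prune anything older.
    find "$DIAGDIR" -name 'db2diag.*.log' -mtime +30 -exec rm -f {} \;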
I did not know about PD_GET_DIAG_HIST, thanks!
Could you please share the query that you are using?
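Not the query referred to above, but a generic illustration of reading diagnostic records through the SYSPROC.PD_GET_DIAG_HIST table function. The parameter order (facility, record type, impact, start time, end time), the NULL filters, and the column names are assumptions based on the documented interface and should be checked against your DB2 level:

    # List error- and severe-level diagnostic records from the last day.
    db2 "SELECT T.TIMESTAMP, T.LEVEL, T.IMPACT, SUBSTR(T.MSG, 1, 120) AS MSG
         FROM TABLE(SYSPROC.PD_GET_DIAG_HIST(
                  'MAIN',                          -- facility
                  CAST(NULL AS VARCHAR(8)),        -- record type (NULL = all)
                  CAST(NULL AS VARCHAR(32)),       -- impact (NULL = all)
                  CURRENT TIMESTAMP - 1 DAY,       -- start of time window
                  CAST(NULL AS TIMESTAMP)          -- end of window (NULL = now)
              )) AS T
         WHERE T.LEVEL IN ('E', 'S')
         ORDER BY T.TIMESTAMP DESC"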