Question: I hope this will not be required once we integrate the SHC with the indexer cluster, as explained in your future videos.
Yes correct.
Perfect. Please make a tutorial on syslog-ng with a universal forwarder, a heavy forwarder, or HEC.
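In the meantime, a minimal sketch of the HEC path: sending a test event to the HTTP Event Collector with curl, assuming a placeholder host, the default HEC port 8088, and a placeholder token.
  curl -k "https://<your-splunk-host>:8088/services/collector/event" \
    -H "Authorization: Splunk <your-hec-token>" \
    -d '{"event": "test message from syslog-ng", "sourcetype": "syslog"}'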
Hi sir, could it be that the peers are not accepting the URI with the https scheme because SSL is not configured on them to allow https URIs?
Good One, Thank you.
1. How to collect network logs using heavy forwarders?
2. What are the steps to configure a syslog server using a heavy forwarder, and how do we forward the network logs?
Please have a look at the link below:
answers.splunk.com/answers/252547/how-to-configure-logging-from-network-devices-fire.html
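As a rough sketch of that heavy forwarder setup (the index name network_logs, the group name my_indexers, and the indexer address 10.0.0.10:9997 are placeholders): open a syslog input on the heavy forwarder in inputs.conf and forward the parsed events to your indexers via outputs.conf.
  # inputs.conf on the heavy forwarder: listen for syslog on UDP 514
  [udp://514]
  sourcetype = syslog
  index = network_logs
  connection_host = ip

  # outputs.conf on the heavy forwarder: send events to the indexer tier
  [tcpout]
  defaultGroup = my_indexers

  [tcpout:my_indexers]
  server = 10.0.0.10:9997
The indexer side also needs a receiving port enabled (9997 by default) for this to work.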
So helpful, thank you!
Good video
Hello sir, I get the following error when I add the search peers:
[Encountered the following error while trying to save: Error while sending public key to search peer: Connect Timeout] => on the web UI
[Error while sending public key to search peer: Connect Timeout] => on the command line
Which step did I get wrong?
Thanks~
Can you check the firewall settings? It looks like the indexer is not reachable.
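A quick way to confirm that, assuming the default management port 8089 and a placeholder indexer address, is to test the port from the search head (in AWS the security group also needs to allow it):
  # run on the search head; replace <indexer-ip> with the peer's address
  nc -vz <indexer-ip> 8089
  # or, if nc is not available
  telnet <indexer-ip> 8089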
I am using AWS. When I created the distributed peers, search peer ip-172-31-34-22.us-east-2.compute.internal showed the following message: Now skipping indexing of internal audit events, because the downstream queue is not accepting data. Will keep dropping events until data flow resumes. Review system health: ensure downstream indexing and/or forwarding are operating correctly.
Please help me. How have you created three instances acting as indexers and one instance acting as a search head?
You can refer to the video below for Splunk installation on a GCP Ubuntu instance.
th-cam.com/video/dt4gR5AcMo0/w-d-xo.html
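Each of those instances is just a standard Splunk Enterprise install repeated per box; a rough sketch on Linux, assuming you have already downloaded the Splunk Enterprise .tgz (the file name below is a placeholder):
  # extract the package to /opt and start Splunk
  sudo tar xvzf splunk-enterprise-linux-x86_64.tgz -C /opt
  sudo /opt/splunk/bin/splunk start --accept-license
  # optional: start Splunk automatically at boot
  sudo /opt/splunk/bin/splunk enable boot-start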
@@splunk_ml Thank you very much for your kind reply.
Any strong reason for using distributed search?
Bro, can you do a video on correlation rules?!
Can we have just one indexer and one search head?
Of course you can. That is the basic form of distributed search.
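With one indexer and one search head, you just register that single indexer as a search peer on the search head; a sketch with placeholder host and credentials:
  # run on the search head; 8089 is the default management port
  /opt/splunk/bin/splunk add search-server https://<indexer-ip>:8089 \
      -auth admin:<searchhead-admin-password> \
      -remoteUsername admin -remotePassword <indexer-admin-password>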
@@splunk_ml Thanks, buddy. Your videos are very nice, and you are very patient too. Keep it up, friend.
If the cluster master fails, how do we troubleshoot?
Please have a look at the link below:
docs.splunk.com/Documentation/Splunk/7.3.0/Indexer/Whathappenswhenamasternodegoesdown
HTTPS will work when SSL is on.
It's not related to SSL... even http didn't work. Give it a try.
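For reference, whether splunkd's management port (8089) answers over https is governed by the sslConfig stanza in server.conf on the peer, and it is on by default; a minimal snippet:
  # server.conf on the search peer
  [sslConfig]
  enableSplunkdSSL = true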