❤️ Blog post with code for this video:
medium.com/@AmyGrabNGoInfo/autoencoder-for-anomaly-detection-using-tensorflow-keras-7fdfa9f3ad99
📒 Code Notebook: mailchi.mp/0533d92d0b6e/p8t1t9jgqc
🙏 Give me a tip to show your appreciation: www.paypal.com/donate/?hosted_button_id=4PZAFYA8GU8JW
✅ Join Medium Membership: If you are not a Medium member and would like to support me to keep creating free content (😄 Buy me a cup of coffee ☕), join Medium membership through this link: medium.com/@AmyGrabNGoInfo/membership
You will get full access to posts on Medium for $5 per month, and I will receive a portion of it. Thank you for your support!
🩺 Imbalanced Model & Anomaly Detection Playlist th-cam.com/play/PLVppujud2yJo0qnXjWVAa8h7fxbFJHtfJ.html
🛎️ SUBSCRIBE bit.ly/3keifBY
🔥 Check out more machine learning tutorials on my website!
grabngoinfo.com/tutorials/
The code link doesn't work.
Is this an IDS?
This tutorial is wrong. A recall and precision of 0.01 are terrible results. The model should be trained only on normal data, not anomalous data.
The training dataset did only include the normal data in the tutorial. The code is in Step 5:
# Keep only the normal data for the training dataset
X_train_normal = X_train[np.where(y_train == 0)]
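A minimal sketch of what that Step 5 line does, using toy data (the arrays here are made up for illustration, not from the tutorial): boolean indexing with numpy drops every row whose label is 1 (anomaly), so the autoencoder is fit on normal samples only.

```python
import numpy as np

# Toy data: 6 samples with labels, where 0 = normal and 1 = anomaly.
X_train = np.array([[0.1], [0.2], [5.0], [0.3], [6.0], [0.4]])
y_train = np.array([0, 0, 1, 0, 1, 0])

# Keep only the normal rows for the training dataset, as in Step 5.
X_train_normal = X_train[np.where(y_train == 0)]

print(X_train_normal.shape)  # (4, 1) -- the two anomalous rows are gone
```

The anomalous rows are excluded before training, so the autoencoder learns to reconstruct only normal patterns and produces high reconstruction error on anomalies at test time.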
It is important to consider the context when interpreting the metrics. First, the tutorial used a synthetic dataset with a high degree of overlap between the normal and outlier data (class_sep=0.5), which makes the two categories hard to distinguish. Second, the tutorial used unsupervised models, so no labels were available to the model; supervised models generally perform better than unsupervised ones. Third, the choice of threshold can significantly affect the metrics, and selecting a different threshold can easily improve the recall.
@grabngoinfo My apologies, I missed that line in the code.