**Advantages and Disadvantages of RMSE (Root Mean Square Error):**
1. **Advantages:**
- **Same Unit as the Target:** The square root brings the error back to the unit of the predicted variable, which makes RMSE easier to interpret and report than MSE.
- **Penalizes Large Errors:** Like MSE, RMSE squares the residuals before averaging, so large errors are weighted more heavily than small ones, which is useful when big mistakes are especially costly.
- **Gradient Descent:** RMSE is differentiable wherever the error is non-zero and has the same minimizer as MSE, so it works with gradient-based optimization.
2. **Disadvantages:**
- **Sensitivity to Outliers:** The residuals are squared before the square root is taken, so a few extreme errors can dominate the metric; the square root only rescales the final value, it does not reduce an outlier's influence.
- **Sensitivity to Large Errors:** The model is penalized heavily for a few significant mistakes, which may not be desirable in every application.
- **Gradient Near Zero Error:** By the chain rule, the gradient of RMSE is the gradient of MSE divided by 2·RMSE, which becomes ill-behaved as the error approaches zero; plain MSE is usually preferred as a training loss.
**Comparison with MSE (Mean Squared Error):**
- **Minimizer:** RMSE and MSE have the same minimizer, since the square root is a monotone transform; a model that minimizes one also minimizes the other.
- **Units:** RMSE is in the same unit as the target, while MSE is in squared units, which is harder to interpret.
- **Outliers:** Both are sensitive to outliers, because both square the residuals before averaging.
- **Gradient Descent:** Both work with gradient descent; MSE's gradient is simpler, while RMSE's gradient carries an extra factor of 1/(2·RMSE).
In simple terms, RMSE is just the square root of MSE: same optimum and same sensitivity to outliers, but expressed in the original unit of the target. Choose between them based on the specific characteristics of your data and the goals of your model.
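A minimal sketch of that relationship, assuming NumPy and made-up example values (not from the video):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.5, 10.0])   # actual values, e.g. in dollars
y_pred = np.array([2.5, 5.5, 7.0, 12.0])   # predictions, same unit

mse = np.mean((y_true - y_pred) ** 2)  # 1.1875, in squared units (dollars^2)
rmse = np.sqrt(mse)                    # ~1.09, back in the original unit

print(f"MSE:  {mse:.4f}")
print(f"RMSE: {rmse:.4f}")
```

Because the square root is monotone, whichever parameters minimize MSE also minimize RMSE; the two differ only in the scale on which the error is reported.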
ChatGPT used*
Absolutely right, bro, he even forgot to remove the ** 😂@@aryanthombre2971
this is what ChatGPT gave me
Sensitivity to Outliers:
RMSE is highly sensitive to outliers because it squares the residuals. Large errors (outliers) are amplified, which can dominate the RMSE value and give a misleading impression of the model’s overall performance.
Impact of Outliers:
A single outlier can drastically increase the RMSE, even if the majority of predictions are accurate. This makes RMSE less reliable for datasets with significant outliers unless steps like outlier removal or robust metrics are employed.
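A minimal sketch of that effect, with made-up residuals (nine small errors and one outlier):

```python
import numpy as np

residuals = np.full(10, 1.0)   # ten errors of 1 unit each
residuals[-1] = 30.0           # replace one with a large outlier

rmse = np.sqrt(np.mean(residuals ** 2))
mae = np.mean(np.abs(residuals))

print(f"RMSE: {rmse:.2f}")  # ~9.53: the one outlier dominates the metric
print(f"MAE:  {mae:.2f}")   # 3.90: far less affected by the outlier
```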
RMSE advantages are:
Advantages:
1. Differentiable (explanation: because it is the same as MSE, only it is sqrt(MSE); see the sketch after this list)
2. It has a local minimum and a global minimum (explanation: similar to point 1)
3. Robust to outliers (since it is the square root of a square)
4. It has the same unit (since it is the square root of a square)
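On point 1, RMSE's derivative is just MSE's derivative rescaled by the chain rule. A minimal SymPy sketch with toy numbers of my own (not from the comment):

```python
import sympy as sp

w = sp.Symbol('w')
x = [1, 2, 3]
y = [3, 5, 7]

mse = sum((y[i] - w * x[i]) ** 2 for i in range(3)) / 3
rmse = sp.sqrt(mse)

# Chain rule: d(RMSE)/dw = d(MSE)/dw / (2 * RMSE)
diff_identity = sp.diff(rmse, w) - sp.diff(mse, w) / (2 * rmse)
print(sp.simplify(diff_identity))      # 0, so the identity holds
print(sp.solve(sp.diff(mse, w), w))    # [17/7]: both losses share this minimizer
```

One caveat: the derivative of RMSE is undefined where the error is exactly zero, since it divides by 2·RMSE.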
I'm just saying
Disadvantages:
Does convergence take more time?
Sorry, but it is not robust to outliers, because the differences are already squared before the square root is taken.
Thank you so much for helping us understand these metrics, and for focusing on this channel too...
Please make videos on feature engineering and feature selection for these ML models, as they have become a very crucial part to learn.
Brother, thank you for this amazing series 🙏🙏🙏🙏
Advantages of RMSE (according to me, obviously): it would be robust to outliers, the curve is parabolic, it is of course differentiable, and it has the same unit.
Advantages of RMSE:
RMSE combines the advantages of both MSE and MAE. Like MSE, it penalizes larger errors more, but the square root brings it back to the same unit as the actual values, making it more interpretable.
RMSE is often preferred when you want to measure how spread out the errors are, especially when large errors are particularly undesirable.
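A minimal side-by-side sketch of the three metrics on made-up numbers:

```python
import numpy as np

y_true = np.array([10.0, 20.0, 30.0, 40.0])
y_pred = np.array([12.0, 18.0, 33.0, 50.0])   # the last prediction is far off

errors = y_true - y_pred
mae = np.mean(np.abs(errors))   # 4.25, original unit, treats all errors alike
mse = np.mean(errors ** 2)      # 29.25, squared unit, dominated by the big miss
rmse = np.sqrt(mse)             # ~5.41, original unit, still weights the big miss

print(mae, mse, rmse)
```

RMSE is always at least as large as MAE, and the gap widens as the errors become more spread out, which is why RMSE is a reasonable proxy for the spread of the errors.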
RMSE: when we have outliers in the data, we compute MSE and then take RMSE, which is the root of MSE, so the reported error value also shrinks because of the square root.
The unit will be the same as the target.
RMSE takes the root of the mean squared error. MSE can blow the error value up to a very high range, and applying RMSE brings it back to the original scale. Suppose the error value is 15; then the MSE is 225 (15 squared), and taking the RMSE gives 15 again. So the reported error stays small and on the original scale, which is why RMSE is used so often.
In RMSE the graph is differentiable, it has the same unit, and it is widely used in deep learning. Disadvantage: not robust to outliers.
nice
@1:47 it should be 1/2n in the cost function
exactly
Why are the cost function and mean squared error called the same thing, when the cost function uses 1/2m and the MSE uses 1/n, and m = n?
Thank you
Can't download the notes from GitHub, can anyone help please?
My question is: why do we use MSE instead of RMSE in linear regression?
Suppose we have outliers in the data. When we fit linear regression, it tries to find the best-fit line, and the risk is that it also takes the outlier points into account while fitting. If we then compute MSE, the errors are squared, so the outliers inflate the cost heavily, the model's reported accuracy drops, and the output value changes a lot.
RMSE: by taking the square root of those squared errors we bring the reported error back down to the original scale, which makes it easier to reason about the fit.
The unit will be the same because the squared error is square-rooted. I hope you got it 👍🏻
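A minimal sketch of how a single outlier pulls the least-squares line (synthetic data of my own, with np.polyfit as the fitter; note that minimizing RMSE gives the same line as minimizing MSE, since the square root is monotone, so RMSE does not actually protect the fit from outliers):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(20, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=20)   # true line: y = 2x + 1

y_out = y.copy()
y_out[10] += 40.0        # inject one large outlier

slope, intercept = np.polyfit(x, y, 1)          # close to 2 and 1
slope_o, intercept_o = np.polyfit(x, y_out, 1)  # pulled toward the outlier

print(slope, intercept)
print(slope_o, intercept_o)
```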
Sir, I have bought your Data Science Hindi course, but it is not working; not a single video has shown up. What could the problem be?
Please try contacting support... they will guide you.
How can I buy the data science course?
Hello sir, for the cost function we used 1/2n, but now for MSE we are using 1/n. Is that okay?
That 1/2 is just there to cancel the 2 that appears in the numerator after differentiating x^2. You can ignore it completely if you want. For a more detailed explanation, check video no. 2 in this playlist.
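A minimal SymPy sketch of the cancellation, with toy numbers of my own:

```python
import sympy as sp

w = sp.Symbol('w')
x = [1, 2, 3, 4]
y = [3, 5, 7, 9]
n = 4

mse = sum((y[i] - w * x[i]) ** 2 for i in range(n)) / n
half_mse = mse / 2                    # the 1/2n version of the cost function

# The 2 from the power rule is cancelled by the 1/2, so the
# gradient of half_mse carries no stray factor of 2:
print(sp.expand(sp.diff(half_mse, w)))
print(sp.solve(sp.diff(mse, w), w))       # [7/3]
print(sp.solve(sp.diff(half_mse, w), w))  # [7/3]: same minimizer either way
```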
Sir, the topics you left out create a real problem.