MEDIOCRE_GUY
84 videos · 23,000 views
Japan
Joined 28 … 2014
The purpose of this channel is to make content related mainly to data science.
… I had an opportunity to pursue a Master's degree in Japan. Currently, I am pursuing a PhD degree.
I studied electrical and electronic engineering (EEE) during my undergraduate education.
In my opinion, data science (especially deep learning) is a very hard thing to understand. The mathematics involved with data science is extremely complex. But data science and artificial intelligence (AI) will rule the world in the years to come.
So, we have to try our best to learn as much as possible.
I will try to share my content … I am no expert, but I will try my best to make the materials as easy as possible.
I don't have a lot of expertise when it comes to theory (because I am average in mathematics); therefore, I will mostly make coding videos.
HistGradientBoostingClassifier using Scikit-Learn
HistGradientBoostingClassifier implements gradient boosting using histogram-based techniques. It aggregates feature values into discrete bins (histograms) and processes these bins instead of individual samples. This algorithm is faster and more memory-efficient for large datasets. It can handle datasets with millions of samples due to its binning strategy.
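As a rough illustration of the API (a minimal sketch on synthetic stand-in data, not the notebook from the video, which uses the letter_recognition dataset):

from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data.
X, y = make_classification(n_samples=10_000, n_features=16, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_bins caps how many discrete bins each feature is bucketed into.
clf = HistGradientBoostingClassifier(max_iter=100, max_bins=255, random_state=0)
clf.fit(X_train, y_train)
print(f'Test accuracy: {clf.score(X_test, y_test):.3f}')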
GitHub address: github.com/randomaccess2023/MG2023/tree/main/Video%2084
Important timestamps:
01:19 - Import required libraries
03:19 - Load letter_recognition dataset
06:10 - Perform preprocessing
08:36 - Separate features and classes
09:12 - Split the dataset
10:34 - Apply HistGradientBoostingClassifier
13:10 - Plot confusion_matrix
20:16 - Print classification_report
#machinelearning #histgradientboostingclassifier #supervisedlearning #supervisedclassification #datascience #python #pythonprogramming #jupyternotebook #letterrecognitiondataset
Views: 27
Videos
Calculate Inception Score (IS) using PyTorch
49 views · several months ago
In this video, I tried to explain how Inception Score (IS) can be calculated using PyTorch. Inception Score is a metric that is often used for evaluating the quality of synthetic images produced by generative models. It estimates the quality of a collection of synthetic images based on how well the pretrained InceptionV3 model classifies them as one of 1000 known objects. Inception...
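A minimal sketch of the IS formula itself, assuming probs already holds softmax outputs of a pretrained InceptionV3 over a batch of synthetic images (the random tensor below is only a stand-in for real model outputs):

import torch

def inception_score(probs, eps=1e-12):
    # probs: (N, 1000) softmax outputs; IS = exp(E_x[KL(p(y|x) || p(y))]).
    p_y = probs.mean(dim=0, keepdim=True)  # marginal class distribution p(y)
    kl = probs * (torch.log(probs + eps) - torch.log(p_y + eps))
    return torch.exp(kl.sum(dim=1).mean()).item()

probs = torch.softmax(torch.randn(64, 1000), dim=1)  # stand-in for InceptionV3 outputs
print(inception_score(probs))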
RandomizedSearchCV using Scikit-Learn
53 views · several months ago
RandomizedSearchCV is a hyperparameter tuning technique that comes with the Scikit-Learn library. It explores a predefined search space of hyperparameters and randomly selects a few combinations to evaluate model performance. Unlike GridSearchCV, which systematically examines all possible combinations, RandomizedSearchCV selects a fixed number of combinations randomly. If the hyperparameter ...
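A minimal sketch of the idea; the random forest, iris data, and search ranges are stand-ins, not the video's setup:

from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)
param_dist = {'n_estimators': randint(50, 300), 'max_depth': randint(2, 10)}

# n_iter fixes how many random combinations are evaluated.
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_dist, n_iter=10, cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)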
GridSearchCV using Scikit-Learn
114 views · 2 months ago
GridSearchCV is a function that comes with the Scikit-Learn library; it is a procedure for tuning hyperparameters in machine learning models. The performance of a machine learning model depends significantly on the selection of hyperparameters. GridSearchCV loops through a predefined set of hyperparameters and selects the optimal values from them after exhaustively considering all parameter combin...
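For illustration, a hypothetical grid over an SVC (6 combinations, each evaluated with 5-fold cross-validation):

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}  # 3 x 2 = 6 combinations

search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)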
K-fold cross validation using Scikit-Learn
114 views · 2 months ago
K-fold cross validation is a technique used for evaluating the performance of machine learning models. It uses different portions of the dataset as train and test sets in multiple iterations and helps a model generalize well to unseen data. Scikit-Learn's train_test_split method uses a fixed set of samples as the train set and the rest of the samples outside the train set as the test set, wh...
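A minimal K-fold sketch (iris and logistic regression are stand-ins); every sample lands in a test fold exactly once:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
cv = KFold(n_splits=5, shuffle=True, random_state=0)  # 5 train/test rotations
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(scores, scores.mean())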
GradientBoostingClassifier using Scikit-Learn
116 views · 2 months ago
GradientBoostingClassifier is a supervised machine learning algorithm. It builds an additive model in a forward stage-wise fashion and allows for the optimization of arbitrary differentiable loss functions. GitHub address: github.com/randomaccess2023/MG2023/tree/main/Video 79 Important timestamps: 00:47 - Import required libraries 02:24 - Load credit_approval dataset 04:38 - Perform preprocessi...
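An illustrative sketch on synthetic stand-in data (the video itself uses the credit_approval dataset):

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each stage fits a tree to the gradient of the loss from the previous stages.
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))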
ExtraTreesClassifier using Scikit-Learn
86 views · 2 months ago
ExtraTreesClassifier is a supervised machine learning algorithm. It is a type of ensemble learning technique which fits a number of randomized decision trees (i.e., extra trees) on various sub-samples of the dataset. It contributes to reducing the variance of the model and results in less overfitting. GitHub address: github.com/randomaccess2023/MG2023/tree/main/Video 78 Important timestamps: 01...
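A minimal sketch on synthetic stand-in data:

from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Extra trees add randomness to both the sub-samples and the split thresholds.
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))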
Quadratic Discriminant Analysis (QDA) using Scikit-Learn
79 views · 2 months ago
Quadratic Discriminant Analysis (QDA) is a supervised machine learning algorithm. It is very similar to Linear Discriminant Analysis (LDA) except for the assumption that the classes share the same covariance matrix. In other words, each class has its own covariance matrix. In this case, the boundary between classes is a quadratic surface instead of a hyperplane. GitHub address: github.com/randomacc...
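A minimal sketch on synthetic stand-in data:

from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# QDA fits one Gaussian (with its own covariance matrix) per class.
qda = QuadraticDiscriminantAnalysis().fit(X, y)
print(qda.score(X, y))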
CatBoost Classifier | Machine Learning | Python
221 views · 3 months ago
Categorical Boosting (CatBoost) is a gradient-boosting algorithm for machine learning. Gradient boosting is a process in which many decision trees are constructed iteratively. In CatBoost, each successive tree is built with reduced loss compared to the previous trees. I used the mushrooms.csv dataset for this example. The dataset is available in the repository. It contains 2 types of mushrooms in t...
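A hedged sketch (requires pip install catboost; synthetic stand-in data rather than mushrooms.csv):

from catboost import CatBoostClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, random_state=0)

# verbose=0 silences the per-iteration training log.
clf = CatBoostClassifier(iterations=200, depth=6, verbose=0, random_seed=0)
clf.fit(X, y)
print(clf.predict(X[:5]))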
Bagging Classifier using Scikit-Learn
42 views · 3 months ago
Bagging is a supervised machine learning algorithm. It is an ensemble learning technique in which multiple base estimators are trained independently and in parallel on different subsets of the training data. The final prediction is made by aggregating the predictions of all the base estimators. I used the glass_identification.csv dataset in this example. The dataset is available in the repository. ...
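A minimal sketch with decision trees as the base estimators (synthetic stand-in data rather than glass_identification.csv):

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)

# 50 trees, each trained on a bootstrap sample; predictions are aggregated by vote.
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))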
Artificial neural network for regression task using PyTorch
120 views · 3 months ago
A regression analysis in machine learning is used to investigate the relationship between one or more independent variables (treated as features) and a dependent variable (regarded as outcome). It is a method for predictive modelling and is used to predict a continuous outcome. I used sklearn's California housing dataset for this example. This dataset has 8 features and I built a very simple ar...
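A minimal PyTorch sketch of such a regression model; the random 8-feature data below is only a stand-in for the California housing set:

import torch
import torch.nn as nn

# Stand-in data shaped like the 8-feature California housing set.
X = torch.randn(256, 8)
y = X.sum(dim=1, keepdim=True)  # made-up continuous target

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(loss.item())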
Hartigan index using Python
13 views · 3 months ago
Hartigan index (HI) is computed by taking the logarithm of the ratio between the sum-of-squares between clusters (SSB) and the sum-of-squares within clusters (SSW). It is a cluster evaluation technique. GitHub address: github.com/randomaccess2023/MG2023/tree/main/Video 73 Important timestamps: 00:57 - Import required libraries 04:03 - Create data 05:07 - Perform preprocessing 05:19 - Perf...
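A small sketch following the description above, i.e. HI = log(SSB / SSW), on made-up two-cluster data:

import numpy as np

def hartigan_index(X, labels):
    overall_mean = X.mean(axis=0)
    ssw = ssb = 0.0
    for k in np.unique(labels):
        cluster = X[labels == k]
        centroid = cluster.mean(axis=0)
        ssw += ((cluster - centroid) ** 2).sum()                      # within-cluster SS
        ssb += len(cluster) * ((centroid - overall_mean) ** 2).sum()  # between-cluster SS
    return float(np.log(ssb / ssw))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels = np.repeat([0, 1], 50)
print(hartigan_index(X, labels))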
Linear Discriminant Analysis using Scikit-Learn
78 views · 3 months ago
Linear Discriminant Analysis (LDA) is a supervised machine learning algorithm. This approach is used in machine learning to solve classification problems with two or more classes. LDA fits a Gaussian density to each class, assuming all classes share the same covariance matrix. I used the raisin.xlsx dataset for this example. The dataset is available in the repository. It contains 2 types of raisins...
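A minimal sketch of the estimator (synthetic stand-in data rather than raisin.xlsx):

from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# 7 features, 2 classes, roughly the shape of the raisin data.
X, y = make_classification(n_samples=900, n_features=7, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X, y)  # shared covariance across classes
print(lda.score(X, y))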
XGBoost Classifier | Machine Learning | Python API
57 views · 4 months ago
eXtreme Gradient Boosting (XGBoost) is a gradient-boosting algorithm for machine learning. XGBoost builds a predictive model by combining the predictions of multiple individual models, often decision trees, in an iterative manner. I used the banknote_authentication.csv dataset for this example. The dataset is available in the repository. It contains 2 types of entities in the target column: 0 & 1. ...
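A hedged sketch (requires pip install xgboost; synthetic stand-in data rather than banknote_authentication.csv):

from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, random_state=0)  # binary 0/1 target

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X, y)
print(clf.predict(X[:5]))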
LightGBM Classifier | Machine Learning | Python API
105 views · 4 months ago
Light Gradient-Boosting Machine (LightGBM) is a gradient-boosting algorithm for machine learning. It uses a histogram-based method in which data is bucketed into bins using a histogram of the distribution. I used the magic_gamma_telescope.csv dataset for this example. The dataset is available in the repository. It contains 2 types of entities in the target column: g = gamma (signal) & h = hadron (backgro...
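A hedged sketch (requires pip install lightgbm; synthetic stand-in data rather than magic_gamma_telescope.csv):

from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, random_state=0)

# num_leaves bounds tree complexity; features are bucketed into histogram bins.
clf = LGBMClassifier(n_estimators=200, num_leaves=31, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))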
AdaBoost Classifier using Scikit-Learn
525 views · 4 months ago
Logistic Regression using Scikit-Learn
342 views · 4 months ago
Complement Naive Bayes using Scikit-Learn
43 views · 4 months ago
Gaussian Naive Bayes using Scikit-Learn
51 views · 4 months ago
Bernoulli Naive Bayes using Scikit-Learn
55 views · 4 months ago
Feature to image representation using Matplotlib
10 views · 4 months ago
Multinomial Naive Bayes using Scikit-Learn
67 views · 4 months ago
Categorical Naive Bayes using Scikit-Learn
105 views · 4 months ago
Random Forest using Scikit-Learn
185 views · 4 months ago
Decision Tree using Scikit-Learn
70 views · 5 months ago
Support Vector Machine (SVM) using Scikit-Learn
135 views · 5 months ago
Train a CNN with data augmentation - Example using Flowers102 dataset
201 views · 5 months ago
K-Nearest Neighbors using Scikit-Learn
267 views · 5 months ago
Inset plotting using Matplotlib
68 views · 6 months ago
Calculate the output shape of convolution, deconvolution and pooling layers in CNN
208 views · 6 months ago
Shame on you, your voice drives me crazy. speak louder!!!!!!!
Thank you so much for this amazing video! I need some advice: I have a SafePal wallet with USDT, and I have the seed phrase. (alarm fetch churn bridge exercise tape speak race clerk couch crater letter). Could you explain how to move them to Binance?
Thank you! After facing a lot of incompatibility errors, who would've thought it would all be solved by a YouTube video?
What is the difference between DDPM and cDDPM?
In conditional DDPM, the output can be controlled. We can provide a particular image label, and the model will generate an image of that class. But if we don't use conditioning, we can't control the output; the model randomly generates something from across the entire dataset.
Well. Your mic is definitely mediocre.
@@freenrg888 Agreed
00:00 I made a mistake in the title (CatBoost Classifier using Scikit-Learn) of this video in the Jupyter Notebook; Scikit-Learn doesn't have this algorithm. Sorry for the mistake.
Work on your audio
@@boleto7467 If there is a proper Audacity video available on YouTube for dealing with the keyboard issue, please provide the link here. I didn't find one that takes care of the keyboard sound.
Hi there! Great job, man. Your tutorials are amazing. Please keep going and upload more tutorials. If you don't mind, I have a suggestion for you. I believe it would be more beneficial if you explained each section or line while coding. For example, clarify the purpose of each section or each line. One more thing, your voice is not clear. The typing noise is louder than your voice. Thank you so much.
@@majidmohammadhosseinzadeh9542 Really poor sound and video editing skills, unfortunately. But, thanks for watching.
You can't imagine how useful your tutorials are. If possible, could you please prepare a tutorial on ANN regression? What I really appreciate about your work is that you provide the complete code and procedure from start to finish, which is quite unique. I've been searching for a tutorial like this for a while but haven't been able to find one. It would be fantastic if you could create an ANN regression model tutorial.
@@majidmohammadhosseinzadeh9542 Okay, hopefully.
What if we have tabular 1-D data? Can you please guide how we can implement conditional DDPM on 1-D data? Thanks.
@@syedmuzammilahmed6872 I am only familiar with implementations using 2D data. Check out this repository: github.com/yandex-research/tab-ddpm
@@MediocreGuy2023 Actually, I have implemented DDPM on 1-D data but now want to apply conditioning to it, so I am searching for conditioning in a 1-D DDPM.
@@syedmuzammilahmed6872 Doesn't your 1D data have labels?
@@MediocreGuy2023 It has labels (yes/no)
@@syedmuzammilahmed6872 Doesn't nn.Embedding work the same way it works for MNIST? You have to change the embedding dimensions, I guess, based on your dataset requirements.
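A hedged sketch of that idea, with hypothetical sizes (2 classes for the yes/no labels, a 16-dim embedding, 128-dim feature vectors):

import torch
import torch.nn as nn

label_emb = nn.Embedding(num_embeddings=2, embedding_dim=16)  # yes/no -> 2 classes
labels = torch.tensor([0, 1, 1, 0])
cond = label_emb(labels)         # (4, 16) conditioning vectors
x = torch.randn(4, 128)          # stand-in batch of 1-D features
h = torch.cat([x, cond], dim=1)  # (4, 144) conditioned input to the model
print(h.shape)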
Yo bro, that video is awesome. Can you make more videos like this?
Do you know of any way to do this for coco annotated data?
@@patrickcraig4608 I have never worked with the COCO dataset, but I found out from the Internet that it is used for object detection and image segmentation tasks. Image segmentation is different from cropping small patches from a large image.
Pretty good! It would be even better if the narration were a bit louder.
@@SixuXiao-j9l Thanks for your comment. I do not have expertise on video and sound editing. Most of my videos have sound issues because I don't know how to effectively remove noise from them. To minimize noise, I had to decrease the sound volume too much.
@@MediocreGuy2023 Remove noise with DDPM!
good job
How can I use an input image size of 224 pixels? Help!
@@BELLAFaiza-p5z The current code resizes the input to 32x32 pixels. So, 224x224-pixel images will also get reduced to that size if you use this code. See the transforms.Compose() section.
@@MediocreGuy2023 The code works very well for me, but I want to use it to generate images of size 224x224 pixels. Is this possible?
@@BELLAFaiza-p5z What is the input image size of your dataset?
@@MediocreGuy2023 It is 224x224 pixels, and it is a medical dataset. I want to generate a new dataset using this code but at the same size. Can you help me, please?
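For reference, a hypothetical transforms.Compose() section of the kind discussed in this thread; changing the Resize target to (224, 224) is the relevant knob, at the cost of a much larger model:

from torchvision import transforms

# Sketch only: Resize((32, 32)) shrinks every input; use (224, 224) to keep the
# original size (normalization and other transforms omitted here).
transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
])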
How can I save the images to another folder after getting the patches?
Put the line plt.savefig(f'output_dir/{i+1}.jpg', dpi=300, bbox_inches='tight', pad_inches=0) within the for loop. Here, output_dir should be the folder where your images will be saved, and i indexes the images.
@@MediocreGuy2023 I need each patch one by one as it is generated. I am already able to save the whole collection of patches as one image, but I want them separately, not joined. I have inserted the line after ax1[R1, C1].axis('off').
I need to save each image separately, one by one, if you can show how.
@@barshneyatalukdar1492 Not like that. You need extra lines of code using a for loop.
@@barshneyatalukdar1492 github.com/randomaccess2023/MG2023/issues/1
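The linked issue spells it out; the gist is a loop of this shape, where patches stands in for whatever array of cropped patches was built earlier (hypothetical names throughout):

import os
import numpy as np
import matplotlib.pyplot as plt

patches = np.random.rand(4, 64, 64, 3)  # stand-in for the patches cropped earlier
os.makedirs('output_dir', exist_ok=True)
for i, patch in enumerate(patches):
    plt.imshow(patch)
    plt.axis('off')
    # One file per patch instead of one joined figure.
    plt.savefig(f'output_dir/{i+1}.jpg', dpi=300, bbox_inches='tight', pad_inches=0)
    plt.close()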
Thank you for sharing, I really appreciate it. I would try to train the model using a 2D latent space. Do you think this architecture will also work for the CelebA dataset?
I don't think this structure is good enough for CelebA, as those images have a much bigger resolution. Even if you resize them, I think a few additional layers are required.
36:13 In the scaling term, I accidentally wrote "beta_t" instead of "beta_t_square". I corrected it in the slide. Check out the GitHub address.
Can a PyTorch model identify handwritten numbers from 0-99? The dataset would be spliced together into 0-99 using MNIST.
In that case, there are 100 classes.
Can you help me design a CNN model? I already have a dataset.
CNN video is available on the channel. Take a look. I will be very busy in the next 2 weeks.
@@MediocreGuy2023
Do you know how to concatenate datasets?
Do you mean concatenating images?
@@MediocreGuy2023 yes yes
Can I add you as a friend? I come from China and am a beginner. I would like to ask you some questions. @@MediocreGuy2023
@@ๅ้ฉde่่ In PyTorch, the "torch.cat" function is available, and in the case of NumPy, it is "numpy.concatenate".
@@MediocreGuy2023 No no no, I have more questions.
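For reference, the two concatenation calls mentioned above:

import numpy as np
import torch

a, b = torch.randn(4, 3), torch.randn(2, 3)
print(torch.cat([a, b], dim=0).shape)        # torch.Size([6, 3])

x, y = np.zeros((4, 3)), np.ones((2, 3))
print(np.concatenate([x, y], axis=0).shape)  # (6, 3)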
You mentioned around 16:50 that you weren't sure why train loss was much higher than test loss. The reason is the L1 term. The returned loss value for train includes the L1 term; the returned loss value for test does not. If you want comparable values between train and test, you need to either include the L1 term in the test batch function, or only return the classification loss from the train batch function. Otherwise, a good explanation!
Thanks for the explanation.
Thank you very much for all the time you put into these lessons. I have found them more helpful than lectures by MIT professors.
LOL. Are you serious?
This helped me, and so did your GitHub code. Thanks!
Thank you! This was very helpful!
Hi again! I want to kindly ask if you could consider doing a video about: 1) the selection of the number of clusters by computing eigengap scores and plotting them as an eigengap plot; 2) the use of the normalized mutual information (NMI) score and the Rand index to quantify the overlap between discovered and ground-truth clusters. Thanks in advance.
I will try.
Does this link help you? github.com/ciortanmadalina/high_noise_clustering/blob/master/spectral_clustering.ipynb
Thank you! This is the best patchify example I've found.
I'm grateful for your video, and I'm presently exploring spectral clustering for data analysis for my Ph.D. dissertation in agriculture. Given my limited experience in this area, I'm curious whether you could kindly consider sharing the scripts used in your video, as well as more videos about how to identify cluster sizes, how to validate them, and how to characterize the identified clusters. Thanks in advance.
I have a nightmare schedule till November 2, but I will try to provide the script for this video either today or tomorrow, hopefully.
github.com/randomaccess2023/MG2023/tree/main/Video%2037
@@MediocreGuy2023 I wish you all the best for your studies and thank you so much for sharing this.
This is a good channel, and clearly explained. Can I get the code in a GitHub repository?
I appreciate your comment. In my lab, we have distributed servers (you can notice the name JupyterHub). For this reason, I haven't used GitHub to store the code. But since you asked for it, I can upload just this code tonight, hopefully within the next 6-7 hours. I will mention you when it is available.
github.com/randomaccess2023/MG2023/tree/main/Video%2023 You can find the code here.
@@MediocreGuy2023 Thank you very much. Honestly, this is truly helpful. I will watch all the videos you have uploaded. Thanks again.
At 06:19, I performed preprocessing but forgot to use the scaled features later. I have corrected this mistake in the code that I shared on GitHub. Check that out.
Within 10:11 - 10:43, I scaled the features but eventually forgot to use them later. It's better not to scale the features for this example. It seems unscaled features work better in calculating AIC.
Thank you
Thanks a lot. Looking forward to more videos.
BTW good content related to Data Science
How can I get this Jupyter file?
github.com/randomaccess2023/MG2023/tree/main/Video%205
26:05 ---> I made a mistake here; Train loss: {train_per_epoch_loss} should be the correct line but I wrongly wrote Train loss: {test_per_epoch_loss}. Remember to correct this portion.