Your teaching is so good! Other tutorials from other channels are not as clear as yours! Thank you 🙏🏻🙏🏻🙏🏻
Happy to hear that!
Mam, I watched so many deep learning courses, but your teaching is the best of all of them.
@@EngineeringSprits Glad my videos helped you 🙂
so good....very nice explanation
Thanks!
Great video. Loved the explanation.
Glad to hear it!
Great work mam.
Thanks a lot
In the unit / binary step activation slide there is an error: the threshold conditions are shown as 0 > x and x < 0; they should be 0 > x and x >= 0.
Thank you for sharing.
Thank you for providing such information, but I have one note: in slide no. 3 the value of x in both equations is shown as less than the threshold; one of the conditions should be x >= 0 instead.
I think that's an error in the slide.
Yes, a mistake from her side.
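For anyone reading along, here is a minimal sketch of the corrected unit / binary step activation discussed above (my own illustration, not copied from the slides):

def binary_step(x):
    # Output 1 when the input is at or above the threshold (0), otherwise 0.
    return 1 if x >= 0 else 0

print(binary_step(-2))  # 0, because -2 < 0
print(binary_step(3))   # 1, because 3 >= 0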
informative! thanks
Glad it was helpful!
I understood softmax, thank you mam.
Glad to hear that!
In slide 4, the function table is incorrect for negative values. Overall, a very good teaching video.
Thank you for letting me know about that typo. And glad you liked the video.
Is the sigmoid (output of 0-1), as an analogue output, not equivalent to normalising data or to a percentage of the input?
A sigmoid function is a mathematical function that maps any input value to an output value between 0 and 1. It is commonly used in machine learning and neural networks to represent probabilities or the likelihood of an event.
While a sigmoid function can be used to normalize data or to represent a percentage of the input, it is not equivalent to these concepts. Normalization is a preprocessing step that is used to transform the values of a dataset to a common range, such as 0 to 1 or -1 to 1. This is typically done to make the data more suitable for use in machine learning algorithms.
A sigmoid function, on the other hand, is a mathematical function that is used to model the relationship between the inputs and outputs of a system. It is commonly used in neural networks as an activation function, which determines the output of a neuron based on the inputs it receives.
In summary, a sigmoid function is a mathematical function that can be used to model probabilities or likelihoods, but it is not the same as normalization or representing a percentage of the input.
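A minimal sketch of the difference (my own illustration, not from the video): sigmoid squashes each value independently through 1 / (1 + e^-x), while min-max normalization rescales values relative to the rest of the dataset.

import math

def sigmoid(x):
    # Squashes any real number into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def min_max_normalize(values):
    # Rescales a whole dataset to [0, 1] relative to its own min and max.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

data = [-10, -1, 0, 1, 10]
print([round(sigmoid(v), 5) for v in data])  # [5e-05, 0.26894, 0.5, 0.73106, 0.99995]
print(min_max_normalize(data))               # [0.0, 0.45, 0.5, 0.55, 1.0]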
Great, great, thanks mam.
Most welcome 😊
Mam, you said that the sigmoid function has values between 0 and 1, but for sigmoid(-10) it had a value greater than 1. Why?
On that particular slide, the output is not fully written. Check the next slide where the sigmoid code is implemented; the output for sigmoid(-10) is 4.539...e-05, which means it lies between 0 and 1.
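A quick check of that number (my own verification, not from the slides): sigmoid(-10) = 1 / (1 + e^10), which is a tiny positive number.

import math
print(1 / (1 + math.exp(10)))  # ~4.5398e-05, i.e. between 0 and 1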
@@radhagarg2518 ok mam, thank u
👍👍
Mam, can you provide the PPT or notes of these lectures? It will be more helpful for us.
Hi, I made these videos a long time ago. I don't have the related PPTs now.
@@CodeWithAarohi Mam pls make a new deep learning series with notes.
Hi mam, how do I create my own datasets?
For image classification problems, just create a folder named "dataset" and then, inside that folder, create separate folders for each class, for example one class "pen" and another class "pencil". Now download images of pens and pencils from the internet and paste them into their corresponding folders. A minimal loading sketch is shown below.
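As an illustration only (assuming TensorFlow/Keras, which may not be exactly what the videos use), once the folders exist you can load them like this; the folder names "pen" and "pencil" become the class labels automatically:

import tensorflow as tf

# Expects: dataset/pen/*.jpg and dataset/pencil/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset",
    image_size=(224, 224),  # resize every image to a common size
    batch_size=32,
)
print(train_ds.class_names)  # ['pen', 'pencil'], inferred from the folder names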
Can I have your slides?
please
Why are leaky ReLU and ELU not part of this video?
Sorry for that. I have covered leaky ReLU and mish activation functions, along with the algorithms, in other videos.
Ma'am, can you please share the slides?
❤️💚😘