Oh nifty! So now the real question: how hard would it be to reproduce their closed-source ML tooling with open source?
Interesting to learn how these metal oxide sensors actually manage to work for all those different gases.
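To the question above about an open-source reproduction: the raw gas-resistance channel is readable with ordinary open drivers, so the starting point would just be logging labeled readings yourself. Here's a minimal sketch, assuming the Adafruit adafruit_bme680 CircuitPython driver (the BME688 is reported to be register-compatible with the 680 for the base measurements); the "coffee" label and samples.csv filename are made up for illustration. What this skips, and what Bosch's closed-source tooling presumably adds, is stepping the heater through a temperature profile and featurizing the resulting resistance curve.

```python
# Hypothetical data-logging sketch: record labeled raw readings
# from the sensor so you can train your own classifier on them.
import time

import board
import adafruit_bme680

i2c = board.I2C()
sensor = adafruit_bme680.Adafruit_BME680_I2C(i2c)

# Run this once per substance, changing the label each time.
label = "coffee"  # placeholder for whatever is in the jar

with open("samples.csv", "a") as f:
    for _ in range(100):
        # gas is the raw gas resistance in ohms; temperature and
        # humidity matter too because they shift the gas reading.
        f.write(f"{label},{sensor.gas},{sensor.temperature},{sensor.humidity}\n")
        time.sleep(1.0)
```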
Very cool that you can train your BME688 to recognize a particular VOC. But assuming you are either unable or unwilling to do that, what value is the BME688 over the BME680 right out of the box?
It would be nice if these training profiles were already available, and you could just select the ones of interest.
I was about to get the 680, then this video was recommended to me by the holy YouTube algorithm.
I wonder if there's any collection of different data profiles available somewhere? Similar to TensorFlow?
I could see this being used as a sort of carbon monoxide sensor in a custom thermostat.
I'd be interested to know what that 93% represents. I.e., what gave the false positives? Was it immersed in jars of ... empty? water? chocolate?
93% is the accuracy of the model, which is actually very good. When you train a model, you give it training data to learn from and then validation data it has never seen before to test against. So when it evaluated the validation data, it was able to tell what it was 93% of the time. The other 7% or so it either classified as something else it was trained on, or maybe it classified as unknown.
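A minimal sketch of what that train/validate/accuracy loop looks like, using scikit-learn. The dataset here is synthetic; in the real setup the features would come from the BME688's gas-resistance readings and the classes from the labeled jars.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in dataset: 300 samples, 10 features, 3 classes
# (think air / coffee / whiskey).
X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=5, n_classes=3,
                           random_state=0)

# Hold out 25% as validation data the model never sees in training.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The "93%" in the video corresponds to this number: the fraction
# of held-out validation samples the model labels correctly.
print(accuracy_score(y_val, clf.predict(X_val)))
```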