After performing the quantization with the ''vai_q_tensorflow'' tool and deploying the model on the target device, the model achieves an overall '''''accuracy of 93.27%''''' and an overall weighted-average '''''F1-score of 93.29%''''' on the test subset of the dataset. The model still performs well in correctly classifying samples of the ''resistor'' class (98.08% F1-score), the ''inductor'' class (97.10% F1-score) and the ''capacitor'' class (96.88% F1-score). The worst results are obtained on the ''transistor'' class (89.78% F1-score), because both its precision and recall are below 90.00% (89.96% precision, 89.60% recall), and on the ''diode'' class (88.59% F1-score), because its precision is very low (83.77%).
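The overall accuracy, the per-class precision/recall/F1-score and the weighted-average F1-score quoted in these paragraphs can be computed from the predictions collected on the test subset. The following is a minimal sketch assuming ''scikit-learn'' is available; the class list and the <code>y_true</code>/<code>y_pred</code> arrays are hypothetical placeholders, not taken from the project code.

<syntaxhighlight lang="python">
# Minimal evaluation sketch (assumption: predictions from the deployed model
# have already been collected as integer class indices).
from sklearn.metrics import accuracy_score, classification_report

# Class names of the electronic-component dataset discussed in the text.
CLASS_NAMES = ["capacitor", "diode", "IC", "inductor", "resistor", "transistor"]

def evaluate(y_true, y_pred):
    """Print overall accuracy, per-class metrics and the weighted-average F1-score."""
    print(f"Accuracy: {accuracy_score(y_true, y_pred):.2%}")
    # classification_report gives precision/recall/F1 per class plus the
    # macro and weighted averages referenced in these paragraphs.
    print(classification_report(y_true, y_pred,
                                target_names=CLASS_NAMES,
                                digits=4))

# Example usage (replace with the real test-set labels and the predictions
# returned by the model running on the target device):
# evaluate(y_true_test, y_pred_dpu)
</syntaxhighlight>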
After performing the quantization with the ''vai_q_tensorflow'' tool and deploying the model on the target device, the model achieves an overall '''''accuracy of 93.95%''''' and an overall weighted-average '''''F1-score of 93.91%''''' on the test subset of the dataset. The model still performs very well in correctly classifying samples of the ''capacitor'' class, keeping the F1-score above 96.00% (97.03% F1-score). For the remaining classes, on the other hand, there is a substantial reduction in this metric. The classes with the worst results are the ''diode'' class (92.09% F1-score) and the ''IC'' class (92.06% F1-score), because both show a low recall (90.30% recall for the former, 88.20% recall for the latter). In general, the performance of the model is still good and similar to the one obtained with the ResNet50 model.
Before performing the quantization with the ''vai_q_tensorflow'' tool, the model achieves an overall '''''accuracy of 96.46%''''' and an overall weighted-average '''''F1-score of 96.48%''''' on the test subset of the dataset, showing a good generalization capability on unseen samples. The classes with the highest F1-score, above 96.00%, are the ''resistor'' class (98.58% F1-score), the ''inductor'' class (98.03% F1-score) and the ''capacitor'' class (96.99% F1-score), a result quite similar to the ResNet50 model. The worst performance is displayed by the ''transistor'' class, with an F1-score of only about 94.00% (94.18% F1-score), mainly because the model exhibits a low precision on this class (92.89%).
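For reference, the post-training quantization step that converts this floating-point model into the fixed-point model evaluated in the next paragraph is driven by the ''vai_q_tensorflow'' command-line tool. The sketch below shows a typical invocation wrapped in Python; the graph file, node names, input shape and calibration function are hypothetical placeholders, and the flag names should be checked against the Vitis AI user guide for the release in use.

<syntaxhighlight lang="python">
# Hedged sketch of a typical vai_q_tensorflow post-training quantization run.
# All file names, node names and shapes below are placeholders (assumptions),
# not the actual values used for this model; verify the flags against the
# Vitis AI documentation for your version.
import subprocess

cmd = [
    "vai_q_tensorflow", "quantize",
    "--input_frozen_graph", "float_model.pb",          # frozen float graph (placeholder)
    "--input_nodes",        "input_1",                  # network input node (placeholder)
    "--input_shapes",       "?,224,224,3",              # batch,height,width,channels (placeholder)
    "--output_nodes",       "dense_1/Softmax",          # network output node (placeholder)
    "--input_fn",           "calib_input.calib_input",  # calibration data loader, module.function (placeholder)
    "--calib_iter",         "100",                      # number of calibration batches
    "--output_dir",         "quantize_results",
]
subprocess.run(cmd, check=True)  # produces the quantized graph later compiled for the DPU
</syntaxhighlight>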
After performing the quantization with the ''vai_q_tensorflow'' tool and deploying the model on the target device, the model achieves an overall '''''accuracy of 93.40%''''' and an overall weighted-average '''''F1-score of 93.36%''''' on the test subset of the dataset. The model still performs very well in correctly classifying samples of the ''capacitor'' class, keeping an F1-score above 96.00% (96.62% F1-score). For the remaining classes, on the other hand, there is a substantial drop in this metric. The classes with the worst results are the ''diode'' class (91.65% F1-score), because its recall is very low (87.30%), the ''IC'' class (91.09% F1-score), with low values of both precision and recall (91.18% precision, 91.00% recall), and the ''transistor'' class (90.62% F1-score), likewise with low precision and recall (90.35% precision, 90.62% recall). In general, the performance of the model is still good and similar to the one obtained with the two previous models, especially the ResNet101 model.