After performing the quantization with the ''vai_q_tensorflow'' tool and deploying the model on the target device, the model achieves an overall '''''accuracy of 93.40%''''' and an overall weighted-average '''''F1-score of 93.36%''''' on the test subset of the dataset. The model still performs very well in classifying samples of the ''capacitor'' class, keeping an F1-score above 96.00% (96.62%). For the remaining classes, on the other hand, there is a substantial reduction in this metric. The classes with the worst results are the ''diode'' class (91.65% F1-score), because its recall is very low (87.30%), the ''IC'' class (91.09% F1-score), which has low values for both precision and recall (91.18% precision, 91.00% recall), and the ''transistor'' class (90.62% F1-score), which likewise has low precision and recall (90.35% precision, 90.62% recall). In general, the performance of the model is still good and similar to that obtained with the two previous models, especially the ResNet101 model.
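As a rough illustration of how the per-class precision, recall and F1-score figures quoted above can be computed, the following sketch uses scikit-learn's ''classification_report''. This is only an assumption about the tooling (the original evaluation scripts are not shown here), and the <code>y_true</code>/<code>y_pred</code> arrays are synthetic placeholders for the ground-truth labels and the labels predicted by the model on the target device.
<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import accuracy_score, classification_report

# The six component classes used throughout this evaluation.
class_names = ["capacitor", "diode", "IC", "inductor", "resistor", "transistor"]

# Synthetic placeholder data: in the real evaluation y_true holds the
# ground-truth labels of the test subset and y_pred the labels predicted
# by the (quantized) model running on the target device.
rng = np.random.default_rng(0)
y_true = rng.integers(0, len(class_names), size=1000)
y_pred = y_true.copy()
flip = rng.random(y_true.shape) < 0.07  # simulate ~7% misclassifications
y_pred[flip] = rng.integers(0, len(class_names), size=int(flip.sum()))

# Overall accuracy plus per-class precision/recall/F1-score; the
# "weighted avg" row corresponds to the weighted-average F1-score quoted above.
print("accuracy: %.2f%%" % (100.0 * accuracy_score(y_true, y_pred)))
print(classification_report(y_true, y_pred, target_names=class_names, digits=4))
</syntaxhighlight>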
Before performing the quantization with the ''vai_q_tensorflow'' tool, the model has an overall '''''accuracy of 92.68%''''' and an overall weighted-average '''''F1-score of 92.69%''''' on the test subset of the dataset, showing a good generalization capability on unseen samples, although lower than that of the three previous ResNet models. The classes with the highest F1-score, above 96.00%, are ''resistor'' (97.56%), ''capacitor'' (96.81%) and ''inductor'' (96.38%). However, the model performs poorly on the three remaining classes compared with the previous models, showing an F1-score below 90.00% for the ''diode'' class (87.94%) and the ''transistor'' class (87.27%): the former has low precision and recall (88.38% precision, 87.50% recall), while the latter has low precision (83.67%).
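For reference, ''vai_q_tensorflow'' derives its quantization parameters from a small calibration set fed through a user-supplied input function. The sketch below shows what such a function could look like; it is only an assumption based on the Vitis AI user guide, and the input node name (<code>input</code>), image size, batch size and calibration folder are illustrative placeholders, not values taken from this project.
<syntaxhighlight lang="python">
import os
import numpy as np
from PIL import Image

# Assumed values, not taken from the original text: a folder with a few
# hundred representative training images, the input size and the batch size.
CALIB_DIR = "./calib_images"
IMAGE_SIZE = (224, 224)
BATCH_SIZE = 32
IMAGE_LIST = sorted(os.listdir(CALIB_DIR))

def calib_input(iter_num):
    """Return one calibration batch for iteration 'iter_num'.

    vai_q_tensorflow calls this function once per calibration iteration
    and expects a dict mapping the graph input node name(s) to a batch
    of preprocessed images.
    """
    batch = []
    for i in range(BATCH_SIZE):
        name = IMAGE_LIST[(iter_num * BATCH_SIZE + i) % len(IMAGE_LIST)]
        img = Image.open(os.path.join(CALIB_DIR, name)).convert("RGB")
        img = img.resize(IMAGE_SIZE)
        batch.append(np.asarray(img, dtype=np.float32) / 255.0)
    # "input" is a placeholder: it must match the actual input node name.
    return {"input": np.stack(batch)}
</syntaxhighlight>
Assuming the file is saved as ''calib.py'', the function could then be referenced on the quantizer command line as, e.g., <code>--input_fn calib.calib_input</code>.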