The model, before performing the quantization with the vai_q_tensorflow tool, has an overall '''''accuracy of 92.68%''''' and an overall weighted-average '''''F1-score of 92.69%''''' over the test subset of the dataset, showing a good generalization capability on unseen samples, although lower than that of the three previous models. The classes with the highest F1-score, above 96.00%, are ''resistor'' (97.56% F1-score), ''capacitor'' (96.81% F1-score) and ''inductor'' (96.38% F1-score). However, the model performance on the three remaining classes is poor compared with the previous models, with an F1-score below 90.00% for the class ''diode'' (87.94% F1-score) and the class ''transistor'' (87.27% F1-score), due to low precision and recall in the former case (88.38% precision, 87.50% recall) and low precision in the latter (83.67% precision).
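As a reference for how these figures can be reproduced, the sketch below computes the overall accuracy, the weighted-average F1-score and the per-class precision, recall and F1-score on the test subset with scikit-learn. It is only an illustrative example, not the evaluation script used for this project: the class list and the placeholder label arrays are assumptions.
<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import accuracy_score, classification_report, f1_score

class_names = ["resistor", "capacitor", "inductor", "diode", "transistor", "IC"]

# Placeholder labels for illustration only: in the real evaluation, y_true comes
# from the test subset and y_pred from np.argmax(model.predict(x_test), axis=1).
y_true = np.array([0, 1, 2, 3, 4, 5, 0, 1])
y_pred = np.array([0, 1, 2, 3, 4, 4, 0, 1])

# Overall accuracy and weighted-average F1-score over all classes.
accuracy = accuracy_score(y_true, y_pred)
weighted_f1 = f1_score(y_true, y_pred, average="weighted")

print(f"Accuracy: {accuracy:.2%} - weighted-average F1-score: {weighted_f1:.2%}")

# Per-class precision, recall and F1-score, as reported in the text above.
print(classification_report(y_true, y_pred, target_names=class_names, digits=4))
</syntaxhighlight>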
After performing the quantization with the vai_q_tensorflow tool and deploying the model on the target device, it has an overall '''''accuracy of 88.87%''''' and an overall weighted-average '''''F1-score of 88.91%''''' on the test subset of the dataset. The model still performs well on ''resistor'' (97.65% F1-score) but, for the remaining classes, there is a substantial drop in the value of the metric. The classes with the worst results are ''diode'' (85.15% F1-score), ''IC'' (83.27% F1-score) and ''transistor'' (81.97% F1-score). In general, the performance of the model is still good, but decidedly lower than that of the models analyzed previously.
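The post-training quantization performed by vai_q_tensorflow relies on a small calibration set that is fed to the tool through a user-provided input function. The sketch below shows what such a calibration module may look like; the graph node names, image resolution, batch size, file paths and the exact command-line flags are assumptions taken from typical Vitis AI usage, not the actual values used for this model, so they should be checked against the user guide of the Vitis AI release in use.
<syntaxhighlight lang="python">
# Minimal sketch of a calibration input_fn module for vai_q_tensorflow.
# A typical invocation (flags to be verified for the specific release) is:
#   vai_q_tensorflow quantize \
#       --input_frozen_graph frozen_graph.pb \
#       --input_nodes input_1 --input_shapes ?,224,224,3 \
#       --output_nodes dense_1/Softmax \
#       --input_fn calib_input_fn.calib_input \
#       --calib_iter 100 --output_dir quantize_results
import glob

import numpy as np
from PIL import Image

CALIB_DIR = "calibration_images"   # assumed folder with a subset of training images
BATCH_SIZE = 32                    # assumed calibration batch size
IMAGE_SIZE = (224, 224)            # assumed network input resolution
INPUT_NODE = "input_1"             # assumed name of the graph input node

_images = sorted(glob.glob(f"{CALIB_DIR}/*.jpg"))


def calib_input(iter_num):
    """Return one calibration batch; called once per --calib_iter iteration."""
    batch_paths = _images[iter_num * BATCH_SIZE:(iter_num + 1) * BATCH_SIZE]
    batch = [np.asarray(Image.open(p).resize(IMAGE_SIZE), dtype=np.float32) / 255.0
             for p in batch_paths]
    # The quantizer expects a dict mapping each input node name to a numpy array.
    return {INPUT_NODE: np.stack(batch)}
</syntaxhighlight>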
The model, before performing the quantization with the vai_q_tensorflow tool, has an overall '''''accuracy of 97.66%''''' and an overall weighted-average '''''F1-score of 97.36%''''' over the test subset of the dataset, showing a very high generalization capability on unseen samples. All the classes have an F1-score above 96.00%, which is particularly high for the class ''resistor'' (98.50% F1-score).
After performing the quantization with the vai_q_tensorflow tool and deploying the model on the target device, it has an overall '''''accuracy of 93.34%''''' and an overall weighted-average '''''F1-score of 93.34%''''' on the test subset of the dataset. The model still performs very well on three classes, keeping an F1-score above 96.00%: ''resistor'' (97.12% F1-score), ''inductor'' (97.00% F1-score) and ''capacitor'' (96.59% F1-score). However, for the remaining classes, the value of the metric drops substantially. The classes with the worst results are ''IC'' (89.41% F1-score), due to low precision (84.12% precision), and ''transistor'' (87.75% F1-score), due to very low recall (82.80% recall). In general, the performance of the model is still good and similar to the one obtained with the ResNet models.