After performing the quantization with the ''vai_q_tensorflow'' tool and deploying the model on the target device, the model achieves an overall '''''accuracy of 93.40%''''' and an overall weighted-average '''''F1-score of 93.36%''''' on the test subset of the dataset. The model still performs very well at correctly classifying samples belonging to the ''capacitor'' class, keeping its F1-score above 96.00% (96.62%). For the remaining classes, on the other hand, there is a substantial reduction in this metric. The classes with the worst results are the ''diode'' class (91.65% F1-score), because its recall is very low (87.30%), the ''IC'' class (91.09% F1-score), with low values for both precision and recall (91.18% precision, 91.00% recall), and the ''transistor'' class (90.62% F1-score), likewise with low precision and recall (90.35% precision, 90.62% recall). Overall, the performance of the model is still good and similar to that obtained with the two previous models, especially the ResNet101 model.
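For reference, the per-class precision, recall and F1-score figures quoted throughout this page can be reproduced with scikit-learn from the test-set labels and the model predictions. The snippet below is only a sketch: <code>y_true</code>, <code>y_pred</code> and the label ordering are hypothetical placeholders to be replaced with the actual test subset and inference results.
<syntaxhighlight lang="python">
# Minimal sketch: overall accuracy, weighted-average F1-score and the
# per-class precision/recall/F1 table, as reported in the text above.
import numpy as np
from sklearn.metrics import accuracy_score, classification_report, f1_score

class_names = ["capacitor", "diode", "IC", "inductor", "resistor", "transistor"]

y_true = np.array([0, 1, 2, 3, 4, 5, 0, 1])   # placeholder ground-truth labels
y_pred = np.array([0, 1, 2, 3, 4, 5, 0, 2])   # placeholder model predictions

print("accuracy:    %.2f%%" % (100 * accuracy_score(y_true, y_pred)))
print("weighted F1: %.2f%%" % (100 * f1_score(y_true, y_pred, average="weighted")))
# One row per component class with precision, recall and F1-score
print(classification_report(y_true, y_pred, target_names=class_names, digits=4))
</syntaxhighlight>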
{| align="center" style="background: transparent; margin: auto; width: 60%;"
|}
The model, before performing the quantization with the ''vai_q_tensorflow'' tool, has an overall '''''accuracy of 92.68%''''' and an overall weighted-average '''''F1-score of 92.69%''''' over the test subset of the dataset, showing a good generalization capability on unseen samples, although lower than that of the three ResNet models. The classes with the highest F1-score, above 96.00%, are ''resistor'' (97.56%), ''capacitor'' (96.81%) and ''inductor'' (96.38%). However, the model's performance on the three remaining classes is poor compared with the previous models, with the F1-score dropping below 90.00% for the ''diode'' class (87.94%) and the ''transistor'' class (87.27%): the former has low precision and recall (88.38% precision, 87.50% recall), while the latter has a low precision (83.67%).
{| align="center" style="background: transparent; margin: auto; width: 60%;"
|}
The model, before performing the quantization with the ''vai_q_tensorflow'' tool, has an overall '''''accuracy of 97.66%''''' and an overall weighted-average '''''F1-score of 97.36%''''' over the test subset of the dataset, showing a very high generalization capability on unseen samples. All the classes have an F1-score above 96.00%, which is especially high for the ''resistor'' class (98.50%).
{| align="center" style="background: transparent; margin: auto; width: 60%;"
|}
After performing the quantization with the ''vai_q_tensorflow'' tool and deploying the model on the target device, the model achieves an overall '''''accuracy of 93.34%''''' and an overall weighted-average '''''F1-score of 93.34%''''' on the test subset of the dataset. The model still performs very well in three classes, correctly classifying samples belonging to the ''resistor'' class (97.12% F1-score), the ''inductor'' class (97.00% F1-score) and the ''capacitor'' class (96.59% F1-score), keeping an F1-score above 96.00%. For the remaining classes, however, the value of this metric is substantially reduced. The classes with the worst results are the ''IC'' class (89.41% F1-score), because of a low precision (84.12%), and the ''transistor'' class (87.75% F1-score), because of a very low recall (82.80%). Overall, the performance of the model is still good and similar to that obtained with the ResNet models.
{| align="center" style="background: transparent; margin: auto; width: 60%;"
|}
The model, before performing the quantization with the ''vai_q_tensorflow'' tool, has an overall '''''accuracy of 97.53%''''' and an overall weighted-average '''''F1-score of 97.53%''''' over the test subset of the dataset, showing a very high generalization capability on unseen samples. Five classes have an F1-score above 96.00%, which is especially high for the ''inductor'' class (98.66%) and the ''resistor'' class (98.55%). The worst result is the one obtained for the ''transistor'' class, whose F1-score falls below 96.00% but remains very close to it (95.86%), mainly due to a low precision (93.36%).
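For context, a post-training quantization run with ''vai_q_tensorflow'' follows the usual Vitis AI flow: a calibration input function feeds preprocessed images to the frozen float graph while the tool estimates the quantization parameters, after which the quantized graph can be compiled for the target device. The sketch below is illustrative only: the file names, node names and input shape are hypothetical, and the flag names should be checked against the <code>vai_q_tensorflow quantize</code> usage of the installed Vitis AI release.
<syntaxhighlight lang="python">
# quantize_sketch.py - hypothetical names throughout; real calibration images
# must replace the random data, otherwise the quantization ranges are meaningless.
import subprocess
import numpy as np

CALIB_BATCH = 32

def calib_input(iter_num):
    """Return one batch of preprocessed calibration images, keyed by the
    graph's input node name (assumed here to be 'input_1')."""
    images = np.random.rand(CALIB_BATCH, 224, 224, 3).astype(np.float32)  # stand-in data
    return {"input_1": images}

if __name__ == "__main__":
    # Invoke the quantizer on the frozen float graph; flags per the Vitis AI docs,
    # to be verified against the installed version.
    subprocess.run([
        "vai_q_tensorflow", "quantize",
        "--input_frozen_graph", "float_model.pb",
        "--input_nodes",  "input_1",
        "--input_shapes", "?,224,224,3",
        "--output_nodes", "dense_1/Softmax",
        "--input_fn",     "quantize_sketch.calib_input",
        "--calib_iter",   "100",
        "--output_dir",   "quantized",
    ], check=True)
</syntaxhighlight>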
{| align="center" style="background: transparent; margin: auto; width: 60%;"
|}
After performing the quantization with the ''vai_q_tensorflow'' tool and deploying the model on the target device, the model achieves an overall '''''accuracy of 93.34%''''' and an overall weighted-average '''''F1-score of 93.34%''''' on the test subset of the dataset. The model still performs very well in two classes, correctly classifying samples belonging to the ''resistor'' class (98.07% F1-score) and the ''capacitor'' class (96.23% F1-score), keeping an F1-score above 96.00%. For the remaining classes, however, the value of this metric is reduced. In particular, the worst results are found in the ''IC'' class (90.80% F1-score), which has low precision and recall (91.73% precision, 89.90% recall), and the ''transistor'' class, which has a low precision (87.88%).
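The post-quantization figures above are measured after deploying the compiled model on the target device. As an illustration of how such predictions can be collected on the board, the following sketch uses the VART Python API; the <code>.xmodel</code> file name is hypothetical, it assumes the quantized graph has already been compiled for the DPU, and the buffer dtype and scaling may differ depending on the Vitis AI version and DPU configuration.
<syntaxhighlight lang="python">
# Sketch: run one preprocessed image through the DPU with the VART runner
# and return the predicted class index. Names are placeholders.
import numpy as np
import vart
import xir

graph = xir.Graph.deserialize("component_classifier.xmodel")
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(s for s in subgraphs
                    if s.has_attr("device") and s.get_attr("device").upper() == "DPU")
runner = vart.Runner.create_runner(dpu_subgraph, "run")

in_dims  = tuple(runner.get_input_tensors()[0].dims)    # e.g. (1, 224, 224, 3)
out_dims = tuple(runner.get_output_tensors()[0].dims)   # e.g. (1, 6)

def classify(image):
    """Run a single preprocessed image on the DPU and return argmax class."""
    input_data  = np.zeros(in_dims,  dtype=np.float32, order="C")
    output_data = np.zeros(out_dims, dtype=np.float32, order="C")
    input_data[0] = image
    job_id = runner.execute_async([input_data], [output_data])
    runner.wait(job_id)
    return int(np.argmax(output_data[0]))
</syntaxhighlight>
Looping <code>classify</code> over the test subset yields the prediction array used to compute the accuracy and F1-score figures reported above.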