Thank you very much, sir. I clearly understand those terms now. Could you tell me why the testing accuracy, precision, and recall drop below 10% while the training and validation accuracy are above 90%? The precision, recall, and F1-score values computed from my confusion matrix are all under 10. Here is the final result, sir.
Epoch 10/10
164/164 [==============================] - 95s 577ms/step - loss: 0.1356 - accuracy: 0.9502 - val_loss: 0.2857 - val_accuracy: 0.9194
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

preds = model.predict(test_data)
acc = accuracy_score(test_labels, np.round(preds)) * 100
cm = confusion_matrix(test_labels, np.round(preds))
tn, fp, fn, tp = cm.ravel()

print('CONFUSION MATRIX ------------------')
print(cm)

print('\nTEST METRICS ----------------------')
precision = tp / (tp + fp) * 100
recall = tp / (tp + fn) * 100
print('Accuracy: {}%'.format(acc))
print('Precision: {}%'.format(precision))
print('Recall: {}%'.format(recall))
print('F1-score: {}'.format(2 * precision * recall / (precision + recall)))

print('\nTRAIN METRIC ----------------------')
print('Train acc: {}'.format(np.round(hist.history['accuracy'][-1] * 100, 2)))
CONFUSION MATRIX ------------------
[[ 37 197]
[375 15]]
TEST METRICS ----------------------
Accuracy: 8.333333333333332%
Precision: 7.0754716981132075%
Recall: 3.8461538461538463%
F1-score: 4.983388704318937
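For anyone double-checking these numbers: the precision, recall, and F1 values printed above follow directly from the confusion matrix shown, where `cm.ravel()` unpacks the 2x2 matrix in the order tn, fp, fn, tp. A minimal, self-contained sketch using only the matrix values from the output above:

```python
import numpy as np

# Confusion matrix exactly as printed above: rows = true class, cols = predicted class
cm = np.array([[37, 197],
               [375, 15]])
tn, fp, fn, tp = cm.ravel()  # 37, 197, 375, 15

precision = tp / (tp + fp) * 100   # 15 / 212  -> ~7.08%
recall = tp / (tp + fn) * 100      # 15 / 390  -> ~3.85%
f1 = 2 * precision * recall / (precision + recall)  # ~4.98

print(f'Precision: {precision:.2f}%')
print(f'Recall: {recall:.2f}%')
print(f'F1-score: {f1:.2f}')
```

Note that in this matrix most positives land in the FN cell and most negatives in the FP cell, i.e. the predictions look nearly inverted relative to the labels. One common cause of this pattern (an assumption here, since the data pipeline isn't shown) is that `test_labels` and `preds` are misaligned, e.g. a Keras data generator with `shuffle=True` yielding batches in a different order than the stored labels.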
Why are you doing np.round on preds? They already should be 0s and 1s, right?
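For context on that question: if the model ends in a single sigmoid unit, `model.predict` returns probabilities in [0, 1], not hard labels, so `np.round` is what thresholds them at 0.5. A hypothetical example (the probability values here are made up for illustration):

```python
import numpy as np

# Hypothetical raw sigmoid outputs from model.predict
preds = np.array([[0.93], [0.12], [0.51], [0.49]])

# np.round thresholds at 0.5, turning probabilities into 0/1 class labels
labels = np.round(preds).astype(int).ravel()
print(labels)  # [1 0 1 0]
```

If the final layer were a 2-unit softmax instead, `np.argmax(preds, axis=1)` would be the usual way to get class labels.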
Should the FN value be more than the FP value?
Hi Rachit, where can I find the slides on the confusion matrix and other performance metrics?
Hi Chirag, you can find them on my laptop xD gimme some time, and you'll find them on GitHub too haha
@@rachittoshniwal hahaha take your time and thanks for uploading them.
Really appreciate you taking the time to read my comment and your content you make.
@@chiragsharma9430 oh it's all right!
@@chiragsharma9430 Yo, they're live now. github.com/rachittoshniwal/machineLearning/tree/master/ppts
@@rachittoshniwal yeah I see thanks for uploading them all. These will be helpful while revising things
hi