Hey, I was watching your image captioning metric video. Looking at the formulas, I thought metrics like METEOR, BLEU, CIDEr and others have a range of 0-1, but when I was going through this research paper: Exploring Video Captioning Techniques: A Comprehensive Survey on Deep Learning Methods (page 18), I saw all the metric values were like 10, 30, 64, etc. How do I convert them back to the range of 0-1?
The METEOR score ranges from 0 to 1, with 1 being a perfect match. BLEU likewise lives in 0-1, but papers usually report both multiplied by 100, i.e. as percentages. So a reported 64 is just 0.64; divide by 100 to get back to the 0-1 range. One caveat: CIDEr is different, since it can exceed 1 even before any scaling, so dividing by 100 won't necessarily put it in 0-1.
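If it helps, here's a minimal sketch of the conversion (the metric names and values are just illustrative, taken from the numbers you mentioned, not from the paper's actual table):

```python
# Scores as papers typically report them: multiplied by 100 (illustrative values)
reported = {"BLEU-4": 64.0, "METEOR": 30.0, "ROUGE-L": 10.0}

# Convert back to the 0-1 range by dividing by 100
normalized = {name: score / 100 for name, score in reported.items()}
print(normalized)  # {'BLEU-4': 0.64, 'METEOR': 0.3, 'ROUGE-L': 0.1}
```

Again, this only works for percentage-style metrics like BLEU, METEOR and ROUGE, not for CIDEr.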
Nice video I have been interested in integrating AI with AR. Now I feel motivated to start.
Go for it!
@@activelearning4386 Understood👍👍