We worked these examples using the Wolfram Language. Socratica offers a pro course, 'Mathematica Essentials,' providing key concepts for mastering Wolfram products: www.socratica.com/courses/mathematica-essentials
"because cats are not vegan they should eat meat" vs "because cats are vegan they should not eat meat" Bag of Words: "It's the same sentence 🤷" In seriousness: is there a way around situations like this, for example by binding the "not" more tightly, or is this simply out of scope for this approach, and the only relevant features are cats and whether or not they are vegan, but with no conclusion if they actually are vegan?
We worked these examples using the Wolfram Language. Socratica offers a pro course, 'Mathematica Essentials,' providing key concepts for mastering Wolfram products:
www.socratica.com/courses/mathematica-essentials
I didn't expect Foundation, The Adventures of Sherlock Holmes, and War and Peace to be in this video as examples.
Good one, team. It's about time we learned about algorithms before they take over.
I think this is really interesting.
"because cats are not vegan they should eat meat"
vs
"because cats are vegan they should not eat meat"
Bag of Words: "It's the same sentence 🤷"
In seriousness: is there a way around situations like this, for example by binding the "not" more tightly to the word it negates? Or is this simply out of scope for the approach, so that the only usable features are "cats" and "vegan", with no conclusion about whether cats actually are vegan?
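One common workaround is exactly what the comment suggests: bind "not" to its neighbor by using n-gram features instead of single words. A minimal Python sketch (an illustration, not code from the video) showing that unigram bags are identical for the two sentences while bigram bags are not:

```python
from collections import Counter

def bag_of_words(text):
    """Unigram bag of words: token counts, word order ignored."""
    return Counter(text.lower().split())

s1 = "because cats are not vegan they should eat meat"
s2 = "because cats are vegan they should not eat meat"

# Both sentences contain exactly the same multiset of tokens,
# so their unigram bags are indistinguishable.
assert bag_of_words(s1) == bag_of_words(s2)

def bag_of_bigrams(text):
    """Bigram bag: counts of adjacent word pairs, which keeps "not"
    attached to its neighbor ("not vegan" vs. "should not")."""
    tokens = text.lower().split()
    return Counter(zip(tokens, tokens[1:]))

# The bigram bags differ: ("not", "vegan") occurs only in s1.
assert bag_of_bigrams(s1) != bag_of_bigrams(s2)
assert ("not", "vegan") in bag_of_bigrams(s1)
assert ("not", "vegan") not in bag_of_bigrams(s2)
```

Bigrams (or negation-scope marking, where tokens after "not" are rewritten as e.g. "not_vegan") recover some word order at the cost of a much larger, sparser feature space.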
I just wrote a small tokenizer to fit my needs; now I feel like I have to expand it massively.
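For readers curious what a "small tokenizer" might look like, here is a minimal regex sketch in Python (an illustration, not the commenter's code). Real tokenizers handle far more cases, such as URLs, hyphenation, and Unicode, which is where the massive expansion comes in:

```python
import re

# Match a word (optionally with an internal apostrophe, as in "don't"),
# or any single non-space punctuation character.
TOKEN_RE = re.compile(r"\w+(?:'\w+)?|[^\w\s]")

def tokenize(text):
    """Split text into word and punctuation tokens."""
    return TOKEN_RE.findall(text)

assert tokenize("Don't panic: it's only 42!") == \
    ["Don't", "panic", ":", "it's", "only", "42", "!"]
```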
Thanks for the video.
What about lemmatization? Isn't it used here?
WordCount[text]. Where are you taking these functions from?
This is a built-in function in the Wolfram Language.
WordCount["string"] gives the total number of words in string.