What a cool project! Would love to pick your brain some time about something similar I'm working on.
This was my Final Year Project, although my angle was to have it detect at night near human settlements, since wild animals usually come out around that time. Cool stuff, mine ended at the prototyping phase due to limited budget, but it's kinda cool to see you take things much further.
Awesome! This is just a prototype at this stage, I'm currently working on the next version.
That's nice! If you left it somewhere with a lot of animals it could create a big dataset xD
Yeap!
wonder how many species it detected total
Species identification is one of the things I'm working on now!
@LukeDitria great keep up the good work !
How expensive was the training?
I didn't do any training?
hey
I just preprocessed the data, building the tokens, vocab, and everything from scratch using PyTorch and NLTK, and trained a model that predicts the next word. I saved the model for later use, then decided to train it on a different dataset. But when I loaded the previous model to train on the new dataset, I got a vocab size mismatch error, because the dataset I used to train the model the first time is different from the one I used the second time. So I'm wondering what I can do to train the model on a different dataset starting from my own model that was already trained on some data.
I'm using the transformer architecture for this.
I hope I explained it well, my English is not that good.
I'm using a word-level tokenizer.
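One common way to handle this vocab mismatch (a sketch, not necessarily what was discussed on Discord): extend the old vocab with the new dataset's words instead of rebuilding it, then grow the vocab-sized layers (embedding and output projection) by copying the trained rows into a larger tensor. The vocab contents and sizes below are made-up placeholders for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical example: the old model had a 5-word vocab, and the new
# dataset introduces 3 unseen words, giving a merged vocab of 8 entries.
old_vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "mat": 4}
new_words = ["dog", "ran", "fast"]

# 1) Extend the old vocab rather than rebuilding it, so the old indices
#    keep their meaning and the saved embedding rows stay valid.
vocab = dict(old_vocab)
for w in new_words:
    if w not in vocab:
        vocab[w] = len(vocab)

embed_dim = 16
# Stand-in for the embedding layer loaded from the saved checkpoint.
old_embedding = nn.Embedding(len(old_vocab), embed_dim)

# 2) Create a larger embedding and copy the trained rows into it; the
#    rows for new words keep their random init and are learned during
#    fine-tuning on the new dataset.
new_embedding = nn.Embedding(len(vocab), embed_dim)
with torch.no_grad():
    new_embedding.weight[: len(old_vocab)] = old_embedding.weight

# The same row-copy trick applies to the final vocab-sized nn.Linear
# that produces the next-word logits.
print(new_embedding.weight.shape)  # torch.Size([8, 16])
```

The key point is that merging (rather than rebuilding) the vocab preserves the old word-to-index mapping, so the copied embedding rows still line up with the tokens they were trained on.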
Replied to your discord comment