Qwen2.5 72B Fully Tested (Coding, Logic and Reasoning, Math)
- Published: Feb 5, 2025
- In this video, we put the Qwen2.5 72B model to the test across a variety of tasks to see how it performs. Qwen2.5 is the latest and most powerful model in the Qwen series, boasting 72.7 billion parameters and a range of advanced features, including improved coding capabilities, long-context handling, and multilingual support.
We challenge Qwen2.5 with several coding tasks, including building a GUI for converting WAV files to MP3, creating a calculator with a user-friendly interface, and developing the classic Snake game using Pygame. We also test its logic and reasoning skills with puzzles like identifying a heavier ball among eight identical ones and solving water measurement problems. Finally, we tackle some math problems to see how well Qwen2.5 handles basic arithmetic and practical calculations.
Watch as we reveal Qwen2.5’s performance and uncover its strengths and limitations. This comprehensive test showcases the model’s impressive capabilities and highlights areas for improvement.
Patreon: / aifusion
#Qwen2.5 #Qwen2.5_72B #AI #ArtificialIntelligence #LargeLanguageModels #CodingChallenges #Python #Pygame #BackgroundRemoval #LogicPuzzles #Mathematics #ModelTesting #TechReview #AIModels #QwenSeries #MachineLearning #AIResearch #TechInsights #DataScience #AIProgramming #qwen2 #qwen2.5-72b
Qwen has finally qwenched my thirst for a fine open LLM with good reasoning 😃😃😃
Would you tell us how to use Gradio with Qwen?
To run and chat with any model locally, not just Qwen 2.5, I recommend using LM Studio, as it provides a ready-to-use UI. However, if you're looking to integrate local models into projects, I recommend Ollama, since it offers an OpenAI-compatible API that lets you use local models in your own applications.
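For anyone who wants to wire a local model into their own Gradio UI, here is a minimal sketch, assuming Ollama is running locally with a Qwen2.5 model already pulled (the qwen2.5:72b tag and the default port 11434 are assumptions) and that the openai and gradio packages are installed:

```python
# Minimal Gradio chat UI backed by a local model served through Ollama's
# OpenAI-compatible API. Assumes `ollama serve` is running and a model such
# as qwen2.5:72b has already been pulled (the model tag is an assumption).
import gradio as gr
from openai import OpenAI

# Ollama listens on port 11434 by default; the API key is not checked,
# but the client requires a non-empty string.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

def chat(message, history):
    # Convert Gradio's message history into the OpenAI chat format,
    # keeping only the fields the API expects.
    messages = [{"role": m["role"], "content": m["content"]} for m in history]
    messages.append({"role": "user", "content": message})
    response = client.chat.completions.create(
        model="qwen2.5:72b",  # swap in whichever local model you pulled
        messages=messages,
    )
    return response.choices[0].message.content

# type="messages" makes Gradio pass history as role/content dicts
# (requires a recent Gradio version).
gr.ChatInterface(chat, type="messages", title="Qwen2.5 via Ollama").launch()
```

LM Studio's built-in local server exposes a similar OpenAI-compatible endpoint (by default on port 1234), so the same client code should work there by pointing base_url at it and swapping the model name.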
@AIFusion-official Is it the same one you used?
Qwen2.5-Coder or Qwen2.5 72B
Which one will be better at coding?
Thanks for your question! I can't say definitively which model would be better at coding without testing them side by side. Qwen2.5 72B has a substantial advantage with its 72 billion parameters, while Qwen2.5-Coder currently only has two versions available: one with 1.5 billion parameters and another with 7 billion. Although the Coder is specialized for coding tasks, the significant difference in parameter count could impact performance.
How does this compare to Mistral Large 2?
Which website are you using?
Spaces on Hugging Face.