![Data Science in your pocket](/img/default-banner.jpg)
- Videos: 665
- Views: 535,311
Data Science in your pocket
India
Joined 11 Jun 2019
Welcome to Data Science In Your Pocket! Dive deep into advanced AI and Data Science topics not thoroughly covered elsewhere. Discover comprehensive insights and answers to complex questions, all in one place. Subscribe for in-depth tutorials and expert knowledge on cutting-edge data science and AI.
We have tutorials on Generative AI, Reinforcement Learning, NLP, Time Series, Graph Analytics and other major Data Science domains
Llama Coder : Build any web application using Generative AI
This video demonstrates LlamaCoder, a Generative AI application for building any web application using React. A demo of the app is also shown, including how changes can be made.
#software #react #ai #code #llama3
Views: 113
Videos
Stable Diffusion vs Flux
Views: 237 · 7 hours ago
This video compares Flux and Stable Diffusion for image generation over a set of prompts to check which is better #stablediffusion #midjourney #ai #imagegeneration
Midjourney vs Flux
Views: 264 · 7 hours ago
This video tests both Midjourney and Flux text to image models on a set of prompts side by side to check which is better #midjourney #stablediffusion #ai #imagegeneration
Flux text to image Free API
Views: 451 · 7 hours ago
This video shows how Flux, the latest open-source model by black-forest-labs, can be used with a free Hugging Face API key to generate images, competing with Stable Diffusion and Midjourney #stablediffusion #midjourney #ai #texttoimage
Google Gemma2 2B codes explained
Views: 50 · 10 hours ago
Google released the Gemma2 2B model, which has beaten GPT-3.5 on many metrics. Check out how to use it in this tutorial #google #llm #generativeai #chatgpt
GraphRAG vs RAG : Which is better? code comparison
Views: 720 · 10 hours ago
This video compares GraphRAG with standard RAG over a text dataset and a set of prompts to check which RAG performs better. Find all the code below #ai #coding #generativeai #ml
Llama 3.1 fine-tuning codes explained
Views: 447 · 15 hours ago
This video demonstrates how to fine-tune Llama 3.1 using Unsloth and LoRA, with code explanations #ai #finetuning #llm #ml #llama3
How to visualise a Knowledge Graph
Views: 182 · 15 hours ago
This video explains how to visualise a GraphRAG knowledge graph created using LLMs #ai #llm #graphs #visualization
AI vs Human? Who is better at what by AIQ
Views: 184 · 19 hours ago
How to choose the best threshold for ML classification problems ?
Views: 70 · 22 hours ago
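The idea behind threshold selection can be sketched in plain Python: sweep candidate thresholds on a validation set and keep the one that maximizes F1. This is a generic illustration, not the video's code; the scores and labels below are made up.

```python
# Sketch: pick the probability threshold that maximizes F1 on a validation set.

def f1_at_threshold(y_true, y_prob, thr):
    """F1 score when predicting positive for probabilities >= thr."""
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= thr and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= thr and y == 0)
    fn = sum(1 for y, p in zip(y_true, y_prob) if p < thr and y == 1)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(y_true, y_prob, candidates=None):
    """Return the candidate threshold with the highest F1."""
    candidates = candidates or [i / 100 for i in range(1, 100)]
    return max(candidates, key=lambda t: f1_at_threshold(y_true, y_prob, t))

# Toy validation data (illustrative only).
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_prob = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.55]
thr = best_threshold(y_true, y_prob)
```

The same sweep can optimize any metric (precision, recall, cost-weighted error) by swapping the scoring function; the default 0.5 cutoff is rarely the best choice on imbalanced data.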
Testing Llama 3.1 multimodal capabilities using Meta.ai playground
Views: 913 · 1 day ago
Chat with Llama 3.1 405B model for free
Views: 2.4K · 1 day ago
How to use GPT 3.5 Turbo after OpenAI launched GPT-4o mini?
Views: 101 · 14 days ago
GraphRAG using CSV file and LangChain
Views: 905 · 14 days ago
ChatGPT for Landing Page creation (Hubspot)
Views: 151 · 14 days ago
Instead of viewing the image at the shared path, how can I view it directly once the script runs?
You can load the webp file using Python. Check how to display an Image in a Jupyter notebook
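A minimal sketch of that reply, assuming Pillow is installed (the filename is illustrative, and the placeholder image is created only so the example is self-contained):

```python
# Sketch: open a generated image file (e.g. .webp) directly from a script.
# In a Jupyter notebook, returning the Image object on the last line of a
# cell renders it inline without opening any external viewer.
from PIL import Image

# Create a tiny placeholder image so this example runs standalone.
Image.new("RGB", (64, 64), color="steelblue").save("generated.webp")

img = Image.open("generated.webp")
img.load()          # force-read the pixel data
# img.show()        # outside a notebook, this opens the OS image viewer
print(img.size, img.mode)
```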
How to get a free Google API key?
See pinned comment
Midjourney is better
Yes, but Flux is still good
Interesting comparison! Looking forward to a more detailed one that includes memory usage and performance on subtle aspects of human images like hands, eyes, etc.
Sure, will be covering that soon
Hi, I packaged the app and it runs well on a PC with Python installed. However, when testing on a non-dev computer without Python installed, it crashes. Is there any way to resolve this?
What's the error?
The error is: ImportError: DLL load failed: The specified module could not be found. The module specified is pandas. When I test on PCs with Python installed it runs well, but on user PCs the error persists.
Need to check. It has worked on many non-technical users' PCs
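For that kind of "DLL load failed" error, one common PyInstaller workaround is to bundle pandas' compiled submodules explicitly. This is a sketch, not a verified fix: `--collect-all` exists in PyInstaller 4.x+, and `your_app.py` is a placeholder name.

```shell
# Rebuild the exe, forcing PyInstaller to collect all of pandas
# (source, binaries, and data files) into the bundle.
pyinstaller --onefile --collect-all pandas your_app.py
```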
But you talked about instruct models, which only generate text :( not images
A really nice observation. The definition I explained in that comment was from the perspective of text-based models. I'm not sure how the definition changes for multimodal LLMs, as text completion won't make sense for images. Let me check and get back to you. Thanks
Most Underrated channel for AI 🔥
Means a lot😁
There is a significant difference in pre-processing between knowledge graphs and embeddings. Although the base text may be the same, the pre-processing for knowledge graphs is richer and more detailed than for embeddings. Additionally, on larger datasets, working with knowledge graphs can be slower and more expensive. However, in a simple example, enriching the text with metadata for retrieval-augmented generation (RAG) can achieve the same or even better results than a knowledge graph.
Yepp, that's why I clearly mentioned this was done with default settings
GraphRAG crash course : datasciencepocket.gumroad.com/l/jtzbtp
excellent
Thanks☺️
my exe pops up a command prompt then closes without opening a browser tab, any suggestions?
Try using a screen recorder. Then record the crash and investigate the error
great video thank you
You are a hero
how many requests can we hit through this Groq?
Codes : datasciencepocket.gumroad.com/l/jtzbtp
Really interesting stuff!!
Good effort guys.
I followed the steps to a T but I am not getting any response text - it's stuck on loading:
from langchain import HuggingFacePipeline, PromptTemplate, LLMChain
prompt = PromptTemplate.from_template("Tell me about {entity} in short")
llm = HuggingFacePipeline(pipeline=pipeline)
chain = LLMChain(llm=llm, prompt=prompt)
chain.run('india')
The only output is warnings: "Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`. Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation."
Never mind, I changed the runtime to GPU and it gave me the result in 48 seconds
Holistic comparison. Loved it!! 🙌
Insightful 3:27
could you share this colab file please :)
All codes are here : datasciencepocket.gumroad.com/l/jtzbtp
GraphRAG crash course: datasciencepocket.gumroad.com/l/jtzbtp
Code please
is there code to visualise the knowledge graph?
Should be easy using networkx. Will cover it shortly
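A rough sketch of what that networkx approach could look like. The (subject, relation, object) triples here are invented for the demo, and the actual drawing (commented out) would additionally need matplotlib:

```python
# Sketch: visualise LLM-extracted triples as a knowledge graph with networkx.
import networkx as nx

# Hypothetical triples, standing in for GraphRAG's extraction output.
triples = [
    ("Llama 3.1", "released_by", "Meta"),
    ("GraphRAG", "builds", "knowledge graph"),
    ("knowledge graph", "stores", "triples"),
]

G = nx.DiGraph()
for subj, rel, obj in triples:
    G.add_edge(subj, obj, label=rel)

pos = nx.spring_layout(G, seed=42)   # deterministic node positions
# Drawing requires matplotlib:
#   import matplotlib.pyplot as plt
#   nx.draw(G, pos, with_labels=True, node_color="lightblue")
#   nx.draw_networkx_edge_labels(
#       G, pos, edge_labels=nx.get_edge_attributes(G, "label"))
#   plt.show()
print(G.number_of_nodes(), G.number_of_edges())
```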
Groq: ruclips.net/video/QSyRoOO4pXE/видео.htmlsi=DOabl-3yQBrAOnqC
Google Gemini: ruclips.net/video/J8ksL3oqqUE/видео.htmlsi=PtqAXKFIM2LSXWET
Would love to see a follow up using RAG and then fine tuning llama 3.1
Sure, will cover that soon
@@datascienceinyourpocket that would be awesome
Link?
huggingface.co/spaces/Nymbo/Llama-3.1-405B-Instruct
Get all the codes here : datasciencepocket.gumroad.com/l/jtzbtp
Good initiative, keep going
Nice one
Hi, can you evaluate without an OpenAI key?
Yepp, you can
@@datascienceinyourpocket do you have any videos where you used Gemini Pro for RAG and did evaluation?
This is great!
👌👌👌
please provide the notebook link
Kindly use this blog for reference: medium.com/data-science-in-your-pocket/improving-rag-using-langgraph-and-langchain-bb195bfe4b44
Short and precise!
what is the difference between the normal 405B and the 405B Instruct model?
The base 405B model is trained for text completion only. A model's Instruct version is additionally tuned for instruction following and Q&A.
@@datascienceinyourpocket Thanks for quick reply!!
Is Nixtla free or paid to use?
The API key is free to create
Hi, could you please make this exe work on Mac as well as Windows?
This works on Windows. Not sure about Mac. Try it and let me know if there are any issues