Deepak John Reji
  • 114 videos
  • 155,812 views
Podcast #55 - Data as a Product and Data Products
This is a conversation with Supreet Kaur, where we dive into the fascinating world of data products and their growing significance in today's technological landscape. We explore what defines a data product and discuss how organisations can leverage these powerful tools to gain a competitive edge.
Join us as Supreet breaks down the key factors that make data products effective and examines their impact on traditional data management and governance practices. We'll highlight real-world examples of successful data products that have revolutionized industries and businesses, showcasing their transformative power.
Developing and monetizing data products comes with its own set of challenges and c...
94 views

Videos

Podcast #53 - Data-Driven Insights: Unleashing the Power of Measurement and Experimentation
280 views • a year ago
This is a conversation with Vanessa Pizante, where we delve into the realm of data-driven insights, uncovering the transformative power of measurement and experimentation. In this episode, we explore the profound impact of these practices on decision-making and overall business success. Discover how a scalable and adaptable measurement framework can be a game-changer for businesses, enabling th...
Podcast #52 - Fast Forward: How to Harness the Potential of AI for a Sustainable Future
119 views • a year ago
This is a conversation with Alice Schmidt, a prominent face in the realm of global sustainability, business consultancy, and authorship. With 25 years of experience at the crossroads of social, environmental, and economic affairs, Alice's insights are invaluable. Uncover the future possibilities of AI as we discuss harnessing its potential for a more sustainable future. Alice Schmidt Alice Schm...
Question Answering from PDF using OpenAI ChatGPT API and GPT4ALL Library
3.7K views • a year ago
In this comprehensive tutorial, we'll explore how to build an intelligent question-answering system that can extract answers from PDF documents. By combining the cutting-edge power of the OpenAI ChatGPT API and leveraging the open-source embedding capabilities of the GPT4ALL library, we'll show you how to create a robust and accurate solution to tackle this challenging task. 📋 Outline: 1. Setti...
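The retrieval step this tutorial describes (embed the PDF chunks, embed the question, pick the closest chunk, then send it to the ChatGPT API) can be sketched with a toy bag-of-words embedding standing in for the GPT4ALL embeddings; the chunk texts and function names below are illustrative only, not the tutorial's actual code.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a stand-in for real GPT4ALL embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunk(chunks, question):
    # Return the PDF chunk most similar to the question; in the tutorial,
    # this chunk plus the question would then be sent to the ChatGPT API.
    q = embed(question)
    return max(chunks, key=lambda c: cosine(embed(c), q))

chunks = [
    "The invoice total is 250 dollars.",
    "The contract starts in January 2023.",
]
print(top_chunk(chunks, "When does the contract start?"))
```

Swapping in real embeddings only changes `embed`; the retrieve-then-ask structure stays the same.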
Information Extraction with GPT4ALL Models and Langchain Components | Video Tutorial
2.6K views • a year ago
In this video tutorial, you will learn how to harness the power of the GPT4ALL models and Langchain components to extract relevant information from a dataset efficiently and with minimal lines of code. This tutorial will equip you with the knowledge to effectively retrieve valuable insights from your data. We will begin by introducing the GPT4ALL ecosystem (docs.gpt4all.io/index.html). You will...
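As a rough sketch of the pattern such an extraction pipeline follows (build a prompt per record, send it to the model, parse the structured reply), with the LLM call itself left out and the field names purely illustrative:

```python
PROMPT_TEMPLATE = (
    "Extract the person and the organisation from the text.\n"
    "Reply exactly as 'person: <name>; organisation: <org>'.\n"
    "Text: {text}"
)

def build_prompt(text):
    # In the tutorial this string would be sent to a GPT4ALL model
    # through Langchain components; here we only construct it.
    return PROMPT_TEMPLATE.format(text=text)

def parse_reply(reply):
    # Turn "person: Ada; organisation: Acme" into a dict.
    fields = {}
    for part in reply.split(";"):
        key, _, value = part.partition(":")
        if key.strip():
            fields[key.strip()] = value.strip()
    return fields

print(build_prompt("Ada joined Acme."))
print(parse_reply("person: Ada; organisation: Acme"))
```

Constraining the reply format in the prompt is what makes the cheap string parse on the way out reliable.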
Podcast #51 - The Future of Textile Manufacturing
3.2K views • a year ago
This is a conversation with Garett Gerson, the visionary founder and CEO of VARIANT3D. Join us as we delve into the future of textile manufacturing and explore how 3D knitting technology is revolutionizing the industry. Discover how VARIANT3D's disruptive platform paves the way for sustainable, customizable, and on-demand textiles with near-zero waste. From discussing the impact on the environm...
Podcast #50 - Empowering Data Science and Open-Source Python
90 views • a year ago
This is a captivating conversation with Sophia Yang, a Senior Data Scientist and Developer Advocate at Anaconda. Join us as we delve into the world of data science and open-source development with one of the industry's most inspiring voices. Sophia's extensive contributions to the Python open-source community, including her authorship of various libraries such as condastats, cranlogs, PyPowerUp...
Podcast #49 - Redefining Fashion: A Deep Dive into Circular Consumption and Sustainable Fashion
199 views • a year ago
This is a conversation with Sarah Garner, a visionary entrepreneur who is revolutionizing the fashion industry with her sustainable and circular fashion platform, Retykle. Join us as we sit down with Sarah to delve into her inspiring journey and the incredible impact she is making on the world. In this Podcast, we'll explore Sarah's mission to create a more sustainable future for fashion, tackl...
Train your custom Speech Recognition Model with Hugging Face models
16K views • a year ago
This tutorial will show you how to train a custom voice recognition model using Hugging face models. With the increasing popularity of voice-enabled devices and services, having accurate and reliable voice recognition is crucial. By training your own custom voice recognition model, you can improve the accuracy of your voice-enabled applications and services, and tailor them to your specific nee...
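One small, self-contained piece of that fine-tuning workflow is building the character vocabulary for the CTC tokenizer from the training transcripts: Wav2Vec2-style tokenizers replace the space with a "|" word delimiter and add [UNK]/[PAD] tokens. A sketch, assuming plain lower-case transcripts (the transcripts themselves are made up):

```python
import json

def build_ctc_vocab(transcripts):
    # Collect every character that appears in the training transcripts.
    chars = sorted(set("".join(transcripts)))
    vocab = {c: i for i, c in enumerate(chars)}
    # Wav2Vec2's CTC tokenizer uses "|" as the word delimiter instead of a space.
    if " " in vocab:
        vocab["|"] = vocab.pop(" ")
    vocab["[UNK]"] = len(vocab)
    vocab["[PAD]"] = len(vocab)
    return vocab

vocab = build_ctc_vocab(["hello world", "hi there"])
# The tokenizer is typically created from a vocab.json written like this:
print(json.dumps(vocab, indent=2))
```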
Podcast #48 - Beyond the Code: Navigating the Ethical Landscape of AI
396 views • a year ago
This is a conversation with Ravit Dotan, an expert in AI ethics and responsible AI governance. In this episode, we delve into the crucial topic of AI ethics implementation and ensuring it is grounded in genuine moral principles that respect all stakeholders. Ravit shares her insights on the need for a comprehensive and coherent framework of ethical guidelines, guidance for AI developers and use...
Podcast #47 - How Humans Perceive Their Relationship with AI
493 views • a year ago
This is a conversation with Marisa Tschopp, a researcher and expert on the intersection of technology and human psychology. In this episode, we explore the fascinating topic of how humans perceive their relationship with artificial intelligence. Marisa takes us on a journey through the world of AI, discussing the concept of anthropomorphism and how it shapes our perceptions of technology. We de...
Podcast #46 - Unpacking the Future: A Deep Dive into Gen Z in Tech
674 views • a year ago
This is a conversation with Brooke Joseph, a 17-year-old innovator and aspiring artificial intelligence expert at The Knowledge Society (TKS). In this episode, we explore the intersection of Gen Z and technology and how Brooke has found her passion for AI and Federated Learning. We dive into how the increasing use of technology in everyday life affects the way Gen Z interacts with and understan...
Podcast #45 - Digital Transformation: Challenges from the perspective of the public sector & society
670 views • a year ago
Podcast #44 - Optimizing Job Shop Manufacturing with AI Scheduling
958 views • a year ago
Podcast #43 - Revolutionizing Mental Health Care with Digital Technology
760 views • a year ago
Podcast #42 - Thought Leadership 101: How to Stand Out in Your Industry
731 views • a year ago
Podcast #41 - ChatGPT and the Future of the Legal Profession
564 views • a year ago
Podcast #40 - Climate Optimism: Reimagining Our Future
556 views • a year ago
Podcast #39 - Grounded Language Understanding
860 views • a year ago
Podcast #38 - Large language models, Applications & Implications
1.2K views • a year ago
Podcast #36 - The role of open source in career development
449 views • a year ago
Podcast #37 - Data-centric NLP in the era of LLMs
448 views • a year ago
Create API for Question Answering pipeline using FastAPI
1.3K views • a year ago
Gibberish words detection with Python
1.3K views • a year ago
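A common way to detect gibberish words (one standard technique, not necessarily the exact method in the video) is a character-bigram model: learn transition probabilities from real words, then score new strings by their average log-probability; the training corpus here is a tiny illustrative stand-in.

```python
import math
from collections import defaultdict

def train_bigram_model(words):
    # Count character-to-character transitions, with ^/$ as word boundaries.
    counts = defaultdict(lambda: defaultdict(int))
    for word in words:
        w = f"^{word.lower()}$"
        for a, b in zip(w, w[1:]):
            counts[a][b] += 1
    # Normalise counts into transition probabilities.
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def avg_log_prob(word, probs, floor=1e-6):
    # Unseen transitions get a tiny floor probability instead of zero.
    w = f"^{word.lower()}$"
    logs = [math.log(probs.get(a, {}).get(b, floor)) for a, b in zip(w, w[1:])]
    return sum(logs) / len(logs)

corpus = ["hello", "world", "water", "matter", "letter", "python",
          "machine", "learning", "language", "training", "words"]
probs = train_bigram_model(corpus)
print(avg_log_prob("water", probs))   # mildly negative: plausible English
print(avg_log_prob("zxqjv", probs))   # strongly negative: gibberish
```

In practice the model is trained on a large word list and a score threshold is tuned on labelled examples.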
Invoke ChatSonic API with Python
1.3K views • a year ago
Build Question Answering pipeline with Transformers
836 views • a year ago
Podcast #35 - Content Creation as a Viable Career: Tips, Challenges, and Success Stories
389 views • a year ago
Podcast #34 - Azure Machine Learning - ML as a Service
622 views • a year ago
Clustering with embed-clustering package
803 views • a year ago
Biomedical Named Entity Recognition with Transformers
6K views • a year ago

Comments

  • @Monchalance1 • 1 month ago

    Is he from Asia

  • @AdityaPatel-f1c • 1 month ago

    Your voice_cloning Python module has several issues

    • @deepakjohnreji • 11 days ago

      This experiment was done as a research project to test out previous approaches. This package has a lot of flaws, but these approaches are a great alternative to the paid services available.

  • @ShashwatSingh-j4w • 1 month ago

    Please take a look at this code and tell me why it is not giving output for sentences that are outside the training data.

    import random
    import spacy
    from spacy.training import Example, offsets_to_biluo_tags
    from spacy.util import minibatch

    nlp = spacy.load("en_core_web_md")

    def validate_alignment(text, entities):
        doc = nlp.make_doc(text)
        biluo_tags = offsets_to_biluo_tags(doc, entities)
        print(f"Text: '{text}'")
        print(f"Entities: {entities}")
        print(f"BILUO Tags: {biluo_tags}")

    TRAINING_DATA = [
        ("My name is John", {"entities": [(11, 15, "PERSON")]}),
        ("I am John", {"entities": [(5, 9, "PERSON")]}),
        ("John this side", {"entities": [(0, 4, "PERSON")]}),
        ("Hello, my name is Jane Doe.", {"entities": [(17, 26, "PERSON")]}),
        ("Jane is my friend.", {"entities": [(0, 4, "PERSON")]}),
        ("I spoke to John yesterday.", {"entities": [(15, 19, "PERSON")]}),
        ("John and Jane are working together.", {"entities": [(0, 4, "PERSON"), (9, 13, "PERSON")]}),
        ("Delhi is a great city.", {"entities": [(0, 5, "PLACE")]}),
        ("Mumbai is a great city.", {"entities": [(0, 6, "PLACE")]}),
        ("Kolkata is a great city.", {"entities": [(0, 7, "PLACE")]}),
        ("Cheenai is a great city.", {"entities": [(0, 8, "PLACE")]}),
        ("Patna is a great city.", {"entities": [(0, 5, "PLACE")]}),
        ("Lucknow is a great city.", {"entities": [(0, 7, "PLACE")]}),
        ("The conference will be held in Delhi and Mumbai.", {"entities": [(32, 37, "PLACE"), (42, 48, "PLACE")]}),
        ("Add the pipeline name Pipeline1.", {"entities": [(22, 31, "PIPELINE")]}),
        ("Add the pipeline name PipelineXY.", {"entities": [(22, 32, "PIPELINE")]}),
        ("Add the pipeline name PipelineA.", {"entities": [(22, 31, "PIPELINE")]}),
        ("Add the pipeline name PipelineTest.", {"entities": [(22, 34, "PIPELINE")]}),
        ("My pipeline name is PipelineAB.", {"entities": [(22, 31, "PIPELINE")]}),
        ("The latest version is Pipeline2.", {"entities": [(21, 30, "PIPELINE")]}),
        ("We are using Pipeline3 for this project.", {"entities": [(19, 28, "PIPELINE")]}),
        ("The pipeline PipelineX was used for processing data from the SourceName MongoDb.", {"entities": [(4, 14, "PIPELINE"), (37, 44, "SOURCE")]}),
        ("PipelineY is the new update and it will be integrated with PipelineAB.", {"entities": [(0, 9, "PIPELINE"), (37, 45, "PIPELINE")]}),
        ("The SourceName will be MongoDb", {"entities": [(23, 30, "SOURCE")]}),
        ("The SourceName will be Postgres", {"entities": [(23, 31, "SOURCE")]}),
        ("The SourceName will be SQL", {"entities": [(23, 26, "SOURCE")]}),
        ("The SourceName will be SNOWFLAKE", {"entities": [(23, 32, "SOURCE")]}),
        ("Give Source as the Postgres", {"entities": [(19, 27, "SOURCE")]}),
        ("Give Source as the MongoDb", {"entities": [(19, 26, "SOURCE")]}),
        ("I want my SourceName to be SQL and DestinationName to be MongoDb.", {"entities": [(27, 30, "SOURCE"), (42, 49, "DESTINATION")]}),
        ("I can take source as SNOWFLAKE.", {"entities": [(21, 30, "SOURCE")]}),
        ("The DestinationName is MongoDb", {"entities": [(23, 30, "DESTINATION")]}),
        ("The DestinationName is Postgres", {"entities": [(23, 31, "DESTINATION")]}),
        ("The DestinationName is SQL", {"entities": [(23, 26, "DESTINATION")]}),
        ("I want DestinationName as SNOWFLAKE", {"entities": [(26, 35, "DESTINATION")]}),
        ("I want my DestinationName to be SQL and SourceName to be Postgres.", {"entities": [(32, 35, "DESTINATION"), (50, 57, "SOURCE")]}),
        ("The Script will be a Spark.", {"entities": [(21, 26, "SCRIPT")]}),
        ("The Script will be a Python.", {"entities": [(21, 27, "SCRIPT")]}),
        ("I am using Python for my Script.", {"entities": [(15, 21, "SCRIPT")]}),
        ("The chosen Script is Spark and it will work with PipelineA.", {"entities": [(18, 23, "SCRIPT"), (42, 51, "PIPELINE")]}),
        ("I am choosing left-inner join", {"entities": [(14, 24, "JOINS")]}),
        ("I am choosing left-outer join", {"entities": [(14, 24, "JOINS")]}),
        ("The process includes a cross join", {"entities": [(18, 23, "JOINS")]}),
        ("Using a left join is preferred over an inner join", {"entities": [(9, 18, "JOINS"), (32, 41, "JOINS")]}),
        ("I want a cross join", {"entities": [(9, 14, "JOINS")]}),
        ("I want a inner join", {"entities": [(9, 14, "JOINS")]}),
    ]

    for text, annotations in TRAINING_DATA:
        validate_alignment(text, annotations["entities"])

    if "ner" not in nlp.pipe_names:
        ner = nlp.add_pipe("ner", last=True)  # spaCy v3: add the component by name
    else:
        ner = nlp.get_pipe("ner")

    custom_labels = ["TECHNOLOGY", "PIPELINE", "PLACE", "SOURCE", "DESTINATION", "SCRIPT", "JOINS"]
    for label in custom_labels:
        if label not in ner.labels:
            ner.add_label(label)

    train_data = []
    for text, annotations in TRAINING_DATA:
        doc = nlp.make_doc(text)
        train_data.append(Example.from_dict(doc, annotations))

    optimizer = nlp.begin_training()
    for epoch in range(50):
        random.shuffle(train_data)
        losses = {}
        for batch in minibatch(train_data, size=8):
            for example in batch:
                nlp.update([example], drop=0.5, losses=losses)
        print(f"Epoch {epoch} - Losses: {losses}")

    nlp.to_disk("/home/datatroops/Session")
    nlp = spacy.load("/home/datatroops/Session")

    def extract_entities(text):
        doc = nlp(text)
        return {ent.label_: ent.text for ent in doc.ents}

    texts = [
        "PipelineXYZ is the new.",
        "The Pipeline name is PipelineA.",
        "The SourceName will be SNOWFLAKE",
        "The DestinationName is MongoDb",
        "The Script will be a Spark.",
        "PipelineXY is the new update.",
        "I am choosing lefti join",
        "I want my SourceName to be SNOWFLAKE",
        "SnowFlake",
        "I can take source as SNOWFLAKE.",
        "Give source as SNOWFLAKE.",
        "My script will be Spark.",
        # "My name is John.",
        # "My name is Shashwat.",
        "I want a cross join",
        "Using a left join is preferred over an inner join",
        "I want a inner join",
        "I want an outer join",
    ]
    for text in texts:
        print(f"Text: '{text}'")
        print("Extracted entities:", extract_entities(text))
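One concrete issue worth checking in the pasted training data, independently of spaCy: several character offsets do not line up with the intended entity text (for example, (17, 26) in "Hello, my name is Jane Doe." selects " Jane Doe" with a leading space; misaligned spans become "-" in the BILUO tags and contribute nothing to training). A small stdlib-only sanity check (the helper name check_spans is illustrative):

```python
def check_spans(examples):
    # Flag any annotated span whose selected text has surrounding whitespace,
    # which cannot be aligned to token boundaries.
    problems = []
    for text, ann in examples:
        for start, end, label in ann["entities"]:
            span = text[start:end]
            if span != span.strip():
                problems.append((text, span, label))
    return problems

data = [
    ("My name is John", {"entities": [(11, 15, "PERSON")]}),          # aligned
    ("Hello, my name is Jane Doe.", {"entities": [(17, 26, "PERSON")]}),  # off by one
]
issues = check_spans(data)
for text, span, label in issues:
    print(f"Misaligned {label} span {span!r} in {text!r}")
```

Running this over the full TRAINING_DATA before training surfaces every span that needs its offsets corrected.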

  • @rajkumarj2117 • 2 months ago

    Is there any model for resumes?

  • @eliedisso1420 • 2 months ago

    Great vid, I just have a quick question: how do I make a permanent link to share with others?

  • @LuckyPratama71 • 2 months ago

    Is there any annotation tool for preparing spaCy NER data?

  • @VelazquezJFP • 3 months ago

    Thank you!

  • @Foodie_Place • 4 months ago

    Hello sir, does this model detect filler words?

  • @sumanpathak-r6p • 4 months ago

    It's not working: ner_prediction(corpus=doc, compute='cpu') raises AttributeError: 'DataFrame' object has no attribute 'append'

  • @vinayk9490 • 5 months ago

    Instead of training an NER model, is there any way to pass custom data into the spaCy model directly?

  • @2000coque • 5 months ago

    Good video, thanks so much. You helped me a lot.

  • @ZakariyaFirachine • 6 months ago

    Hey, can you please provide the training notebook? Thanks in advance.

    • @deepakjohnreji • 6 months ago

      Hi, the notebook is not shared; the research paper has the details for training the model.

  • @GeetikaBansal-yu3mx • 6 months ago

    Hi, quick question: I trained the model as you suggested, but when I load the best model and test it on a few docs, it returns the docs themselves instead of the entities. Can you suggest why this would be the case?

    • @deepakjohnreji • 6 months ago

      Hi, have you used the model-calling code correctly?

  • @ozant1120 • 6 months ago

    Works great, but I have a question: how can I calculate the precision, recall, F1, and accuracy scores?

  • @hayarmen2807 • 6 months ago

    Good afternoon! Could you tell me, have you published a file with the training code for the model? I really like your work and I want to develop in this field!

    • @deepakjohnreji • 6 months ago

      Hi, thank you for watching. I haven't published the code files yet; the research paper has the details of the model: journals.plos.org/digitalhealth/article?id=10.1371%2Fjournal.pdig.0000152

  • @PreetiS-x3u • 7 months ago

    Hello, I ran the codes and trained the model on the entire dataset, but when I run the inference code, the predictions are empty. Any idea why? Could it have anything to do with the fact that I don't have pytorch_model.bin in my model folder, but model.safetensors instead?

  • @suen-tech • 7 months ago

    Thx

  • @jodybb2151 • 7 months ago

    I'm just here to be nosy... this man is intelligent and fine. Ok byee

  • @ravinatarajan4894 • 8 months ago

    Thanks Deepak for your guidance. I was struggling to get LocalDocs working with GPT4All, and your method of using CSV files works better than LocalDocs. I have a question on further enhancing this. How do I get the model to: 1) list more than one result? I tried k=4, but that did not give me more than one result. 2) summarize information? For example, I want it to do simple things like 'How many novels were written by Tagore?' from a list of his works. Thanks in advance for your additional help.

  • @dailytalkingpaper • 8 months ago

    Is the model you created based on the whole dataset?

  • @abhiksarkar3859 • 8 months ago

    Very useful work, but I am getting AttributeError: 'DataFrame' object has no attribute 'append'. Can you please recheck/update the code?
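A likely cause (an assumption about the commenters' environments, not something confirmed in the thread): pandas 2.0 removed DataFrame.append, which older prediction code may still rely on. The usual fix is pd.concat; the column names below are made up for illustration.

```python
import pandas as pd

df = pd.DataFrame({"entity": ["aspirin"], "label": ["DRUG"]})
new_rows = pd.DataFrame({"entity": ["ibuprofen"], "label": ["DRUG"]})

# df = df.append(new_rows)  # AttributeError on pandas >= 2.0
df = pd.concat([df, new_rows], ignore_index=True)
print(df)
```

When appending in a loop, it is also faster to collect the frames in a list and concat once at the end.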

  • @kitanomegumi1402 • 9 months ago

    This is a good model that I've been using for my course project for some time. Your work is very much appreciated!

  • @shanmuganathanramalingam771 • 9 months ago

    Hi brother, how can I annotate large text data automatically? Right now I have to do it manually.

  • @transform2532 • 10 months ago

    Hey, great work dude! I am wondering where I can access the Named Entity Spacy Tagger at 1:46. Thank you

    • @deepakjohnreji • 10 months ago

      Thank you. That repo is down, unfortunately.

  • @GouravKumar-qi5gt • 11 months ago

    Sir, I am not able to import huggingsound. Please help.

    • @GouravKumar-qi5gt • 11 months ago

      It's showing the following error:
      ERROR: Ignored the following versions that require a different python version: 0.0.1 Requires-Python >=3.7,<3.10; 0.1.0 Requires-Python >=3.7,<3.10; 0.1.1 Requires-Python >=3.7,<3.10; 0.1.2 Requires-Python >=3.7,<3.10; 0.1.3 Requires-Python >=3.7,<3.10; 0.1.4 Requires-Python >=3.7,<3.10; 0.1.5 Requires-Python >=3.7,<3.10
      ERROR: Could not find a version that satisfies the requirement torch!=1.12.0,<1.13.0,>=1.7 (from huggingsound) (from versions: 2.0.0, 2.0.1, 2.1.0)
      ERROR: No matching distribution found for torch!=1.12.0,<1.13.0,>=1.7

    • @deepakjohnreji • 11 months ago

      @@GouravKumar-qi5gt Based on the error, it seems you are not using a compatible Python version; could you upgrade and check?

    • @srisir481 • 11 months ago

      Guru @@deepakjohnreji, please play the input and output sounds

  • @abhiishekbhol • 11 months ago

    The issue with this is: if we want OpenAI to answer "I don't know" when an irrelevant question is asked, it still answers. How do we implement that?

  • @filicefilice • 11 months ago

    ❤❤❤

  • @32BitIndian • a year ago

    Does this work in an intranet environment?

  • @vasanthpragash854 • a year ago

    I really like the second cluster. It's funny to see Twitter at its worst.

  • @gerardjacobs8900 • a year ago

    "promosm" ❤️

  • @DefamsTV • a year ago

    Does it require a valid OpenAI credential?

    • @deepakjohnreji • a year ago

      Yes, you need to have OpenAI credentials for this tutorial

    • @DefamsTV • a year ago

      @@deepakjohnreji So what is the role of GPT4All in this tutorial? I thought GPT4All was an OpenAI replacement

    • @deepakjohnreji • a year ago

      @@DefamsTV Here GPT4All embeddings are used; if you want to try a completely open-source implementation, please check out this tutorial: ruclips.net/video/1cx3wOhisTg/видео.htmlsi=opSsQCyhj0kg56iE

  • @saurabhjha9817 • a year ago

    GPT4ALLEmbedding() is throwing the error "GGML assert"

    • @deepakjohnreji • a year ago

      github.com/imartinez/privateGPT/issues/428 - could you check your system specification and whether it supports the model loading?

    • @saurabhjha9817 • 11 months ago

      I am using HuggingFace embeddings for now

  • @totoedgar7487 • a year ago

    Thank you for the amazing tutorial. Can we annotate several data samples at the same time?

  • @ren417 • a year ago

    Excellent tutorial!!! It helped me to learn the custom NER, which otherwise looks difficult to follow in the spaCy documentation.

    • @deepakjohnreji • a year ago

      Thank you so much :)

    • @Raaj_ML • 3 months ago

      Yes, Spacy documentation is poor

  • @virgoinfilm • a year ago

    Such a lovely man

  • @virgoinfilm • a year ago

    the real ones are here for a reason.

  • @awesomenoone8888 • a year ago

    Good one. I am trying to build a knowledge graph using this technique, but I have got stuck. Could you please suggest how to tackle it? 1) How do I create two edges from the same source node to the same destination node? I have tried every way I know to build more than one transition edge between the same pair of nodes in the same direction. 2) How do I identify all the possible paths from the initial node to the final node once the knowledge graph (KG) is available?
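A minimal sketch of both ideas, assuming a plain dictionary representation rather than any specific graph library: parallel edges become a list of labels per (source, destination) pair, and all paths come from a depth-first search. Node names and edge labels below are made up for illustration.

```python
from collections import defaultdict

# graph[u][v] is a LIST of edge labels, so two (or more) parallel edges
# from the same source to the same destination in the same direction
# are simply two entries in that list.
graph = defaultdict(lambda: defaultdict(list))

def add_edge(u, v, label):
    graph[u][v].append(label)

add_edge("A", "B", "works_with")
add_edge("A", "B", "reports_to")   # second edge, same pair, same direction
add_edge("B", "C", "manages")
add_edge("A", "C", "mentors")

def all_paths(u, target, path=None):
    # Depth-first enumeration of every simple node path from u to target.
    path = (path or []) + [u]
    if u == target:
        return [path]
    paths = []
    for v in list(graph[u]):
        if v not in path:          # avoid revisiting nodes (cycles)
            paths.extend(all_paths(v, target, path))
    return paths

print(graph["A"]["B"])             # both parallel edge labels
print(all_paths("A", "C"))         # every path from A to C
```

In a library-backed setting the same two ideas correspond to a directed multigraph plus a simple-path enumeration routine.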

  • @apekaboom6241 • a year ago

    Great video! I have a question though: say I trained a model with a TRAIN_DATA of 300 texts, and now I have 200 more texts because the model was not accurate. Is it possible to continue training the same model on just the 200 new texts, or should I train a new model on all 500 (which will take a long time)? If there is a way, how? ^^

    • @deepakjohnreji • a year ago

      Thanks. You could try training again on top of the 300-sample model; test that approach, and if it doesn't work out, train on the complete dataset again :)

  • @koushikkumardey6069 • a year ago

    Bring more of these amazing people; loving these unscripted podcasts nowadays.

  • @shwetabhat9981 • a year ago

    Great content sir. Thank you so much. Learnt a lot from this one. Keep growing 🎉

  • @subhashachutha7413 • a year ago

    How can I access the code?

  • @mukeshkund4465 • a year ago

    Very good one. I have a special use case; let me know how to connect with you to discuss it.

    • @deepakjohnreji • a year ago

      Hi, you can reach out to me via my LinkedIn or email.

  • @EricCantori • a year ago

    Great tutorial!!! Very concise.

  • @karthikb.s.k.4486 • a year ago

    Nice. Where can we see the code for the above?

    • @deepakjohnreji • a year ago

      I have uploaded the files to my GitHub page: github.com/dreji18/GPT4ALL-Langchain

  • @NILESHNANDANTS • a year ago

    I am getting this error: ValueError("[E024] Could not find an optimal move to supervise the parser. Usually, this means that the model can't be updated in a way that's valid and satisfies the correct annotations specified in the GoldParse. For example, are all labels added to the model? If you're training a named entity recognizer, also make sure that none of your annotated entity spans have leading or trailing whitespace or punctuation. You can also use the `debug data` command to validate your JSON-formatted training data. For details, run: python -m spacy debug data --help")

    • @deepakjohnreji • a year ago

      I guess it may be the spaCy version and its dependencies; could you uninstall the current spaCy and install it again?

    • @NILESHNANDANTS • a year ago

      @@deepakjohnreji Thank you Reji... but you taught it well though :)

  • @hmmmmn6770 • a year ago

    I have this as my training data: drive.google.com/file/d/1ssBswos2TAh8OTpcdTz7iDNqU2jCti7V/view?usp=drivesdk How do I train now?

    • @deepakjohnreji • a year ago

      I have requested access to your training data.

    • @deepakjohnreji • a year ago

      I can access it now; please give more context about this data

    • @hmmmmn6770 • a year ago

      @@deepakjohnreji You have to train the agent so that when someone gives an input from text1 and text2, the agent indicates the relevancy of the given sentences as a score between 0 and 1 (0 if the sentences don't match, 1 if both sentences are equal). I used spaCy for this, but it was manual: I would write sentences by hand and then check the accuracy between the two. I never trained the algorithm to do it.

    • @deepakjohnreji • a year ago

      @@hmmmmn6770 This is a similarity-check use case; you can use any embedding model and run similarity with it.

  • @ganeshmittal1304 • a year ago

    I am getting too many errors when I run the model_training code. I have tried running it on Google Colab, but I still cannot get any results. Can you please help me?

  • @kunamgetar • a year ago

    Salam Mr. Deepak John Reji, I've tried to follow your video step by step, but when I reach step 5 (run the training code) I get the error message "TF-TRT Warning: could not find TensorRT". I have tried many fixes from the internet, but I still haven't found the right one. Can you help me? I used Google Colab for this.

    • @deepakjohnreji • a year ago

      Could you install the spaCy library again and try? In Colab you shouldn't be getting these sorts of errors; maybe opening a new kernel would help fix the issue.

  • @Jxxxxxxxxxxxxxxxxxxx • a year ago

    Can this approach be used to group Google reviews into categories?

  • @patandadakhalandar • a year ago

    Great work. Kindly provide the training notebook and the code for training the model. Thanks in advance.