How to Deploy a Tensorflow Model to Production

  • Published: 9 Nov 2024

Comments • 196

  • @atikkhatri6942
    @atikkhatri6942 7 years ago +1

    I dove into the world of ML using scikit-learn and now I am learning TensorFlow. I searched a lot about the deployment of models, but I was having a hard time understanding the whole mechanism. I really appreciate your effort, this is the best content on ML deployment on YouTube 👍🏻

  • @radosccsi
    @radosccsi 7 years ago +11

    I made a model in Keras. Installed Keras and TensorFlow on an AWS instance in a virtualenv, created a single Python process listening to RabbitMQ with Pika, and used Flask over WSGI to put messages on the queue. The HTML client uploads a photo and gets back an ID, then it requests the info for that ID from the server at one-second intervals. Works fine and the queuing is kind of bulletproof since it's running on a small CPU instance :)
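
    A minimal sketch of the Flask-to-RabbitMQ half of that setup (queue name, route, and file paths are assumptions, not details from the comment):

        # Flask endpoint that accepts an upload and queues a prediction job for a
        # separate worker process that holds the Keras model.
        import uuid
        import pika
        from flask import Flask, request, jsonify

        app = Flask(__name__)

        def publish(job_id, image_path):
            connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
            channel = connection.channel()
            channel.queue_declare(queue="predictions")
            channel.basic_publish(exchange="", routing_key="predictions",
                                  body=f"{job_id}:{image_path}")
            connection.close()

        @app.route("/upload", methods=["POST"])
        def upload():
            job_id = str(uuid.uuid4())
            path = f"/tmp/{job_id}.jpg"
            request.files["photo"].save(path)
            publish(job_id, path)
            # The client then polls /result/<job_id> until the worker has stored a prediction.
            return jsonify({"id": job_id})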

    • @nourhacker3734
      @nourhacker3734 7 years ago +1

      Hey rad, sounds very interesting. Where do I learn how to do this?

    • @altairpearl
      @altairpearl 7 years ago

      rad RabbitMQ. I have heard about it and thought of using it.

    • @SirajRaval
      @SirajRaval 7 years ago

      very cool

    • @shreyanshvalentino
      @shreyanshvalentino 7 years ago

      That's awesome!

  • @arjunsinghyadav4273
    @arjunsinghyadav4273 7 years ago +26

    Hey Siraj, firstly, great video.
    Request: a tutorial on how to build a deployed deep learning model that learns from live data and updates itself to a new version.

    • @2500204
      @2500204 5 years ago

      Just load the model, do model.fit(new data), and then overwrite the file using model.save() or whatever save function you are using.
      Incremental learning is the best solution for continuously updating models with new data.
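
      A rough sketch of that load-fit-save loop (the file name and dummy data below are assumptions):

          import numpy as np
          from tensorflow import keras

          # New examples collected since the last deploy (random stand-ins here).
          new_x = np.random.rand(64, 784).astype("float32")
          new_y = np.random.randint(0, 10, size=64)

          model = keras.models.load_model("model.h5")        # previously saved model
          model.fit(new_x, new_y, epochs=1, batch_size=32)    # continue training on the fresh batch
          model.save("model.h5")                              # overwrite so serving picks up the update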

    • @bharatsahu1599
      @bharatsahu1599 4 years ago

      @Shashwat don't you think it will take a lot of time to retrain with the new data included? The user won't wait forever for results.

  • @q-leveldesign5342
    @q-leveldesign5342 7 years ago

    Thank you, I have been wondering what to do with a model once trained. No one seems to be talking about this and it seems like a very important step. And yes, I have been searching furiously to figure it out. Thanks again.

  • @michaelbell6055
    @michaelbell6055 6 years ago

    Siraj... my dude, yours are the shoulders I am standing on in my job. Thank you so much for all the incredible tutorials and additional resources!!!

  • @angelomenezes6044
    @angelomenezes6044 7 years ago

    Man, you are really underrated! You deserve a lot for these great videos about ML. A big thanks from Brazil for the awesome work!!!

  • @jijojohn5168
    @jijojohn5168 7 years ago +35

    Long story short, Siraj earned around 864.84 dollars this month lol, go to 35:40. He deserves a lot more. Keep up the good work.

    • @stephk8316
      @stephk8316 7 years ago +1

      jijo john not bad for a side job, and well deserved!

    • @tamgaming9861
      @tamgaming9861 7 years ago +6

      He deserves a lot more - i wish him the best!

    • @SirajRaval
      @SirajRaval 7 years ago +19

      ha! that slipped through. cool. i'll keep it there. transparency ftw

    • @chicken6180
      @chicken6180 7 years ago

      i mean, does he not deserve it?

    • @theempire00
      @theempire00 7 years ago

      Damn, imagine what those youtubers with millions of followers earn...

  • @vijayabhaskar-j
    @vijayabhaskar-j 7 years ago +1

    I always wondered "Ok, I created a model, now what?". Thanks, Siraj!

  • @adamyatripathi2743
    @adamyatripathi2743 7 years ago +76

    His notebook is Untitled... He chose the dark path....

    • @SirajRaval
      @SirajRaval 7 years ago +11

      renamed it to demo now, so much more content coming

    • @adamyatripathi2743
      @adamyatripathi2743 7 years ago +4

      Siraj Raval Your videos are good! May the force be with you...

    • @breakdancerQ
      @breakdancerQ 5 years ago

      @@adamyatripathi2743 Naming notebooks is for noobs

  • @andresvourakis6880
    @andresvourakis6880 7 years ago

    Your explanation was on point!! Thank you Siraj

  • @KelvinMeeks
    @KelvinMeeks 6 years ago +1

    Siraj, excellent tutorial - thanks for creating this.

  • @mercolani1
    @mercolani1 6 years ago

    Loved the video, love the energy, he clearly has a deep understanding

  • @aug_st
    @aug_st 7 years ago

    Very useful. Thanks Siraj!

  • @AbhishekKrSingh-ls5xu
    @AbhishekKrSingh-ls5xu 7 years ago +3

    Hey Siraj, firstly, great video.
    Request: can you post a tutorial on TensorFlow distributed training on GPUs and Kubernetes?

  • @svin30535
    @svin30535 7 years ago

    Great topic! Thanks Siraj.

  •  7 years ago +1

    Love your teaching :) Keep it up☺

  • @ttwan690
    @ttwan690 7 years ago

    May the force be with you

  • @abhiwins123
    @abhiwins123 7 years ago

    Thanks for the end-to-end TensorFlow tutorial. The world applauds you for the AI revolution.

  • @genricandothers
    @genricandothers 7 years ago

    I barely ever comment on videos but I have got to show love for all I've learned on your channel. I've been recommending you to everyone I can find. What software do you use to do the screen background with you in the foreground by the way? I want to start a channel teaching atmospheric science and I like this style...

  • @theempire00
    @theempire00 7 years ago +8

    24:18 When I run the command:
    'docker build --pull -t $USER/tensorflow-serving-devel -f tensorflow_serving/tools/docker/Dockerfile.devel .'
    I get an error:
    'invalid argument "/tensorflow-serving-devel" for t: invalid reference format'
    Help? (On Windows 7, Docker Toolbox)
    UPDATE: The following does work:
    'docker build --pull -t tensorflow-serving-devel -f tensorflow_serving/tools/docker/Dockerfile.devel .'

  • @ShepardEffekt
    @ShepardEffekt 7 years ago

    Was waiting for this

  • @Oneillphotographyithaca1
    @Oneillphotographyithaca1 6 years ago

    So cool! This is inspiring me to make some models. :)

  • @bibhu_pala
    @bibhu_pala 6 years ago +1

    To build the docker file use :
    sudo docker build --pull -t $USER/tensorflow-serving-devel -f tensorflow_serving/tools/docker/Dockerfile.devel .
    To run :
    sudo docker run --name=tensorflow_container -it $USER/tensorflow-serving-devel

  • @Hustada
    @Hustada 7 years ago

    Thanks for sharing this. I've been wondering how to do this.

  • @xPROxSNIPExMW2xPOWER
    @xPROxSNIPExMW2xPOWER 7 years ago

    lol need this in about two weeks, thanks for a dank upload Siraj!!!! Really hope I don't run into that docker problem you had, I have over 20 docker images I think. lol 27:00 building custom linux kernels amirite lol

  • @afshananwarali9462
    @afshananwarali9462 6 years ago

    Thanks for this. It works for me.

  • @deepanshuchoudhary4598
    @deepanshuchoudhary4598 4 years ago

    Come back buddy, we miss you!

  • @larryteslaspacexboringlawr739
    @larryteslaspacexboringlawr739 7 years ago

    thank you for tensorflow video

  • @shivajidutta8472
    @shivajidutta8472 7 years ago

    I think an alternative would be to deploy the models in your code directly rather than calling a REST API. I have a model running on my iPhone and I don't see performance issues. The new chipsets are getting more and more powerful.

  • @AlienService
    @AlienService 7 years ago

    Thank you for these. I've learned a lot already. The big question and use case that I'm interested in is using ML in Blender. The goal would be to create a Blender add-on that could be trained on and manipulate the mesh of a character model. With Blender and its add-ons all written in Python, this seems doable. The mesh data can be accessed through the Blender Python API pretty easily. My question is how to best set up a system that would take a character mesh (thousands of vertex coordinates), train a model on meshes that each have a "happy" shape key, and then be able to produce a shape key on a new character mesh that also gives a happy expression.
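
    Not something from the video, but one way to frame that question is as plain regression from flattened vertex coordinates to shape-key offsets; every size and name in this sketch is an assumption:

        import numpy as np
        from tensorflow import keras

        N_VERTS = 5000                       # assumed vertex count shared by all character meshes
        # X: flattened neutral-pose vertex coordinates; Y: per-vertex offsets of the "happy" shape key
        X = np.random.rand(200, N_VERTS * 3).astype("float32")
        Y = np.random.rand(200, N_VERTS * 3).astype("float32")

        model = keras.Sequential([
            keras.layers.Dense(1024, activation="relu", input_shape=(N_VERTS * 3,)),
            keras.layers.Dense(N_VERTS * 3)  # predicted offsets for the new shape key
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X, Y, epochs=10, batch_size=16)

        # Inside a Blender add-on, these offsets would be written back to the target
        # mesh as a new shape key via bpy's shape-key API.
        new_mesh = np.random.rand(1, N_VERTS * 3).astype("float32")
        happy_offsets = model.predict(new_mesh)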

  • @ProfessionalTycoons
    @ProfessionalTycoons 6 years ago

    very good video

  • @tonydenion3557
    @tonydenion3557 7 years ago

    Nice vid man! Do you like C? (didn't see any vids about it :D)
    I would like to know more about the TensorFlow C API.
    Thanks a lot for all the knowledge you share.

    • @cameronfraser4136
      @cameronfraser4136 7 years ago +1

      My understanding is the TensorFlow C API wasn't designed to be used for production directly. If you want to deploy a model in C/C++, consider writing the inference code from scratch; it's not as bad as it sounds (inference is much simpler than training). Deep networks are mostly just a series of matrix multiplies.
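
      To make the "series of matrix multiplies" point concrete, here is a tiny forward pass in NumPy with random stand-in weights; in practice you would export the trained weights and port the same few loops to C/C++:

          import numpy as np

          def relu(x):
              return np.maximum(x, 0)

          def softmax(x):
              e = np.exp(x - x.max())
              return e / e.sum()

          # Weights and biases would normally come from the trained TensorFlow model.
          W1, b1 = np.random.rand(784, 128), np.zeros(128)
          W2, b2 = np.random.rand(128, 10), np.zeros(10)

          x = np.random.rand(784)                       # one flattened input image
          probs = softmax(relu(x @ W1 + b1) @ W2 + b2)  # the whole "network" at inference time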

    • @SirajRaval
      @SirajRaval 7 years ago +2

      more tf vids coming thx

    • @tonydenion3557
      @tonydenion3557 7 years ago

      ty for answer, world gonna change thanks to guys like you ;)

  • @JabaBanik
    @JabaBanik 7 years ago

    This is amazing, thanks Siraj. Since we are talking about production level, can you please suggest the server configuration required for TensorFlow Serving?

  • @kevinwong322
    @kevinwong322 6 years ago

    such a helpful video!

  • @harshmunshi6362
    @harshmunshi6362 7 years ago +2

    I guess you have shared enough knowledge for someone to start a company :/

  • @shreyanshvalentino
    @shreyanshvalentino 7 years ago +2

    the only useful video that you have uploaded till date!

    • @SirajRaval
      @SirajRaval 7 years ago +3

      thx what else would be useful?

    • @shreyanshvalentino
      @shreyanshvalentino 7 years ago

      I was probably too excited when I typed that, hence the exaggeration!
      You probably don't want suggestions from a crappy coder like me.
      However, as much as I love your other tutorial videos, which are informative too, they are restricted to Jupyter notebooks.
      There is no way to send the information processed there anywhere that a common person can use it.
      I started learning Django and RabbitMQ, thinking that only they could provide an interface to TensorFlow.

    • @shreyanshvalentino
      @shreyanshvalentino 7 years ago +1

      Also, I am not sure if we have used the MNIST digit-recognition classifier in your Docker setup.
      Why did we not use that and instead use Inception?
      Edit - no need to answer, it got answered at 29:48

    • @MrKemusa
      @MrKemusa 7 years ago

      Something else that could be useful: videos that showcase how to tailor out-of-the-box tutorials (e.g. the MNIST tutorial) to a completely different use case where the model is still useful (e.g. something with a dataset we've built from scratch). Sometimes there's friction going from these templates to your own use case. Eventually I figure it out, but it would be nice to have the key things to consider when going from one use case to the next.

    • @Neonb88
      @Neonb88 5 years ago

      If you want more detailed tutorials, look at Melvin L. He's really good with step-by-step solutions

  • @wasimnadaf11
    @wasimnadaf11 5 years ago

    super informative:)

  • @xtr33me
    @xtr33me 7 years ago +2

    Thanks so much for this vid! Could you by chance in the future do the same thing, but for something custom like a TensorFlow model that simply adds two floats and returns the response? The reason I ask is that I have been having a big problem trying to figure out how to set up a custom model for serving with regards to configuring the proto files and client.
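
    For what it's worth, here is a TF 1.x-style sketch of exporting exactly that kind of toy "add two floats" model with a predict signature for TensorFlow Serving; the export path and signature name are assumptions:

        import tensorflow as tf

        a = tf.placeholder(tf.float32, shape=[None], name="a")
        b = tf.placeholder(tf.float32, shape=[None], name="b")
        total = tf.add(a, b, name="total")

        with tf.Session() as sess:
            # Serving expects a numeric version subdirectory, e.g. .../add_model/1
            builder = tf.saved_model.builder.SavedModelBuilder("/tmp/add_model/1")
            signature = tf.saved_model.signature_def_utils.predict_signature_def(
                inputs={"a": a, "b": b}, outputs={"sum": total})
            builder.add_meta_graph_and_variables(
                sess, [tf.saved_model.tag_constants.SERVING],
                signature_def_map={"serving_default": signature})
            builder.save()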

  • @moelgendy_
    @moelgendy_ 7 years ago

    Great video, Siraj! Could you add resources on how to deploy Keras models?

  • @eliassocrates338
    @eliassocrates338 7 years ago

    Siraj, could you please upload the weights of the models you trained as well, since neither online nor personal training of the models is a financially viable option.

  • @afshananwarali9462
    @afshananwarali9462 6 years ago

    Please share the link to part 2 of this tutorial for pushing this to the cloud.

  • @igorpoletaev8188
    @igorpoletaev8188 7 years ago +1

    I was very surprised that Bazel kept building my custom serving client for a very long time... Does it need to compile so many sources every time I change the client code?

  • @phurien
    @phurien 7 years ago

    Hey Siraj, Love the videos. Question: I am taking the Udacity DL course, and am getting more and more into it and plan to continue on to make a career out of this. Would you recommend I switch over to Ubuntu as my primary OS or is it feasible to stay in Windows?

  • @kariuki6644
    @kariuki6644 7 years ago

    Where would i be without you?

  • @CKSLAFE
    @CKSLAFE 6 years ago +6

    So sad this tutorial is broken now, they changed the GitHub repository. Now you don't have the tensorflow folder inside serving. If anybody knows of a tutorial please let me know.

    • @afshananwarali9462
      @afshananwarali9462 6 years ago +1

      There is no tensorflow folder inside of serving on GitHub. What should I do?

    • @lakrounisanaa9156
      @lakrounisanaa9156 6 years ago

      Hi, what do you do in this case? I face the same issue.

    • @iulia2190
      @iulia2190 6 years ago

      Try building in the serving directory.

    • @jenlee6693
      @jenlee6693 6 years ago

      You can do 'bazel build -c opt tensorflow_serving/...' in the tensorflow-serving directory inside the Docker container.

    • @FZ8Yamaha
      @FZ8Yamaha 6 years ago

      According to github.com/tensorflow/serving/issues/755, it looks like we can just skip the cd tensorflow and ./configure steps.

  • @machartpierre
    @machartpierre 7 years ago

    Hey Siraj! Thanks a lot for all this amazing content.
    I am working on generative models for symbolic (MIDI) music sequences. Your videos on the topic have been very useful.
    However, I intend to run the inference / generation part on a mobile device (iOS). I am using TensorFlow and things seem to be gradually improving (more functions, more support, more documentation), but I still find it very tricky to port the model to the device (stripping the unused / unsupported nodes, optimizing, porting the generation scripts, etc.).
    Even porting the fairly simple RBM model you used for one of your videos is challenging. Any suggestion on that?
    Given that running inference on mobile devices is becoming a trend, would you care to make a video about it?

  • @themakeinfo
    @themakeinfo 7 years ago +2

    Hi @siraj, could you please explain how to deploy a Keras model to production?

  • @bibhu_pala
    @bibhu_pala 6 years ago +1

    #update
    The tensorflow submodule has been removed. You should no longer have to run TensorFlow's configure script manually.

  • @st0ox
    @st0ox 5 years ago

    "we have to deal with C++" count me in :DD

  • @thoughtsmithinnovation5432
    @thoughtsmithinnovation5432 7 years ago

    Hi Siraj, you mentioned at 28:00 that Inception has hundreds of layers. If I am not wrong, it presently has only 48 layers. Please correct me if I am wrong or if you are referring to something else.

  • @gattra
    @gattra 5 years ago +1

    Please rehearse more and these would be 10000% better

  • @Superjeka1979
    @Superjeka1979 7 years ago

    Hi Siraj, nice video! But I'm a bit confused about classification_signature and predict_signature in the MNIST example. Should I use both of them? Is there any difference between them? Why is the classification signature's input a string, etc.? Or is it just an example showing that I can use a number of signatures to query a single model?
    Thank you.

  • @theophilusananias1416
    @theophilusananias1416 7 years ago

    Siraj, please put together a video tutorial on how to generate an image from text with TensorFlow (text to image).

  • @rociogarcialuque6988
    @rociogarcialuque6988 4 years ago +3

    "If Google can use it, we can use it." is so 2017.

  • @chicken6180
    @chicken6180 7 years ago

    ok ive been convinced.... i will stop being a stubborn js scrub... *sigh* welp time to learn tf

    • @SirajRaval
      @SirajRaval 7 years ago

      i made a js video called evolutionary tetris AI last week! check it out

    • @chicken6180
      @chicken6180 7 years ago

      i know, i saw it. but as the majority of videos are in python it's working against me to be stubborn and not use that mainly

  • @LeksaJ4
    @LeksaJ4 7 years ago

    Hi Siraj, thank you so much for the videos. The bazel build failed on some error and I am going to try it again tomorrow (it might be a problem with not enough memory for Docker). However, I am kind of lost with Docker and containers. Now that I have shut it down, how do I get back to the step where I can run bazel build etc.? Thank you.

  • @abdelhaktali
    @abdelhaktali 6 years ago

    Hi Siraj
    I have trained a Keras model using ImageDataGenerator and flow_from_directory. When I deploy it in TensorFlow Serving I get the wrong class due to shuffle=True in flow_from_directory. How can I resolve this problem?
    Thanks
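
    One common culprit (offered as an assumption, not a confirmed fix for this case) is the index-to-label mapping rather than shuffling itself: flow_from_directory fixes that mapping in class_indices, and the serving side has to use the same one. A sketch, with paths assumed:

        import json
        from tensorflow.keras.preprocessing.image import ImageDataGenerator

        gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
            "data/train", target_size=(224, 224), batch_size=32)

        # e.g. {"cats": 0, "dogs": 1} -- persist it next to the exported model.
        with open("class_indices.json", "w") as f:
            json.dump(gen.class_indices, f)

        # At serving/inference time, invert the mapping to turn an argmax index into a label.
        index_to_label = {v: k for k, v in gen.class_indices.items()}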

  • @MrSanselvan
    @MrSanselvan 6 years ago

    @Siraj: Can we train the models and deploy them incrementally? Does TF Serving support multiple smaller models? If yes, how can we do it? I cannot find any help on the internet.

  • @sig7813
    @sig7813 4 years ago

    If I use a saved scaler function from sklearn for the input data, can that be loaded onto the server along with the model?
    Basically, before the model is called, I have to apply that function to every input.
    I had to use a scaler since I have many inputs and they are very different: one can be in a range of 1-3, another 50000-1000000. For that I used StandardScaler from sklearn and it works great. To get the right prediction I have to apply it to the new incoming data.
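
    TensorFlow Serving only runs the exported graph, so the scaler would typically live in whatever wrapper calls the model. A minimal sketch, with file names assumed:

        import joblib
        import numpy as np
        from tensorflow import keras

        scaler = joblib.load("scaler.pkl")           # the fitted StandardScaler from training
        model = keras.models.load_model("model.h5")

        def predict(raw_features):
            # Apply exactly the same scaling that was used at training time.
            x = scaler.transform(np.asarray(raw_features).reshape(1, -1))
            return model.predict(x)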

  • @sathyasarathi90
    @sathyasarathi90 7 years ago +1

    Siraj, I wonder if a similar strategy can be used to deploy a scikit-learn model?

    • @SirajRaval
      @SirajRaval 7 years ago +2

      absolutely loads.pickle.me.uk/2016/04/04/deploying-a-scikit-learn-classifier-to-production/

  • @simpleman5098
    @simpleman5098 7 years ago

    Hey Siraj, what software do you use to make those images like on 2:34 or 11:46 etc?

  • @heathervica1108
    @heathervica1108 7 years ago

    Awesomeeeeee.
    Hello guys, do you know if it is possible to use:
    • Variational Autoencoder Neural Networks (VAE) or
    • Generative Adversarial Networks (GANs)
    for structured data? I have seen some examples, but they are only for unstructured data such as images, audio, etc. Do you have any example with structured data? Thanks a lot

  • @vibhanshusharma3150
    @vibhanshusharma3150 7 years ago

    Any video on image localisation?

  • @drdeath2667
    @drdeath2667 4 years ago

    cardigan lol. inception network is savage

  • @yashsrivastava677
    @yashsrivastava677 7 years ago +2

    How can one do incremental training of models already deployed to serving?

    • @debu2in
      @debu2in 4 years ago

      I think once you have accumulated the data, you can wrap the phases of the model training steps in functions, then those functions in a class, and trigger the class to train the model, persist the model to disk and save the path in a db; at least this is how I do it :)
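
      The rough shape of that retrain-and-persist workflow (class name, paths, and the "registry" idea are placeholders, not anyone's actual code):

          import time
          from tensorflow import keras

          class Retrainer:
              def __init__(self, model_path):
                  self.model = keras.models.load_model(model_path)

              def train(self, x, y, epochs=1):
                  self.model.fit(x, y, epochs=epochs)

              def persist(self, export_dir="models"):
                  # Save a new versioned artifact and return its path so the caller
                  # can record it in a database / model registry.
                  path = f"{export_dir}/model-{int(time.time())}.h5"
                  self.model.save(path)
                  return path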

  • @fabregas1291
    @fabregas1291 7 years ago

    Hi, how could we use this approach of deploying a TensorFlow model to production for an Inception model re-trained using transfer learning?

  • @pietart3596
    @pietart3596 6 years ago

    Stupid question: are we using the MNIST model here, or the ImageNet model?

  • @alexp5693
    @alexp5693 7 years ago

    Hello. I hope you will answer as it's really important for me. I'm currently working on a project and my task is to generate meaningful unique text from a set of keywords. It doesn't need to be large, just a couple of sentences. I'm pretty sure I have to use an LSTM, but I cannot find any good examples of generating meaningful text. I saw a few randomly generated examples, but that's all. I would be grateful for any advice. Thank you in advance.

  • @subhankarbhattacharya2940
    @subhankarbhattacharya2940 4 years ago

    The day he can show proficiency in linear algebra and differential equations etc., I would consider him to be a data scientist... otherwise it's all smartness practiced with publicly available code.

  • @saitaro
    @saitaro 7 years ago

    Siraj, if I wanna write an ML algorithm and make a web app based on it, would learning Django be useful for this task?

  • @bhisal
    @bhisal 6 years ago

    What's the advantage of serving a model using TF Serving compared to a REST API?

  • @akashtripathi5947
    @akashtripathi5947 7 years ago

    Can you please explain how I can build and serve a CNN model using Deeplearning4j in Java?

  • @lotfiraghib7029
    @lotfiraghib7029 7 years ago

    Hello Siraj, firstly thank you for this great video. I trained a model in Python, then I saved it with train.Saver to generate my checkpoint. I want to load this model in C++; is there a way to do that?

  • @jenlee6693
    @jenlee6693 6 years ago

    After uncompressing the Inception model, do --> 'bazel-bin/tensorflow_serving/example/inception_saved_model --checkpoint_dir=inception-v3 --output_dir=inception-export', as the command in the tutorial is old and no longer works.

  • @600baller
    @600baller 7 years ago

    If I have an existing TF model and I split my data with train_test_split, what do I do if I want to see my model's predictions on the entire dataset (including the original training and testing data)?

  • @harshitagarwal5188
    @harshitagarwal5188 7 years ago

    We're still waiting for "How to tune hyperparameters"!

  • @hussain5755
    @hussain5755 7 years ago

    Siraj, can you please please recommend me a book to get started on ML? Your videos are great but I am having a hard time grasping the concepts.

  • @AaronSarkissian
    @AaronSarkissian 7 years ago +1

    I don't get this part: 32:08. How did that bazel command work outside of the Docker container?

    • @prarthana1122
      @prarthana1122 6 years ago

      Same question... the bazel command didn't work in my Docker container either. How did he do it? Could you please tell us, Siraj?

  • @adesojialu1051
    @adesojialu1051 3 years ago

    I am working on image classification and my model is in TFLite. How do I deploy it? Do I need to change anything from your video tutorial?

  • @matrixzoo8434
    @matrixzoo8434 6 years ago

    Does this mean that in order to make an ML web app I don't have to learn Django or any other Python web framework, I could just use TensorFlow?

  • @EpicMicky300
    @EpicMicky300 5 years ago

    What's the difference between a Docker image and a simple executable file?

  • @justinviola2479
    @justinviola2479 5 years ago

    How can we take that JSON output and have it display bounding boxes in the browser?

  • @MrKemusa
    @MrKemusa 7 years ago

    How would one go from building tensorflow in docker on a local CPU without CUDA support and then deploying the container to a GPU instance in the cloud with CUDA support? Would I need to build tensorflow again when I deploy the docker container to the GPU and just enable CUDA support there? Or is there a way to have CUDA support on my CPU and maintain that when I deploy the container?

  • @jenlee6693
    @jenlee6693 6 years ago

    There is no /tensorflow folder to run 'configure' in, as Google has taken it out. Doing the configure is no longer required according to Google's latest issue response. Just do 'bazel build -c opt tensorflow_serving/...' in the tensorflow-serving directory (of course without the quotes).

  • @Vijaykumar-jx8jq
    @Vijaykumar-jx8jq 5 years ago

    Hey Siraj, I want to know: I have created an image classifier in Docker and now I want to integrate it into a system written in Python. How can I do that?

  • @wahi_wahi
    @wahi_wahi 6 years ago

    When I run "bazel build -c ..",
    I get "no targets found beneath 'tensorflow_serving'".

  • @captainwalter
    @captainwalter 4 years ago

    I honestly don't get how to employ the model. At what stage do we use the neural net to make decisions about actionable data, in this case to see it decode the words?

  • @udaysah8038
    @udaysah8038 6 years ago

    I am currently facing a problem deploying my custom models where my image data is located on my local computer. Can you make a video on how to deploy custom models where the image data is on the local computer, save the models, and deploy them on Android devices?

  • @OttoFazzl
    @OttoFazzl 6 years ago

    Someone should invent Keras for TensorFlow Serving

  • @Gerald-iz7mv
    @Gerald-iz7mv 6 years ago

    How can you upload new models at runtime?

  • @sandhyakale9054
    @sandhyakale9054 4 years ago

    Why do we want to train the model? I want to deploy my chatbot on a website. Can you tell me how?

  • @bhushanvernekar5121
    @bhushanvernekar5121 7 years ago

    I am not able to find a step-by-step procedure for how to work with TensorFlow in Android Studio.

  • @audi88
    @audi88 7 years ago

    You look like the smart version of Abhishek Bacchan.

  • @bilalchandio1
    @bilalchandio1 3 years ago

    I am having an issue while deploying my deep learning model in h5 format with Flask. It works fine on my local machine; however, it has issues on my PythonAnywhere hosting server.

  • @souuu42
    @souuu42 6 years ago

    The process crashes when I try to create the Docker image; it goes on for about 10 minutes and then everything freezes. Any idea why? I have an Intel i5 processor.

  • @johnnychan6755
    @johnnychan6755 7 years ago +1

    Has anyone got an error like this, at the bazel build step? (run on Macbook Pro, OSX 10.11.6, via Docker method. With bazel 0.5.4 in Dockerfile)
    ERROR: /root/.cache/bazel/_bazel_root/f8d1071c69ea316497c31e40fe01608c/external/org_tensorflow/tensorflow/core/kernels/BUILD:2904:1: C++ compilation of rule '@org_tensorflow//tensorflow/core/kernels:conv_ops' failed (Exit 4).
    gcc: internal compiler error: Killed (program cc1plus)

    • @johnnychan6755
      @johnnychan6755 7 years ago

      Solved! See this GitHub issue thread (scroll down): github.com/tensorflow/serving/issues/227

    • @charlesaydin2966
      @charlesaydin2966 7 years ago

      Thanks a lot!

  • @RowdyReview
    @RowdyReview 5 years ago

    Check the video below for "How To Train an Object Detection Classifier Using TensorFlow 1.12 on Windows 10" (the latest one):
    ruclips.net/video/nZUxoHPFf4w/видео.html

  • @zoranrazarac
    @zoranrazarac 7 years ago

    bazel-bin/tensorflow_serving/example/inception_export: No such file or directory
    Now what?

  • @adesojialu1051
    @adesojialu1051 3 years ago

    Please, can I have a copy of your pipeline, or how do I do mine?

  • @RowdyReview
    @RowdyReview 5 years ago

    Hi Siraj, thanks for the great video.
    Please help me fix this issue: I have my own model, using the faster_rcnn_inception_v2_pets.config architecture, and I currently have trained checkpoints.
    But whenever I export the checkpoints using the command below
    bazel-bin/tensorflow_serving/example/inception_saved_model --checkpoint_dir=my-model6 --export_dir=inception-export
    I get this error:
    DataLossError (see above for traceback): Unable to open table file my-model6/model.ckpt-21292: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
    [[Node: save/RestoreV2_34 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_34/tensor_names, save/RestoreV2_34/shape_and_slices)]]
    Here we have TF=1.4 and Bazel=0.5.4.
    While training I got checkpoints like
    model.ckpt-21292.data-00000-of-00001
    model.ckpt-21292.meta
    model.ckpt-21292.index
    and I renamed them to model.ckpt-21292.
    I followed your video, where you download a pre-trained model.
    My question is: we both have the same type of checkpoints, so why am I getting the above error?
    Thank you

    • @RowdyReview
      @RowdyReview 5 years ago

      I found a solution.
      Hello all, just follow the video below and export your own model within 10 seconds:
      ruclips.net/video/w0Ebsbz7HYA/видео.html