Hello, I have a question. I bought a Coral Edge TPU to get more camera FPS, and I used the Edge TPU file downloaded in the final part. It works, but it has trouble detecting objects when the Coral is plugged in; if the Edge TPU is disconnected, it only runs at 2 FPS but it always detects the object. May I know if there's a fix for this one? Thank you!
Hi, I have encountered a problem: everything works fine until I get to "Start training a model". It displays log text, but after about 3 minutes RAM usage hits the maximum and the script just exits. Please HELP!! I need this for a school project and I'm running out of time. Thanks
I am stuck on step 5.2 (# Run training!). I'm not sure what the problem(s) is/are, but I'm getting this: "Instructions for updating: Use fn_output_signature instead. W0604 13:28:51.228111 139822568032000 deprecation.py:569] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/deprecation.py:648: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a future version."
Hello! Awesome video btw! I tried to create my own model, but when I start the training block, this error shows: "TensorFlow Addons (TFA) has ended development and introduction of new features. TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024. Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP)." Any help with this? Really appreciate it!!
I have a problem when I move to step 5. It just gets stuck here:
/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']
caused by: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so: undefined symbol: _ZN3tsl5mutexC1Ev']
warnings.warn(f"unable to load libtensorflow_io_plugins.so: {e}")
/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io.so: undefined symbol: _ZNK10tensorflow4data11DatasetBase8FinalizeEPNS_15OpKernelContextESt8functionIFN3tsl8StatusOrISt10unique_ptrIS1_NS5_4core15RefCountDeleterEEEEvEE']
warnings.warn(f"file system plugins are not loaded: {e}")
/usr/local/lib/python3.10/dist-packages/tensorflow_addons/utils/tfa_eol_msg.py:23: UserWarning: TensorFlow Addons (TFA) has ended development and introduction of new features. TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024. Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP). For more information see: github.com/tensorflow/addons/issues/2807
warnings.warn(
/usr/local/lib/python3.10/dist-packages/tensorflow_addons/utils/ensure_tf_install.py:53: UserWarning: Tensorflow Addons supports using Python ops for all Tensorflow versions above or equal to 2.10.0 and strictly below 2.13.0 (nightly versions are not supported). The versions of TensorFlow you are currently using is 2.8.0 and is not supported. Some things might work, some things might not. If you were to encounter a bug, do not file an issue. If you want to make sure you're using a tested and supported configuration, either change the TensorFlow version or the TensorFlow Addons's version. You can find the compatibility matrix in TensorFlow Addon's readme: github.com/tensorflow/addons
warnings.warn(
2023-06-01 16:23:47.452097: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
I0601 16:23:47.463217 139994178451264 mirrored_strategy.py:374] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: 40000
I0601 16:23:47.466481 139994178451264 config_util.py:552] Maybe overwriting train_steps: 40000
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I0601 16:23:47.466638 139994178451264 config_util.py:552] Maybe overwriting use_bfloat16: False
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/object_detection/model_lib_v2.py:563: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating: rename to distribute_datasets_from_function
W0601 16:23:47.495306 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/object_detection/model_lib_v2.py:563: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version. Instructions for updating: rename to distribute_datasets_from_function
INFO:tensorflow:Reading unweighted datasets: ['/content/train.tfrecord']
I0601 16:23:47.500375 139994178451264 dataset_builder.py:162] Reading unweighted datasets: ['/content/train.tfrecord']
INFO:tensorflow:Reading record datasets for input file: ['/content/train.tfrecord']
I0601 16:23:47.500573 139994178451264 dataset_builder.py:79] Reading record datasets for input file: ['/content/train.tfrecord']
INFO:tensorflow:Number of filenames to read: 1
I0601 16:23:47.500660 139994178451264 dataset_builder.py:80] Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W0601 16:23:47.500752 139994178451264 dataset_builder.py:86] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:100: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
W0601 16:23:47.503006 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:100: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:235: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.data.Dataset.map()
W0601 16:23:47.525164 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:235: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.data.Dataset.map()
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version. Instructions for updating: Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
W0601 16:23:54.089589 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version. Instructions for updating: Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version. Instructions for updating: `seed2` arg is deprecated. Use sample_distorted_bounding_box_v2 instead.
W0601 16:23:58.477204 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version. Instructions for updating: `seed2` arg is deprecated. Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead.
W0601 16:24:00.966233 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead.
Great job man, I love the work you've done ❤ I'm a complete beginner at this, and I wanted to ask if it would be possible to add to the Colab notebook a way to save progress and then continue it in another session. When I train the model I get up to 20000 steps, but then I get disconnected from Colab (I have the free version). I was thinking that before I get disconnected I could stop the training and then later (when Colab allows it again) continue it.
Hi, what I did was replace the directories in the Colab with "/mydrive/" so that everything is saved on Google Drive; for example, instead of "!mkdir /content/images" I have "!mkdir /mydrive/images". It is still quite risky because you are doing operations in your own Google Drive. That's it! I can't post my changes now because they are quite specific to my case (and I don't know if it always works), but this is how you can do it too.
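If it helps, here is a minimal sketch of the Google Drive setup this relies on in Colab (the "/mydrive" shortcut is an assumption based on the comment above):
from google.colab import drive
drive.mount('/content/gdrive')              # authorize access to your Drive
!ln -s /content/gdrive/MyDrive /mydrive     # so paths like /mydrive/images work
!mkdir /mydrive/images                      # folders created here persist across Colab sessions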
Hello! Really informative video. I just wanted to confirm the process stays more or less the same even if I'm on a Mac while making the project, right? Is there anything I should be aware of? Furthermore, is it possible to make the software detect when I am bending/bent towards the right/left and give an output, or, because it's the same person, won't it be able to recognise that? Thanks!
Thanks! It should all work inside the web browser, even if you're on a Mac. However, I still haven't written the guide for how to take the downloaded model and run it on macOS. You should take a stab at it though! In fact, I'd pay you $50USD if you can write a macOS deployment guide similar to the one I wrote for Windows (github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/deploy_guides/Windows_TFLite_Guide.md). Email me if you're interested in that - info@ejtech.io ! For detecting which way you are bent, I would try using an OpenPose model (look it up on YouTube to see what I mean).
Hi! I'm getting an error when training a custom model (Train Custom TFLite Detection Model) and a warning: TensorFlow Addons (TFA) has ended development and introduction of new features. TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024. Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP).
Great tutorial!! Great introductory experience while also providing everything needed for learning rabbit holes. If you come across Step 5 ending with ^C, it is because you are running out of memory. Either resize your images to be large/medium (~200-300KB per image), upload fewer images, or both.
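If it helps, here is a rough sketch of batch-resizing with Pillow; the folder path is a placeholder, and note that resizing after labeling would invalidate the pixel coordinates in the XML annotations, so do it before labeling:
import os
from PIL import Image

folder = 'images/all'   # placeholder - point this at your image folder
for fn in os.listdir(folder):
    if fn.lower().endswith(('.jpg', '.jpeg', '.png', '.bmp')):
        path = os.path.join(folder, fn)
        img = Image.open(path)
        img.thumbnail((1280, 1280))   # cap the longest side, keeping aspect ratio
        img.save(path, quality=90)    # re-save at moderate JPEG quality to shrink the file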
Thank you for your tutorial, it has been of great help to me. But when I upload the images folder and run the cell that distributes the data into training, validation, and test sets, it shows me zero images in total. Please guide me in this regard.
I'm having a problem with the training: it stops after a while with just a ^C. I looked at your common errors and they say to lower the batch size, which I did. I went down to a batch size of 2 and still end up with the same result.
When training the AI it shows an error that "TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024". How do I solve this?
Hello Sir, thanks for your great tutorial. I tried to run the code you shared, but I get an error at the 10:37 part, in the "set up training configuration" section. It returns "NotFoundError: /content/labelmap.pbtxt; No such file or directory". What should I do to fix that, sir?
Nice and clear step-by-step video! However, I cannot run it smoothly on my side. At section 3.3, no labelmap.pbtxt is generated, so the later step in section 4 crashes. Would you know the reason?
That was such a detailed and easy process! Thank you so much for the effort you have put into all those videos. Quick question: what if somebody wants to use RTSP streams instead of the Raspberry Pi Camera or a USB camera on the Raspberry Pi - what should be changed, and where? Thanks again.
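For what it's worth, the main change is usually just where the video source is opened; a minimal OpenCV sketch (the URL is a placeholder, and the per-frame detection code is whatever the script already does):
import cv2

cap = cv2.VideoCapture('rtsp://user:password@192.168.1.10:554/stream1')  # placeholder RTSP URL
while True:
    ret, frame = cap.read()
    if not ret:
        break                       # stream dropped or URL is wrong
    # ... run the TFLite detection on 'frame' here, same as with a USB camera ...
    cv2.imshow('Detections', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()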
I also got this problem. I think it happens because there isn't enough memory. Check your dataset, especially the size of your dataset archive! If this works for you, please reply here.
When I run the model builder test to check the installation, I get the following error; is there a fix for this??
Traceback (most recent call last):
File "/content/models/research/object_detection/builders/model_builder_tf2_test.py", line 21, in import tensorflow.compat.v1 as tf
File "/usr/local/lib/python3.10/dist-packages/tensorflow/__init__.py", line 37, in from tensorflow.python.tools import module_util as _module_util
File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/__init__.py", line 37, in from tensorflow.python.eager import context
File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/context.py", line 29, in from tensorflow.core.framework import function_pb2
File "/usr/local/lib/python3.10/dist-packages/tensorflow/core/framework/function_pb2.py", line 16, in from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorflow/core/framework/attr_value_pb2.py", line 16, in from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorflow/core/framework/tensor_pb2.py", line 16, in from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorflow/core/framework/resource_handle_pb2.py", line 16, in from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorflow/core/framework/tensor_shape_pb2.py", line 36, in _descriptor.FieldDescriptor(
File "/usr/local/lib/python3.10/dist-packages/google/protobuf/descriptor.py", line 553, in __new__ _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
Really good tutorial; I am following along for my school project. I can get as far as the training, but then it won't start training in Colab. In the log, it shows that TensorFlow Addons needs TF version 2.12 - 2.15.0; would that be the case? I can see your video using Python 3.8, while Colab currently runs Python 3.10 - could that be the reason? Could you please help? Thank you
Hello, I want to ask about running the Python script in 3.2 for splitting images into train, validation, and test folders. I have this problem: Total images: 0, Images moving to train: 0, Images moving to validation: 0, Images moving to test: 0. Looking forward to the response! Thank you!
Hmm... was your images folder named "images.zip"? Can you see any files in the "images/all" folder in the filemenu on the left side bar? What are the file extensions for your images (.jpg, .bmp, something else)?
@@EdjeElectronics Oh, the name was "car images"; I changed it and it got fixed. However, the script doesn't take in .jpeg files, does it? Because after it finished splitting, I still had some files left in the "all" folder, mostly in .jpeg format.
@@EdjeElectronics You're welcome! Anyway, I want to ask: I basically want to make an automatic license plate reader using object detection for a residential area, and I want it to read the letters and numbers on the license plate. If it were you, would you prefer to train the model so it can read the letters, or use OCR to detect the letters from the license plate? Thank you so much, looking forward to your tips, tricks, and answers.
@@eugenioverrelwong8733 I would train a model to detect the license plates, and then crop out the detected plate and use OCR to read the letters. Actually, I just saw a blog post talking about this exact topic! Check it out here: blog.roboflow.com/how-to-crop-computer-vision-model-predictions/
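For anyone implementing that, a rough sketch of the crop-then-OCR step (this assumes Tesseract and the pytesseract package are installed, and that the box coordinates come from the plate detector):
import cv2
import pytesseract   # requires the Tesseract OCR engine to be installed on the system

def read_plate(frame, xmin, ymin, xmax, ymax):
    plate = frame[ymin:ymax, xmin:xmax]                            # crop the detected plate region
    gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray, config='--psm 7')     # psm 7 = treat image as one text line
    return text.strip()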
Hello, I've been running into issues at 8:25. I keep getting errors with python claiming that there's no such file or directory as 'libcuda.so.1'. Anyone understand this error???
Hi! I'm on the last step of the process. I use the Pi Camera v3 for my camera. I don't know what's causing this, but the "frame = frame1.copy()" line gives "Error: 'NoneType' object has no attribute 'copy'". What can I do to fix this?
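That error means frame1 is None, i.e. the script never received a frame from the camera. A minimal guard, assuming the videostream.read() loop used in the detection script (names follow that pattern; adapt to your script):
# inside the while True loop, before using the frame:
frame1 = videostream.read()
if frame1 is None:
    print('No frame received from the camera - check the camera connection and configuration')
    continue
frame = frame1.copy()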
I am having an issue with empty ".tfrecord" files: I have followed all the steps and found workarounds for all the bugs, but whenever I try to finish setting up the model to train, the train.tfrecord and val.tfrecord files are both empty, and then I get stuck on "Use 'tf.cast' instead" during training. I have used two different labeling tools for my images, RectLabel and LabelImg, and have manually edited my XML files to look exactly like the XML files in the coin dataset, but for some reason it's just not working. Any help is appreciated.
When running the model builder test file, this error occurs:
Traceback (most recent call last):
File "/content/models/research/object_detection/builders/model_builder_tf2_test.py", line 24, in from object_detection.builders import model_builder
ModuleNotFoundError: No module named 'object_detection'
Nothing was changed in the code. I just executed the Python code as it is in the Colab notebook.
Yep, others have been encountering this same problem lately. Please see this issue for how to fix it. I'm planning to implement this fix in the next couple days. github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/issues/171
I have a problem at 3.2 Split images. I uploaded my pictures, which I had already converted into a zip file, but after I ran 3.2 it said:
[images.zip] End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of images.zip or images.zip.zip, and cannot find images.zip.ZIP, period.
Please help me. Thank you
ANSWER: if anyone faces the same thing, recheck your picture files and make sure they are .jpg / .JPG / .png / .bmp. Mine were .jpeg, so I had to convert them before I could proceed with the project.
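A quick sketch of how the rename could be done (the folder path is a placeholder; if the images were already annotated, the <filename> entries inside the XML files must be updated to match as well):
import os

folder = 'images/all'   # placeholder - point this at your image folder
for fn in os.listdir(folder):
    base, ext = os.path.splitext(fn)
    if ext.lower() == '.jpeg':
        os.rename(os.path.join(folder, fn), os.path.join(folder, base + '.jpg'))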
Great tutorial! I've been encountering a problem, though, with my dataset. All file names are correct, also the file extensions, and I have tried all options of adding a dataset (upload, Google Drive mounting, Dropbox). I even checked the CSV file and everything is intact. However, when it's time to create the TFRecord files, there appear to be some files missing in both the train and validation folders, so I can't generate the pbtxt file. Any tips?
Thank you! Hmm, usually if there's an issue with the pbtxt file, it's because there's a problem with the annotation data (like a typo in a class name). Can you try using my coin dataset and see if you get the same issue?
Why could it not detect the file that was built? For step 7 or 8, it says that no such file was found. And after every step has been done, there are no images.
Sigh, sorry guys. It looks like something changed in Colab or in TensorFlow, and now it's not able to run training on the GPU. I get several error messages like this when I try to run training in Step 5: " Could not load dynamic library 'libcudart.so.11.0' ". This means it's not able to load the CUDA libraries needed to run on the GPU. I'm not sure why. If anyone can figure out the problem, please post a solution. I'll dive into it after New Year's if no one has found anything by then. EDIT: I'll send $50 USD to the first person who can figure out a reliable solution and post it. And I'd also be highly interested in hiring you for a paid internship at ejtech.io 😃
I'm getting the error "NotFoundError: /content/labelmap.pbtxt; No such file or directory" at step 28 when I try to set the file locations. Has anyone had the same issue? Please, someone help me with this.
Hello. Please, is there a way to change the label style (e.g., font and size) and the colour of the bounding boxes for each of the classes drawn on the test dataset (the detection results)? Thanks
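For reference, the boxes and labels are drawn with OpenCV in the detection script, so the style can be changed where cv2.rectangle and cv2.putText are called; a rough sketch with per-class colors (names and values are just examples, adapt to the actual script):
import cv2

CLASS_COLORS = {0: (10, 255, 0), 1: (0, 128, 255), 2: (255, 0, 120)}   # one BGR color per class id

def draw_detection(frame, class_id, label, xmin, ymin, xmax, ymax):
    color = CLASS_COLORS.get(class_id, (200, 200, 200))
    cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color, 2)
    # font face, scale, and thickness control the label style
    cv2.putText(frame, label, (xmin, ymin - 7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, color, 2)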
Nope, not really :( . You may want to look at YOLOv5 or YOLOv8, the tools for training those models will report the model mAP, precision, recall, etc on every training epoch.
Thank you for the tutorial. I can't get past Step 2 where we run Model Builder test file. I get the error "ModuleNotFoundError: No module named 'object_detection'" This is after restarting the Colab notebook multiple times, and I found that it led to errors later down the road. Can you please help me? Thank you.
@@EdjeElectronics we are having the same issue. We were in the middle of training and about 37000 steps in, we got a "runtime disconnected" error that killed it. We then got the same "ModuleNotFoundError" on Step 2.
Well, from what I can tell, it looks like the object_detection module isn't installing properly. The install command (!pip install /content/models/research/) errors and exits out when trying to build PyYAML. Still digging!
Okay, I think I figured it out! I added "!pip install pyyaml==5.3" before the "!pip install /content/models/research/" command in Step 2, and now the tf2_model_builder.py test works. I successfully started training a model. I'll update the notebook with this change.
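In notebook form, the Step 2 install cell with that fix looks like this:
!pip install pyyaml==5.3
!pip install /content/models/research/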
Any chance you can update the instructions to show how to convert the model to tensorflow.js? I've followed the video and it all worked but running the model through tensorflowjs_converter results in a 16kb .bin file which is obviously not correct. Fantastic tutorial - thanks!!
That'd be awesome. Thanks @@EdjeElectronics. Here are the docs that I've read - I "think" that your tutorial creates a "saved model" and then you convert from that, but I might be wrong.
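For anyone trying this, a typical tensorflowjs_converter invocation against a TensorFlow SavedModel looks roughly like this (paths are placeholders, and whether the model exported by this tutorial converts cleanly is untested):
!pip install tensorflowjs
!tensorflowjs_converter --input_format=tf_saved_model /content/custom_model_lite/saved_model /content/web_model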
Hi EJ Technology, does this tutorial still work with the current version of Python (3.11.11) on Google Colab? Some libraries are no longer supported on 3.11.11; for example, Python 3.11.11 doesn't support pyyaml 5.3.1 anymore.
Is there any advice for a big dataset? Training will take more than 2 hours, which consumes GPU usage, and in the end it causes Google Colab to stop due to the GPU limit.
Do you need to upgrade to Colab Pro for 100 compute units to complete the training? I trained for 1.5 hours but I did not see any progress on TensorBoard. My dataset consists of 200 images with labelled objects and XML files, and only one object class; I used the default "ssd-mobilenet-v2-fpnlite-320" model with a batch size of 16 images and num_steps of 40,000. I left my dataset training overnight, but it timed out. Now I have 0 compute units, so I cannot connect to the backend GPU. Google does not say when I will get free compute units again, but suggested that I either upgrade to Colab Pro or use the Google Cloud Platform marketplace.
I tried it on my own flower dataset. While calculating mAP, it takes only 10 images and finishes inferencing. My test folder has 61 images, and it shows an error that the files for the remaining 51 images do not exist.
Thank you very much for the tutorial; I was able to run inference on a Raspberry Pi. However, after training the whole model I tried to evaluate its precision on the training and test data, but the script I wrote myself throws errors since the images were saved in TFRecord format. Could I get a hand with this?
Thanks for the great content. I have a question: the video display is running at 2.7 FPS on my Raspberry Pi, while yours reaches about 5 FPS. Why is mine slower?
Hey bro, were you able to complete the custom object detection... I had an issue during the training process whereby it stops after about 50 seconds. Usually it should take several hours. I also noticed that my system RAM reached about 12.1 GB before it stopped... did yours do the same? I would be really grateful for some guidance...
I get errors in the step "# Create CSV data files and TFRecord files" - has anything changed? ERROR: FileNotFoundError: [Errno 2] No such file or directory: 'images/validation_labels.csv'
It may happen that the create_csv.py script ends up located in /content/models/mymodel/. It happened to me too, but it should be in /content/. Move it or adjust the paths in the script.
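In a Colab cell, that could be done with something like this (paths taken from the comment above; adjust if yours differ):
!mv /content/models/mymodel/create_csv.py /content/
# alternatively, edit the paths inside create_csv.py so they point at the /content/images folders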
Step 3.2 has a problem. I tried renaming my file/dataset to "images", but it just keeps giving me the error:
mkdir: cannot create directory ‘/content/images’: File exists
[images.zip] End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of images.zip or images.zip.zip, and cannot find images.zip.ZIP, period.
mkdir: cannot create directory ‘/content/images/train’: File exists
mkdir: cannot create directory ‘/content/images/validation’: File exists
mkdir: cannot create directory ‘/content/images/test’: File exists
Even when I did see the images folders, they were empty. I made sure to upload a ZIP file (or maybe WinRAR is lying to me, since the file I dragged is a "WinRAR ZIP file").
Awesome vid! But it didn't work. I keep getting a ^C when I try to run my training. So each time I refreshed the window, it just showed 3 pictures from my training folder?
Wow, thanks for your quick response sir! Hmm, yes, it turns out to work just fine with your dataset. I followed the video quite closely and I can't really see the issue? I was uploading from Google Drive. @@EdjeElectronics
@Edje Electronics - I can see now that after I ran your 3.2 step to split images into test, validation, and train folders, my test folder is empty, and the same goes for my validation folder. My train folder now holds 11 pictures? In the folder with all images, I have 184 .jpg images and 184 XML files. So it seems like there is an issue with the script.
The best way to train EfficientDet is to use the TFLite Model Maker (just search "tflite model maker object detection" on Google). It isn't much faster though. You can check the FPS it gets in comparison to other models in my blog post here: www.ejtech.io/learn/tflite-object-detection-model-comparison
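For reference, training an EfficientDet-Lite model with Model Maker looks roughly like this - a sketch assuming Pascal VOC-style image/XML folders, a single example class, and that the tflite-model-maker package is installed:
from tflite_model_maker import object_detector

spec = object_detector.EfficientDetLite0Spec()
train_data = object_detector.DataLoader.from_pascal_voc(
    'images/train', 'images/train', label_map=['coin'])            # images dir, annotations dir, class list
val_data = object_detector.DataLoader.from_pascal_voc(
    'images/validation', 'images/validation', label_map=['coin'])

model = object_detector.create(train_data, model_spec=spec, validation_data=val_data,
                               epochs=40, batch_size=8, train_whole_model=True)
model.export(export_dir='.')   # writes model.tflite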
I have a recurring problem when beginning the training: it suddenly stops with ^C as the last thing output. I'm not pressing anything or clicking stop?
I ran into a problem at step 4, to be precise before the warning in "Install TensorFlow Object Detection Dependencies". There is an error like this: "ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behavior is the source of the following dependency conflicts. numba 0.56.4 requires numpy=1.18, but you have numpy 1.24.3 which is incompatible." What's the solution?
Good afternoon, congratulations on the great contribution you make. A question: during training it stalls and does not continue in the Colab API - what could be happening?
Sorry, I haven't encountered that problem before. My guess is to restart the Colab from scratch and try to run through it again. Otherwise, I'm not sure how to solve it.
Can this code be adapted to not only identify a coin, but also to recognise the date and images from a dual-camera setup? I want to identify collectible coins as they pass by the cameras and then either accept or reject them.
For some reason, the training section would only run for about 10-15 minutes and then the execution completed. I'm not sure what I have done wrong, but I have around 200 labelled photos in images.zip. Could I have missed a step? Edit: also, it's not showing the graphs when I refresh TensorBoard.
I am a novice in AI object detection, but it seems this video will help me. The text and description are good, but the video resolution is rather poor; it is difficult to see what is on the author's monitor. Meanwhile, thank you for the description and links.
I've been banging my head on my desk trying to find a good tutorial. Big thanks
hands down the best tutorial for TensorFlow Lite, I learnt a lot thanks
Really excited you're back! Thanks for the video.
Your tutorial is the best on the internet. Exactly what I needed! Thank you! Thank you! Thank you!!!
Amazing tutorial! One of the most useful videos I've ever watched. Keep up the good work!
Thanks a lot, Edje!
Really nice video. Anxiously waiting for an Android deployment tutorial!
For anyone who noticed that not all images are getting moved to their respective folders, and that later it gives an error that a file is missing: make sure all your images end with .jpg instead of .jpeg, since the scripts on the GitHub page only take those files into account.
Hey all, just a quick update (April 10 2024). I ran through the full notebook today with my coin dataset, and everything worked without errors. A "Restart Session" option appears during Step 1 after running the last set of install commands. When it appears, click the "Restart Session" option, and then keep working through the steps. If you're getting errors, try using my coin dataset and seeing if it works (it should). Then, compare your annotation files with the annotation files from the coin dataset. It's likely there's a difference in your annotation files that is causing problems.
Hello Edje, thanks for this great tutorial! While following exactly the steps in your notebook, I ran into an error repetitively when running model_builder_tf2_test.py: "Could not load dynamic library 'libcudart.so.11.0'; dlerror:.." Any idea of how to solve it?
On Step 5, training the model, there's another error that most of us are getting now - can you try checking it out?
The error goes like this,
"TensorFlow Addons (TFA) has ended development and introduction of new features.
TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024.
Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP)."
This is followed by some errors about not recognizing variables like {num_steps}, pipeline_file, and model_dir, which we did declare just a few cells earlier - a really frustrating error.
Hello, to increase performance, you can use multithreaded computing. To do this, when loading the model, specify the "num_threads" argument, which must contain the number of threads that the processor supports. Basically, TFLite uses only one core.
The code:
import multiprocessing
interpreter = tf.lite.Interpreter(model_path=PATH_TO_CKPT, num_threads=multiprocessing.cpu_count())
On my old laptop with dual-core CPU, this gave a double increase in performance.
Basic - 140 ms per frame.
With two cores - 85 ms per frame
Thanks, I'll have to try that!
This works like a charm bro! On a pi 4, My fps went from 8fps to 20fps when using the google sample tflite model!🤯
@@khamismuniru5188 how
@@SumitKumaar321 Section 7.1. In the line -> interpreter = Interpreter(model_path=modelpath)
@@fe_nik_s776 you mean like this?
import multiprocessing
tf.lite.Interpreter(model_path=PATH_TO_CKPT, num_threads=multiprocessing.cpu_count() )
interpreter = Interpreter(model_path=modelpath)
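Roughly, yes - except the two lines need to be combined into a single constructor call. A minimal sketch, assuming modelpath points at the downloaded .tflite file and using the same Interpreter import the Section 7.1 script already has:
import multiprocessing
from tflite_runtime.interpreter import Interpreter   # or tf.lite.Interpreter, whichever the script uses

interpreter = Interpreter(model_path=modelpath,
                          num_threads=multiprocessing.cpu_count())   # use all available CPU cores
interpreter.allocate_tensors()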
Thank you for a wonderful breakdown of all the needed steps to do training on colab ❤❤❤ Your follower from Egypt 🥰
Thank you! Clear and easy to understand while having sprinkles of humor made this video really educational AND enjoyable. Easiest like and subscribe.
Thanks for the kind words! 😸
@@EdjeElectronics There are a few persistent warnings in the notebook now, because of which it does not run anymore. Kindly check the comment section and reply. The training does not commence in Step 5. It worked once when we tried the notebook on a smaller dataset, but now that we are trying to train on the full dataset it shows this issue and does not run.
Thank you so much, you made my day. I was searching the whole internet for a working Colab notebook for this model.
Thank you for a wonderful breakdown of all the needed steps to do training on colab.
@EdjeElectronics,
I had a bug whereby on Step 5 the training stops after about a minute :( I practically did everything according to the tutorial. The code ends with a ^C symbol. Also, I noticed that the system RAM skyrocketed to 12.1 GB before the training process stopped... Could this be the issue? I would be really grateful if you could provide me some guidance please.
I have the same issue! I'm pretty sure this is because of the RAM limits... I guess we have to subscribe to Google Colab Pro
Your images are too large and you're running out of memory. Resize them to be around 200-300KB, don't upload too many images, and you should be good to go
I got an error about labelmap.pbtxt at first, so I created it myself.
The code gets stuck in the training part:
/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']
caused by: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so: undefined symbol: _ZN3tsl5mutexC1Ev']
warnings.warn(f"unable to load libtensorflow_io_plugins.so: {e}")
/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io.so: undefined symbol: _ZNK10tensorflow4data11DatasetBase8FinalizeEPNS_15OpKernelContextESt8functionIFN3tsl8StatusOrISt10unique_ptrIS1_NS5_4core15RefCountDeleterEEEEvEE']
warnings.warn(f"file system plugins are not loaded: {e}")
/usr/local/lib/python3.10/dist-packages/tensorflow_addons/utils/tfa_eol_msg.py:23: UserWarning:
TensorFlow Addons (TFA) has ended development and introduction of new features.
TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024.
Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP).
For more information see: github.com/tensorflow/addons/issues/2807
warnings.warn(
/usr/local/lib/python3.10/dist-packages/tensorflow_addons/utils/ensure_tf_install.py:53: UserWarning: Tensorflow Addons supports using Python ops for all Tensorflow versions above or equal to 2.10.0 and strictly below 2.13.0 (nightly versions are not supported).
The versions of TensorFlow you are currently using is 2.8.0 and is not supported.
Some things might work, some things might not.
If you were to encounter a bug, do not file an issue.
If you want to make sure you're using a tested and supported configuration, either change the TensorFlow version or the TensorFlow Addons's version.
You can find the compatibility matrix in TensorFlow Addon's readme:
github.com/tensorflow/addons
warnings.warn(
2023-06-19 11:13:33.517675: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
I0619 11:13:33.530389 139697095886656 mirrored_strategy.py:374] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: 20000
I0619 11:13:33.534287 139697095886656 config_util.py:552] Maybe overwriting train_steps: 20000
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I0619 11:13:33.534482 139697095886656 config_util.py:552] Maybe overwriting use_bfloat16: False
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/object_detection/model_lib_v2.py:563: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
W0619 11:13:33.569274 139697095886656 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/object_detection/model_lib_v2.py:563: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
INFO:tensorflow:Reading unweighted datasets: ['/content/train.tfrecord']
I0619 11:13:33.572712 139697095886656 dataset_builder.py:162] Reading unweighted datasets: ['/content/train.tfrecord']
INFO:tensorflow:Reading record datasets for input file: ['/content/train.tfrecord']
I0619 11:13:33.572935 139697095886656 dataset_builder.py:79] Reading record datasets for input file: ['/content/train.tfrecord']
INFO:tensorflow:Number of filenames to read: 1
I0619 11:13:33.573024 139697095886656 dataset_builder.py:80] Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W0619 11:13:33.573094 139697095886656 dataset_builder.py:86] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:100: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
W0619 11:13:33.575423 139697095886656 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:100: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:235: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
W0619 11:13:33.600855 139697095886656 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:235: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
W0619 11:13:42.011853 139697095886656 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
W0619 11:13:46.410009 139697095886656 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
W0619 11:13:48.317142 139697095886656 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
@EdjeElectronics please fix this, as it has become a persistent error for whoever runs the script now.
@@OsamaShamim The problem happens if you use software other than LabelImg for annotations.
If you did, try converting your files to the same annotation format LabelImg produces and it will work.
@@bxoz3352 I used LabelImg to make the annotations. My code just keeps running at the tf.cast line and never ends.
Same problem here - did you fix it?
Hi, did you find a fix?
Thank you so much for the tutorial. I'd been researching on the internet but couldn't find a solution. Great work, you've earned a subscriber.
Really amazing video with detailed and well-explained steps. Great job!
thank you so much for uploading this video. Helped a lot!
:D
This tutorial was super helpful, thank you a lot! It is very easy to follow along and understand with step-by-step instructions
It's working, thank you so much Edje Electronics! I transferred it to a Raspberry Pi as well; it didn't work the first time, but I updated some libraries and now it works. Thank you, thank you, super awesome!
Amazing!! Incredible to see you using the model on another device.
Good to see you again!
Thanks! I'm going to try and hold myself to a more regular release schedule now... we'll see how it goes! 😁
No way you're giving us this kind of information for free. I love it. I've spent the last 2 months trying to get TensorFlow object detection to work. You deserve the best the world can provide, thanks so much!
Superb. It is hard to install the TensorFlow API because of version incompatibilities, and you handle those superbly.
Hello and thank you again for this tutorial!
Today I was successfully able to get to the training step with my own dataset, but I unfortunately forgot to turn off my computer's sleep mode. When I go back through the steps I keep getting an error at Step 4, Cell 4 (commented "# Set file locations and get number of classes for config file"). The error says "NotFoundError: /content/labelmap.pbtxt; No such file or directory". I can share a screenshot if you like. I've run through the notebook several times, have also reset it with your most current version, and I keep getting this. And in fact, there is no labelmap.pbtxt in my files. There is a labelmap.txt though!
Thank you!
yeah, the .pbtxt does not get created in my case either.
I ran this in a separate cell to create the .pbtxt file:

import os

path_to_labeltxt = os.path.join(os.getcwd(), 'labelmap.txt')
with open(path_to_labeltxt, 'r') as f:
    labels = [line.strip() for line in f.readlines()]

path_to_labelpbtxt = os.path.join(os.getcwd(), 'labelmap.pbtxt')
with open(path_to_labelpbtxt, 'w') as f:
    for i, label in enumerate(labels):
        # write one 'item' block per class, with ids starting at 1
        f.write('item {\n' +
                '  id: %d\n' % (i + 1) +
                '  name: \'%s\'\n' % label +
                '}\n' +
                '\n')
Have you fixed this problem yet?
Thanks for making this amazing video! It is really helpful. Look forward to the "quantize TFLite model:" video too!!!
Still one of the best tutorials for model training on RUclips. But I have a suggestion: could there be an option to add metadata scripts to the notebook? This could be highly useful since the release of Google's ML Kit for Android & iOS development.
I tried to find my problem at the bottom of the Colab output, but I only found one sentence and can't find anything.
I struggled for a week with the repo from Google and this video saved my life, thank you. I would like to see a video on PoseNet on a Raspberry Pi.
Is your TFLite model detecting items well? Because my model is detecting false products.
@@shivarajchangale47 Yes, but I mean when I tried to run it on a fresh Raspberry Pi OS install.
I'm unable to make labelmap.pbtxt in step 3.3, how do I fix this?
Same problem.
@@roeyasher1396 I can't paste the raw code here, sorry.
@@rmmtech1218 Please let us know what to do, or can you send it to an email address?
Drop your email address and I will send the pbtxt file; you just have to change the classes in it and paste it at the right location.
My model consistently detects only 10 objects, why is that? Is there some restriction here that limits the model to detecting only 10 objects?
Hello, I have a question. I bought a Coral Edge TPU for more camera FPS, and I'm using the Edge TPU file downloaded in the final part. It works, but it cannot easily detect objects when the Coral is plugged in; if the Edge TPU is disconnected, it runs at 2 FPS but it always detects the object. May I know if there's a fix for this? Thank you!
Hello Remus!
Just wondering if you were able to solve this issue? :O
Help! I'm stuck at step 3, it won't give me labelmap.pbtxt (8:26).
have you fixed this?
@@qbotx did you fix this?
Please keep making great videos. Awesome content.
Hi, I have encountered a problem. Everything works fine until I get to "Start training" the model. It displays log text, but after about 3 minutes RAM usage goes to max and the script just exits. Please HELP!! I need this for a school project and I'm running out of time. Thanks
Does the script end with ^C? I think I encountered this error too...
Hey were you able to solve this issue?
I am stuck on step 5.2 (# Run training!). I'm not sure what the problem(s) is/are but I'm getting this:
"Instructions for updating:
Use fn_output_signature instead
W0604 13:28:51.228111 139822568032000 deprecation.py:569] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/deprecation.py:648: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a future version."
That's okay, it's not an error, it's just a warning from TensorFlow saying that a function is outdated. It should still work for training!
@@EdjeElectronics Thank you for the reply!
Hello! Awesome video btw! I tried to create my own model, but when I start to run the training block, this error shows:
TensorFlow Addons (TFA) has ended development and introduction of new features.
TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024.
Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP).
Any help with this? I'd really appreciate it!!
Same problem, please help.
Exactly, same here
I have a problem when I move to step 5. It just gets stuck here:
/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']
caused by: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so: undefined symbol: _ZN3tsl5mutexC1Ev']
warnings.warn(f"unable to load libtensorflow_io_plugins.so: {e}")
/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['/usr/local/lib/python3.10/dist-packages/tensorflow_io/python/ops/libtensorflow_io.so: undefined symbol: _ZNK10tensorflow4data11DatasetBase8FinalizeEPNS_15OpKernelContextESt8functionIFN3tsl8StatusOrISt10unique_ptrIS1_NS5_4core15RefCountDeleterEEEEvEE']
warnings.warn(f"file system plugins are not loaded: {e}")
/usr/local/lib/python3.10/dist-packages/tensorflow_addons/utils/tfa_eol_msg.py:23: UserWarning:
TensorFlow Addons (TFA) has ended development and introduction of new features.
TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024.
Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP).
For more information see: github.com/tensorflow/addons/issues/2807
warnings.warn(
/usr/local/lib/python3.10/dist-packages/tensorflow_addons/utils/ensure_tf_install.py:53: UserWarning: Tensorflow Addons supports using Python ops for all Tensorflow versions above or equal to 2.10.0 and strictly below 2.13.0 (nightly versions are not supported).
The versions of TensorFlow you are currently using is 2.8.0 and is not supported.
Some things might work, some things might not.
If you were to encounter a bug, do not file an issue.
If you want to make sure you're using a tested and supported configuration, either change the TensorFlow version or the TensorFlow Addons's version.
You can find the compatibility matrix in TensorFlow Addon's readme:
github.com/tensorflow/addons
warnings.warn(
2023-06-01 16:23:47.452097: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
I0601 16:23:47.463217 139994178451264 mirrored_strategy.py:374] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: 40000
I0601 16:23:47.466481 139994178451264 config_util.py:552] Maybe overwriting train_steps: 40000
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I0601 16:23:47.466638 139994178451264 config_util.py:552] Maybe overwriting use_bfloat16: False
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/object_detection/model_lib_v2.py:563: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
W0601 16:23:47.495306 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/object_detection/model_lib_v2.py:563: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
INFO:tensorflow:Reading unweighted datasets: ['/content/train.tfrecord']
I0601 16:23:47.500375 139994178451264 dataset_builder.py:162] Reading unweighted datasets: ['/content/train.tfrecord']
INFO:tensorflow:Reading record datasets for input file: ['/content/train.tfrecord']
I0601 16:23:47.500573 139994178451264 dataset_builder.py:79] Reading record datasets for input file: ['/content/train.tfrecord']
INFO:tensorflow:Number of filenames to read: 1
I0601 16:23:47.500660 139994178451264 dataset_builder.py:80] Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W0601 16:23:47.500752 139994178451264 dataset_builder.py:86] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:100: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
W0601 16:23:47.503006 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:100: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:235: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
W0601 16:23:47.525164 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/object_detection/builders/dataset_builder.py:235: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
W0601 16:23:54.089589 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
W0601 16:23:58.477204 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
W0601 16:24:00.966233 139994178451264 deprecation.py:337] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1082: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
.
Great job man, I love the work you've done ❤
I'm a complete beginner at this. I wanted to ask if it would be possible to add to the Colab notebook a way to save progress and then continue it in another session.
When I train the model I get up to 20000 steps but then I get disconnected from Colab (I have the free version). I was thinking that before I get disconnected I could stop the training and then later (when Colab allows it again) continue it.
Never mind, I solved this. For everyone interested, I changed pretty much all of the paths so everything saves to Google Drive.
Hello! Would you mind if I ask how to save the paths to Google Drive? Thanks a lot.
Hi, what I did was replace the directories in the Colab with “/mydrive/” so that everything is saved to Google Drive; for example, instead of “!mkdir /content/images” I have “!mkdir /mydrive/images”. It is still quite risky because you are doing operations in your Google Drive. That's it!
I can't post my changes now because they are quite specific to my case (and I don't know if it always works), but this is how you can do it too. (There's a rough sketch at the end of this thread.)
@@0oMardev I hope you can still read my comment, man. Thanks for your tips, but could you provide more info? Thank you!
@@0oMardev So when you continue training it again, you just have to mount your Google Drive and run the "Train" cell again?
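For anyone looking for a concrete starting point, here is a minimal sketch of the approach described above: mount Google Drive in the Colab session and point the notebook's folders at a Drive path instead of /content. It's untested, and the folder name is just an example, so adapt the later cells' paths to match.

from google.colab import drive
drive.mount('/content/gdrive')

import os
BASE_DIR = '/content/gdrive/MyDrive/tflite_training'   # example folder in your Drive
os.makedirs(os.path.join(BASE_DIR, 'images'), exist_ok=True)

# Then, in the later cells, replace paths like '/content/images' or
# '/content/train.tfrecord' with the corresponding BASE_DIR path so that
# checkpoints and TFRecords survive a disconnected session.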
Would it be accurate to say that this notebook uses "transfer learning", since it's using a pre-existing model (at 8:50)?
Yes, I think using the term "transfer learning" is accurate.😄
Hello! Really informative video. I just wanted to know: the process stays more or less the same even if I'm on a Mac while making the project, right? Is there anything I should be aware of?
Furthermore, is it possible to make the software detect when I am bending/bent towards the right/left and give an output, or, because it's the same person, won't it be able to recognise that?
thanks!
Thanks! It should all work inside the web browser, even if you're on a mac. However, I still haven't written the guide for how to take the downloaded model and run it on macOS. You should take a stab at it though! In fact, I'd pay you $50USD if you can write a macOS deployment guide similar to the one I wrote for Windows (github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/deploy_guides/Windows_TFLite_Guide.md). Email me if you're interested in that - info@ejtech.io !
For detecting which way you are bent, I would try using an OpenPose model (look it up on RUclips to see what I mean).
I got an error at step 3.3: No such file or directory: 'images/validation_labels.csv'
Hi! I'm getting an error when training a custom model (Train Custom TFLite Detection Model) and a warning:
TensorFlow Addons (TFA) has ended development and introduction of new features. TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024. Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP).
Exactly, I've been receiving this exact error and it's eating my brain up.
I know it's really late to reply to you, but did you find a solution to this?
Great tutorial!! Great introductory experience while also providing everything needed for learning rabbit holes.
If you come across Step 5 ending with ^C, it is because you are running out of memory. Either resize your images to be smaller (~200-300KB per image), upload fewer images, or both. (See the resize sketch just below.)
wow thanks man!
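A rough sketch of one way to do that resizing (it assumes the notebook's /content/images/all folder layout and that Pillow is available, which it is in Colab by default; the 960-pixel cap is an arbitrary choice):

import os
from PIL import Image

IMAGE_DIR = '/content/images/all'
MAX_DIM = 960   # shrink anything larger than this on its longest side

for fname in os.listdir(IMAGE_DIR):
    if not fname.lower().endswith('.jpg'):
        continue
    path = os.path.join(IMAGE_DIR, fname)
    img = Image.open(path)
    img.thumbnail((MAX_DIM, MAX_DIM))   # keeps aspect ratio, only shrinks
    img.save(path, quality=85)

If you already labelled the images, you may want to also scale the XML boxes so the annotation files stay consistent with the resized images; there is a sketch of that further down in the comments.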
Thank you for your tutorial, it has been of great help to me. But when I upload the images folder and run the cell that distributes data into training, validation and test sets, it shows zero images in total.
Please guide me in this regard.
I'm having a problem with the training: it stops after a while with just a ^C. I looked at your common errors and they say to lower the batch size, which I did. I went down to a batch size of 2 and still end up with the same result.
I have the same problem.
When training the AI, it shows an error that "TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024". How do I solve this?
I have the same problem and don't know how to fix it.
Hello Sir, thanks for your great tutorial. I tried to run the code you shared, but I got an error at the 10:37 part, in the "set up training configuration" section. It returns "NotFoundError: /content/labelmap.pbtxt; No such file or directory". What should I do to fix that, sir?
You need to move this file to the "/content" folder. This and some other files get generated in the root folder instead of the content folder. (A quick sketch follows below.)
@@emanuelrodrigues6453 thanks sir, I'll try it
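If it helps, a small sketch of that workaround as a Colab cell (untested; it assumes the file was written to the notebook's working directory rather than /content):

import os, shutil

if not os.path.exists('/content/labelmap.pbtxt') and os.path.exists('labelmap.pbtxt'):
    shutil.copy('labelmap.pbtxt', '/content/labelmap.pbtxt')
print(os.path.exists('/content/labelmap.pbtxt'))   # should print True before re-running the config cell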
Nice and clear step-by-step video!
However, I cannot run it smoothly on my side.
At section 3.3, no labelmap.pbtxt is generated, so the later step in section 4 crashes. Would you know the reason?
Yes, for me too. Got any fix?
@@260.vighneshbablu2 It suddenly got fixed after I reloaded the notebook and redid all the steps.
@@260.vighneshbablu2 I guess another solution can be finding a .pbtxt somewhere, following its format, and editing it according to your custom class names (example format below).
@@humilam4808 But the format is not shown in the video.
@@260.vighneshbablu2 Find one online.
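For anyone still stuck here, this is the general format the TensorFlow Object Detection API expects in labelmap.pbtxt (the class names below are just placeholders, swap in your own and keep the ids sequential starting at 1):

item {
  id: 1
  name: 'class1'
}
item {
  id: 2
  name: 'class2'
}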
Thank you so much for your help with this video, now I can implement this tutorial in my final project.
That was such a detailed and easy process! Thank you so much for the effort you have put into all these videos. Quick question: what if somebody wants to use RTSP streams instead of a Pi camera or USB camera on the Raspberry Pi, what should be changed and where? Thanks again.
Why does the model only detect one class during the testing process even though the dataset has three classes?
Great video, can you share the link to the video on using the model with Coral ?
Hello sir, why does my labelmap.pbtxt not appear? Please help me.
Excuse me sir, I keep getting the message "Corrupt JPEG data: 251 extraneous bytes before marker 0xd9". How do I fix this, sir? Thank you.
Why does my learning process end just before the beginning of the analysis of images with "^C" in the last line?
I also got this problem. I think it happens because there isn't enough memory. Check your dataset, especially the size of your dataset archive! If this works out, please respond here.
When I run model to test the installation, I get the following error, is there a fix for this??
Traceback (most recent call last):
File "/content/models/research/object_detection/builders/model_builder_tf2_test.py", line 21, in
import tensorflow.compat.v1 as tf
File "/usr/local/lib/python3.10/dist-packages/tensorflow/__init__.py", line 37, in
from tensorflow.python.tools import module_util as _module_util
File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/__init__.py", line 37, in
from tensorflow.python.eager import context
File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/context.py", line 29, in
from tensorflow.core.framework import function_pb2
File "/usr/local/lib/python3.10/dist-packages/tensorflow/core/framework/function_pb2.py", line 16, in
from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorflow/core/framework/attr_value_pb2.py", line 16, in
from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorflow/core/framework/tensor_pb2.py", line 16, in
from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorflow/core/framework/resource_handle_pb2.py", line 16, in
from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
File "/usr/local/lib/python3.10/dist-packages/tensorflow/core/framework/tensor_shape_pb2.py", line 36, in
_descriptor.FieldDescriptor(
File "/usr/local/lib/python3.10/dist-packages/google/protobuf/descriptor.py", line 553, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
simply downgrade protobuf.
!pip uninstall -y protobuf
!pip install protobuf==3.20.1
@@harshitgoel1910 thanks! this worked for me
Really good tutorial. I am following along for my school project. I can get as far as the training, but then it won't start training in Colab; the log shows that TensorFlow Addons needs TF version 2.12 - 2.15.0, would that be the case? I can see your video uses Python 3.8, while Colab currently runs Python 3.10, could that be the reason? Could you please help? Thank you
Wonderful, thanks for the efforts for explaining step by step
Hello, I want to ask: when I was running the Python script in 3.2 for splitting images into train, validation and test folders, I had this problem:
Total images: 0
Images moving to train: 0
Images moving to validation: 0
Images moving to test: 0
Looking forward to the response! Thank u!
Hmm... was your images folder named "images.zip"? Can you see any files in the "images/all" folder in the filemenu on the left side bar? What are the file extensions for your images (.jpg, .bmp, something else)?
@@EdjeElectronics Oh, the folder was named "car images"; I changed it and it got fixed. However, the script doesn't take in .jpeg files, does it? Because after it finished splitting, I still had some files left in the "all" folder, mostly in .jpeg format.
@@eugenioverrelwong8733 Thanks! I forgot about the .jpeg file extension 😅, I'll change the code to support that too.
@@EdjeElectronics urwell, anyways i wanna ask so i basically wanted to make an automatic license plate reader using object detection for residential area and I sort of want it to read the letters and numbers in the license plate, if it was you would you prefer to train the model so it can read the letters or use OCR to detect the letters from the license plate? Thank you so much looking forward for your tips, tricks and answers
@@eugenioverrelwong8733 I would train a model to detect the license plates, and then crop out the detected plate and use OCR to read the letters. Actually, I just saw a blog post talking about this exact topic! Check it out here: blog.roboflow.com/how-to-crop-computer-vision-model-predictions/
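A loose, untested sketch of that crop-then-OCR idea, in case it helps anyone get started. The variable names (frame, box) are assumptions modelled on a typical TFLite detection loop, and pytesseract has to be installed separately (pip install pytesseract, plus the Tesseract binary):

import cv2
import pytesseract

def read_plate(frame, box):
    # box = (ymin, xmin, ymax, xmax) as normalized coordinates from the detector
    imH, imW = frame.shape[:2]
    ymin, xmin, ymax, xmax = box
    crop = frame[int(ymin * imH):int(ymax * imH), int(xmin * imW):int(xmax * imW)]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    return pytesseract.image_to_string(gray, config='--psm 7')   # treat crop as a single text line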
The TensorFlow model is not working in my Android application. Any suggestions are welcome.
Hello, I've been running into issues at 8:25. I keep getting errors with Python claiming that there's no such file or directory as 'libcuda.so.1'. Anyone understand this error???
thank you broo, you're amazing💣
Hi! I'm on the last step of the process. I use a Pi Camera v3 for my camera. I don't know what's causing this, but the frame = frame1.copy() line gives Error: 'NoneType' object has no attribute 'copy'. What can I do to fix this?
Bro, we have the same problem, bro, promise.
I am having an issue with empty ".tfrecord" files: I have followed all the steps and found workarounds for all the bugs, but whenever I try to finish setting up the model to train, the train.tfrecord and val.tfrecord files are both empty and then I get stuck on "Use 'tf.cast' instead" during training. I have used two different labelling tools for my images, RectLabel and labelImg, and have manually edited my XML files to look exactly like the XML files in the coin dataset, but for some reason it's just not working. Any help is appreciated.
Same problem, did you fix it?
Did you fix it?
When running the model builder test file, this error occurs:
Traceback (most recent call last):
File "/content/models/research/object_detection/builders/model_builder_tf2_test.py", line 24, in
from object_detection.builders import model_builder
ModuleNotFoundError: No module named 'object_detection'
Nothing was changed in the code. I just executed the python code as it is in the Colab notebook.
Yep, others have been encountering this same problem lately. Please see this issue for how to fix it. I'm planning to implement this fix in the next couple days.
github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/issues/171
Thank you so much, I got it figured out @@EdjeElectronics
I have a problem in 3.2 Split images. I uploaded my pictures, which I had already converted into a zip file. But then after I ran 3.2, it said: [images.zip]
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of images.zip or
images.zip.zip, and cannot find images.zip.ZIP, period.
Please help me. Thank you
ANSWER: If anyone faces the same thing, make sure to recheck your picture files and confirm they are .jpg / .JPG / .png / .bmp. Mine were .jpeg, so I had to convert them before I could proceed with the project.
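A quick sketch of one way to do that conversion in bulk (the folder path is an assumption; run it before labeling, or update the <filename> fields in your XML files afterwards so they still match the renamed images):

import os
from PIL import Image

FOLDER = '/content/images/all'
for fname in os.listdir(FOLDER):
    root, ext = os.path.splitext(fname)
    if ext.lower() in ('.jpeg', '.png', '.bmp'):
        src = os.path.join(FOLDER, fname)
        dst = os.path.join(FOLDER, root + '.jpg')
        Image.open(src).convert('RGB').save(dst, 'JPEG')   # re-encode as JPEG
        os.remove(src)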
Great tutorial! I've been encountering a problem though with my dataset. All file names are correct, as are the file extensions, and I have tried all options for adding a dataset (upload, Google Drive mounting, Dropbox). I even checked the CSV file and all entries are intact. However, when it's time to create the TFRecord files, there appear to be some files missing in both the train and validation folders, so I can't generate the pbtxt file. Any tips?
Thank you! Hmm, usually if there's an issue with the pbtxt file, it's because there's a problem with the annotation data (like a typo in a class name). Can you try using my coin dataset and see if you get the same issue?
Why could it not detect the file that has been built? For step 7 or 8, it says that no such file was found. And after every step has been done, there are no images.
Sigh, sorry guys. It looks like something changed in Colab or in TensorFlow, and now it's not able to run training on the GPU. I get several error messages like this when I try to run training in Step 5: " Could not load dynamic library 'libcudart.so.11.0' ". This means it's not able to load the CUDA libraries needed to run on the GPU. I'm not sure why. If anyone can figure out the problem, please post a solution. I'll dive into it after New Year's if no one has found anything by then.
EDIT: I'll send $50 USD to the first person who can figure out a reliable solution and post it. And I'd also be highly interested in hiring you for a paid internship at ejtech.io 😃
Up! Thank you!
Up!!! Thank you so much
I have the solution, however RUclips deletes my comment because the code contains a URL. Any workaround for this?
@@markwassef8643 cool! What’s your email? I’ll send you a message.
Or, you can contact me at info@ejtech.io .
Getting this error "NotFoundError: /content/labelmap.pbtxt; No such file or directory" at step 28 when I try to set file locations. Has anyone had the same issue? Please, someone help me with this.
I have the same error, can someone please help
Hello.
Please, is there a way to change the label style (e.g., font and size) and the colour of the bounding boxes for each of the classes drawn on the test dataset (the detection results)?
Thanks
Hi, is there any way to turn on verbose output so I'm able to track each epoch and know the mAP score at a specific epoch?
Nope, not really :( . You may want to look at YOLOv5 or YOLOv8, the tools for training those models will report the model mAP, precision, recall, etc on every training epoch.
@@EdjeElectronics I recently found that it (kinda) is able to; I'll look into it later.
Can you do this with screenshots from a website to train it to identify certain images on a website?
I found an error when using my own dataset. It stops during training with this sign: ^C.
Hello Sadam!
I'm having the same issue too... were you able to solve it? I would be really grateful for guidance. Thanks!
I solved this error by resizing the photos.
Can you explain how to add TFLite_Detection_PostProcess for this model?
Thank you for the tutorial. I can't get past Step 2 where we run Model Builder test file. I get the error "ModuleNotFoundError: No module named 'object_detection'" This is after restarting the Colab notebook multiple times, and I found that it led to errors later down the road. Can you please help me?
Thank you.
Damn, I'm getting the same error too. Google must have changed something in the tensorflow repository. I'll dig into it more!
@@EdjeElectronics thank you so much! 🙏🙏🙏
@@EdjeElectronics we are having the same issue. We were in the middle of training and about 37000 steps in, we got a "runtime disconnected" error that killed it. We then got the same "ModuleNotFoundError" on Step 2.
Well, from what I can tell, it looks like the object_detection module isn't installing properly. The install command (!pip install /content/models/research/) errors and exits out when trying to build PyYAML. Still digging!
Okay, I think I figured it out! I added "!pip install pyyaml==5.3" before the "!pip install /content/models/research/" command in Step 2, and now the tf2_model_builder.py test works. I successfully started training a model. I'll update the notebook with this change.
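For anyone patching an older copy of the notebook by hand, the fix described above amounts to changing the Step 2 install cell to something like this (based on the comment; the exact surrounding commands may differ in the current notebook):

!pip install pyyaml==5.3
!pip install /content/models/research/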
Any chance you can update the instructions to show how to convert the model to tensorflow.js? I've followed the video and it all worked but running the model through tensorflowjs_converter results in a 16kb .bin file which is obviously not correct. Fantastic tutorial - thanks!!
I can look into it! Do you know where in the TensorFlow docs I can find more info about tensorflowjs_converter?
That'd be awesome. Thanks @@EdjeElectronics. Here are the docs that I've read - I "think" your tutorial creates a saved model and then you convert from that, but I might be wrong.
@@mortocks Hi! Did you find a solution? Can you help me? I need it in .js too.
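In case it is useful to others, here is a hedged sketch of how tensorflowjs_converter is typically pointed at an exported SavedModel; the saved_model directory name is an assumption about where the notebook exports the model, and the flags can vary between tensorflowjs versions:

!pip install tensorflowjs
!tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    /content/custom_model_lite/saved_model \
    /content/tfjs_model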
Hi EJ Technology, does this tutorial still work with the current version of Python (3.11.11) on Google Colab? Some libraries are no longer supported in version 3.11.11
Python 3.11.11 doesn't support PyYAML 5.3.1 anymore.
Is there any advice for big datasets? Training will take more than 2 hours, which consumes GPU usage, and in the end it causes Google Colab to stop due to the GPU limit.
I faced the same issue! Did you manage to resolve it?
Do you need to upgrade to Colab Pro for 100 compute units to complete the training? I trained for 1.5 hours but did not see any progress on the TensorBoard dashboard. My dataset consists of 200 images with labelled objects and XML files, and only one object class. I used the default "ssd-mobilenet-v2-fpnlite-320" model with a batch size of 16 images and 40,000 training steps. I left my dataset training overnight, but it timed out. Now I have 0 compute units so I cannot connect to the backend GPU. Google does not say when I will get free compute units again, but suggested that I either upgrade to Colab Pro or use the Google Cloud Platform marketplace.
I tried it on my own flower dataset. While calculating mAP it only takes 10 images and then finishes inference. My test folder has 61 images, and it shows an error that the files for the remaining 51 images don't exist.
Thank you very much for the tutorial, I was able to run inference on a Raspberry Pi. However, after training the whole model I tried to evaluate its precision on the training and test data, but the script I wrote myself throws errors since the images were saved in TFRecord format. Could someone give me a hand?
Last time I used this notebook it worked perfectly, but now I am getting the "no object_detection module" error.
Thanks for the great content. I have a question: the video display runs at 2.7 FPS on my Raspberry Pi, while for you it reaches about 5 FPS. Why is mine slower?
Hey bro, were you able to complete the custom object detection? I had an issue during the training process whereby it stops after about 50 seconds, whereas it should usually take several hours. I also noticed that my system RAM reached about 12.1 GB before it stopped... did yours do the same? I would be really grateful for some guidance...
Hi, did you figure it out? Can you share your experience? @@PraiseTheLord527
Can this model fit on an ESP32 module?
I get errors in the step "# Create CSV data files and TFRecord files" - has anything changed? ERROR: FileNotFoundError: [Errno 2] No such file or directory: 'images/validation_labels.csv'
I think the .py script is buggy there? Can you help? ;)
It may happen that the create_csv.py script is located in /content/models/mymodel/. It happened to me too. But it should be in /content/. Move it or adjust the paths in the script (a quick sketch follows below).
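A small sketch of that workaround as a Colab cell (the paths are taken from the comment above and may differ in your session):

import os, shutil

src = '/content/models/mymodel/create_csv.py'
if os.path.exists(src) and not os.path.exists('/content/create_csv.py'):
    shutil.move(src, '/content/create_csv.py')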
Step 3.2 has a problem.
I tried renaming my file/dataset to "images" but it just keeps giving me the error:
mkdir: cannot create directory ‘/content/images’: File exists
[images.zip]
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of images.zip or
images.zip.zip, and cannot find images.zip.ZIP, period.
mkdir: cannot create directory ‘/content/images/train’: File exists
mkdir: cannot create directory ‘/content/images/validation’: File exists
mkdir: cannot create directory ‘/content/images/test’: File exists
Even when I did see the images folders, they were empty.
I made sure to upload a ZIP file (or maybe WinRAR is lying to me, since the file I dragged shows as a WinRAR ZIP file).
Awesome vid! But it didn't work. I keep getting a ^C when I try to run my training. So each time I refreshed the window, it just showed 3 pictures from my training folder?
I've seen that happen sometimes when there's problems with the training data. Can you try it with my coin dataset and see if you get the same issue?
Wow, thanks for your quick response sir! Hmm, yes, it turns out to work just fine with your dataset. I followed the video quite closely, and I can't really see the issue? I was uploading from Google Drive. @@EdjeElectronics
@Edje Electronics - I can see now that after I ran your 3.2 step to split images into test, validation, and train folders, my test folder is empty, and the same goes for my validation folder. My train folder now holds 11 pictures? In the folder with all images, I have 184 .jpg images and 184 XML files. So it seems like there is an issue with the script.
The ^C could be coming from a RAM overload. If that's the case, reduce your image sizes.
@@jako3939 Do you have any suggestion for quickly reducing the size of the images without relabeling them?
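One untested way to shrink the images without relabeling is to resize each image and scale its Pascal VOC XML boxes by the same factor. This assumes labelImg-style XML files sitting next to the .jpg files in the same folder; back your data up first:

import os
import xml.etree.ElementTree as ET
from PIL import Image

FOLDER = '/content/images/all'
SCALE = 0.5   # halve every dimension, adjust as needed

for fname in os.listdir(FOLDER):
    if not fname.lower().endswith('.jpg'):
        continue
    img_path = os.path.join(FOLDER, fname)
    xml_path = os.path.splitext(img_path)[0] + '.xml'
    img = Image.open(img_path)
    new_size = (int(img.width * SCALE), int(img.height * SCALE))
    img.resize(new_size).save(img_path, quality=85)
    if os.path.exists(xml_path):
        tree = ET.parse(xml_path)
        root = tree.getroot()
        root.find('size/width').text = str(new_size[0])
        root.find('size/height').text = str(new_size[1])
        for tag in ('xmin', 'ymin', 'xmax', 'ymax'):
            for el in root.iter(tag):
                el.text = str(int(int(el.text) * SCALE))
        tree.write(xml_path)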
What is inside the content/training/train folder? It is not generated in my case, and I'm getting problems while loading TensorBoard.
How can I train EfficientDet on custom data, since it is much faster, I guess? Correct me if I am wrong.
The best way to train EfficentDet is to use the TFLite Model Maker (just search "tflite model maker object detection" on Google). It isn't much faster though. You can check the FPS it gets in comparison to other models at my blog post here: www.ejtech.io/learn/tflite-object-detection-model-comparison
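For reference, training EfficientDet-Lite with the TFLite Model Maker looks roughly like the sketch below. This reflects the tflite-model-maker package as I remember it, so treat the API and arguments as assumptions and check the official docs; folder paths and class names are placeholders:

!pip install tflite-model-maker
from tflite_model_maker import model_spec, object_detector

spec = model_spec.get('efficientdet_lite0')
train_data = object_detector.DataLoader.from_pascal_voc(
    'images/train', 'images/train', ['class1', 'class2'])
model = object_detector.create(train_data, model_spec=spec, epochs=50,
                               batch_size=8, train_whole_model=True)
model.export(export_dir='.')   # writes model.tflite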
Is it possible to use this in Android Studio for object detection?
I got an issue with not being able to connect to a GPU backend for the Google Colab session. Will that affect the object detection model?
I have a recurring problem when beginning the training: it suddenly stops with ^C as the last thing output. I'm not pressing anything or clicking stop?
I ran into a problem at step 4, to be precise just before the warning in the "Install TensorFlow Object Detection Dependencies" cell. There is an error like this:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behavior is the source of the following dependency conflicts.
numba 0.56.4 requires numpy<1.24,>=1.18, but you have numpy 1.24.3 which is incompatible.
What's the solution?
You can just ignore that error, the rest of the steps of the notebook should work anyway.
Hello, how can I get the evaluation stage outputs like the confusion matrix and accuracy, precision and recall?
Good afternoon, great contribution you've made, congratulations. A question: when doing the training, it stalls and does not continue in the API on Colab. What could be happening?
Sorry, I haven't encountered that problem before. My guess is to restart the Colab from scratch and try to run through it again. Otherwise, I'm not sure how to solve it.
Can this code be adapted to not only identify a coin but also to recognise the date and images from a dual camera setup? I want to identify collectible coins as they pass by the cameras, then either accept or reject them...
For some reason, the training section only runs for about 10-15 minutes and then the execution completes. I'm not sure what I have done wrong, but I have around 200 labelled photos in images.zip. Could I have missed a step?
Edit: it's also not showing the graphs when I refresh TensorBoard.
I am a novice in AI object detection, but it seems this video will help me. There is good text and a good description, but the video resolution is rather poor; it is difficult to see what is on the author's monitor. Meanwhile, thank you for the description and links.