Wow! This is a big help, thanks so much!
Next, it would be nice to have an example in Java, because there is no working example of ONNX Runtime in Java. Thanks for the C++ example! :)
this is a big help for a primer
Glad it was helpful! 😊
On line 85, how did you get the variable inputNames?
Why are the confidence scores in this example not in the range 0-1? I also followed the same process for my custom CNN model, and the scores are not in the range 0 to 1.
Is there no sigmoid or softmax layer at the output of your ONNX model?
Hi, the function GetOutputName() seems to be deprecated; GetOutputNameAllocated() is the new function, but it returns a pointer. Help?
Got it 🤝 If anybody has any doubts, feel free to ping me.
@@AagneyWest Hi, can you help me with this? I have the same problem, and GetOutputName() is not working.
@@savasatarzade1086 Yeah, sure. I will get back to you in 12 hours; I am in the middle of something.
@@savasatarzade1086 I can share the code if you comment your email or any other contact.
Thanks for pointing out this issue! I have updated the sample project to fix this!
STILL trying to figure out where "results" was defined and assigned. Shouldn't you be printing out the contents of "outputTensor" instead, after the session "Run"?
Hey Mikail! "results" is defined on line 61:
`array results;`
It is pointed to by the `outputTensor` on line 66:
`auto outputTensor = Ort::Value::CreateTensor(memory_info, results.data(), results.size(), outputShape.data(), outputShape.size());`
Here is more on how `CreateTensor` works in our C++ docs: onnxruntime.ai/docs/api/c/struct_ort_1_1_value.html#ab19ff01babfb77660ed022c4057046ef
I created my model as a .pkl file, and I wrote a Python script to convert it to ONNX... I'm trying to get it loaded in the Visual Studio project but I can't. Should I just retrain the model in ONNX all around?
What framework did you use to train the model? You should export to ONNX format from your training framework.
I'm also trying to enable CUDA for GPU usage. I ran your example. When it calls `OrtStatus* onnx_status = g_ort->SessionOptionsAppendExecutionProvider_CUDA(session_options, &o);` it returns the error "CUDA is not available". I use Windows 10 and Visual Studio 2022. My GPU is an NVIDIA RTX A2000. I installed the latest CUDA Toolkit, V12.1, and a cuDNN version compatible with that CUDA. The environment variables have been set properly. The onnxruntime version is the latest, onnxruntime-win-x64-gpu-1.14.1. Could you advise what I should do? Thanks in advance.
1. Can you confirm you completed all the install steps, including updating the drivers for your card?
2. ORT 1.14 is built with CUDA version 11.6. Can you install 11.6 and see if that resolves the issue?
Install CUDA for windows steps: docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#install-windows
@@ONNXRuntime Thanks very much. Q1: yes. Q2: it succeeds after installing CUDA Toolkit V11.6 and downgrading onnxruntime to onnxruntime-win-x64-gpu-1.13.1.
If I use CUDA Toolkit V11.6 with onnxruntime-win-x64-gpu-1.14.1, it throws an exception when calling SessionOptionsAppendExecutionProvider_CUDA: "Unhandled exception at 0x00007FFD3AAC286E (ucrtbase.dll) in fns_candy_style_transfer.exe: Fatal program exit requested."
Does that mean this is a compatibility issue? Can't I use a CUDA Toolkit above V11.6? Why does it throw an exception with the latest onnxruntime?
I tested ORT 1.14 with CUDA 11.6 and it works. I updated the sample project for the new version of ORT. Can you try grabbing the most recent updates of the sample project and see if you get the same error?
I followed the video exactly and got stuck at 1:05 with the error "cannot open source file "opencv2/highgui/highgui.hpp"". I have installed the NuGet packages and added the header file "imgproc.hpp" to the include paths in the project properties. The file "highgui.hpp" was not even in the packages, and "imgproc.hpp" is not found even after adding the path. I also downloaded the full example from GitHub, and there too the OpenCV header files are not found. Do you have any ideas, or does the example in the video not work anymore? (Visual Studio 2019)
I have the same problem too! I run it in Visual Studio 2019 and I installed the packages through NuGet.
I was able to reproduce this issue. It is a problem with the OpenCV NuGet package I was using that previously was not an issue. I switched to this NuGet package and it resolved the problem: www.nuget.org/packages/opencv4.2/
I will update the sample project to reflect this change. Thanks for letting me know!
It does not work for me. I got tons of errors when I followed this tutorial. Not sure if it's because my machine is on Windows 11 with the Community edition of Visual Studio.
This project was built on Windows 11 with Visual Studio. Can you post the error?
@@ONNXRuntime Thanks for your timely reply. I definitely appreciate any help from you. I fixed some of the errors, which were due to system configuration. There are still some left. One of them is "The ordinal 3 could not be located in the dynamic link library C:\Users\yijir\Downloads\cpp-onnxruntime-resnet-console-app-main\x64\Debug\OnnxRuntimeResNet.exe". Hope to get help from you. Thanks
Quick update. I just fixed the bug and got the same results as you did. Thanks for creating this great tutorial. Below is what I got from the console.
Hwllo 1: golden_retriever 11.9425
2: Tibetan_terrier 10.291
3: otterhound 8.63227
4: Tibetan_mastiff 8.61618
5: kuvasz 8.55493
C:\Users\yijir\Downloads\cpp-onnxruntime-resnet-console-app-main\x64\Debug\OnnxRuntimeResNet.exe (process 16600) exited with code 0.
To automatically close the console when debugging stops, enable Tools->Options->Debugging->Automatically close the console when debugging stops.
Press any key to close this window . . .
@@JirongYi Can you please tell me how you fixed the ordinal 3 error?
This doesn't seem to work on VS 2022, can anyone else confirm?
Did you include .NET Core 3.1 in the VS 2022 installation?
@@ONNXRuntime Yes
@@ONNXRuntime For some ultra weird reason this does not work in VS2022: #include <onnxruntime_cxx_api.h>, but this works completely fine: #include "onnxruntime_cxx_api.h"
I updated the sample in GitHub to VS2022 and it works with both #include <onnxruntime_cxx_api.h> and #include "onnxruntime_cxx_api.h". Please grab the new source and let me know if you still have an issue! :)
I've run into that issue too. Which version do you have?
Outdated