Problem with loading converted onnx model #1229
Comments
Based on this error - "Graph input with name i__19 is not associated with a node." - it looks like the converted model doesn't have an input named 'i__19'. You should take a look at the inputs in the onnx graph and use those for inference. Can you upload the converted onnx model here?
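A minimal sketch of one way to inspect those input names with the onnxruntime Python API; the model filename is an assumption taken from later in this thread:

```python
import onnxruntime as rt

# Assumed filename; replace with the path to your converted model.
sess = rt.InferenceSession("ssd_mobilenet.onnx")

# Print the name, shape, and element type of every graph input and output,
# so the feed dict for sess.run() can use the real input names.
for inp in sess.get_inputs():
    print("input: ", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)
```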
Just curious @antonfr - did you try an inference run with 0.4.0? From the output below, I see a bunch of warnings (not errors), so I wonder if the model actually loaded correctly.
As for the noisy "warnings", these should be addressed by #1235. Sometimes there is residual superfluous information in the converted model, and the runtime may simply be complaining about it. This should not affect model loading or the inference run itself.
@pranavsharma here is my model
Hi @antonfr, thanks for sharing the model. I think the model loads fine; I hit issues while performing the inference run. Here are some noteworthy points -
So I followed the lead from your earlier snapshot where width = height = 300 and fed it random numpy data of type uint8 with shape [1, 300, 300, 3].
So either the conversion had an issue or the input shape is still not right. If the input shape is not right, please correct the shape below and give it a shot. If the model still doesn't run after correcting it, it might be a conversion issue, so you can follow up with the converter tool owners. This is the Python script I used to load and invoke the model:

```python
import numpy as np
import onnxruntime as rt

sess = rt.InferenceSession("ssd_mobilenet.onnx")
input_name = sess.get_inputs()[0].name                      # name of the graph's first input
input = np.ndarray(shape=(1, 300, 300, 3), dtype='uint8')   # arbitrary uint8 data
pred_onnx = sess.run(None, {input_name: input})[0]
```
This will possibly be fixed by #1233, which just got checked in. There are NonZero nodes earlier in the graph that provide an iteration count to a Loop node; if there are no matches, the iteration count is zero. The shape of some of the Loop outputs wasn't correct in that case, leading to Gather breaking later on. This is dependent on the input though, so it won't necessarily crash every time. Longer term it would be nicer if the model had a shortcut path when NonZero returns no matches, but whether that's achievable is a question for the converter team.
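For intuition, a small numpy analogy of the zero-match case (an illustration, not the runtime's internal code): when nothing matches, NonZero yields an empty index tensor, so the Loop it feeds gets an iteration count of zero, and anything gathered downstream must still carry a consistent zero-length shape.

```python
import numpy as np

scores = np.array([0.1, 0.2, 0.3])       # pretend no score clears the threshold
kept = np.nonzero(scores > 0.5)[0]       # empty index array, shape (0,)
trip_count = kept.shape[0]               # 0 -> the Loop body runs zero times

# Gathering with these indices is empty but still well-shaped; the crash
# fixed by #1233 came from that shape being wrong in the zero-match case.
gathered = np.take(scores, kept)         # shape (0,)
print(trip_count, gathered.shape)        # 0 (0,)
```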
Thanks @skottmckay. I built a python wheel including #1233 and the model didn't crash; it finished its run successfully. @antonfr - could you try building from source and checking whether the results look okay (I only validated that the crash was resolved; I still need to validate the results)?
@hariharans29 Unfortunately I ran into problems building onnxruntime from source, even though I followed the instructions:
@snnn and @pranavsharma - any idea what the issue is?
I don't usually run the optional steps. It should work without them.
@antonfr - what's the exact build error you get?
I just built this and it works fine. The only change I had to make was to comment out the running of onnx_backend_test_series.py inside build.py. I ran it like this:
@hariharans29 here is the full log in a zip archive. @pranavsharma I tried your solution; unfortunately, the result is the same.
Additionally, when I'm trying
Are you using clang on macOS? I don't think this is supported according to the OS/compiler support matrix here - https://github.com/microsoft/onnxruntime/blob/master/BUILD.md (@pranavsharma can correct me if I am wrong)
@hariharans29 yes, I'm using the clang compiler on macOS. Any idea whether I can build onnxruntime from source, and if so, how?
Yes, the steps to build onnxruntime from source are documented here - https://github.com/microsoft/onnxruntime/blob/master/BUILD.md. Can you please check if you are missing something from the steps?
@hariharans29 I have described all the steps in detail above: #1229 (comment)
Hi @antonfr, this must be a local dev environment issue and probably not a build issue itself, as Mac builds run on a daily basis (and a per-PR basis) and things look fine. However, we will try building on a Mac and get back to you. Thanks
Thanks @hariharans29, I'll wait for the results. From my side, I can provide any information about my environment that you might consider necessary.
@antonfr -- can you do the following steps, and skip the optional steps listed above? The optional steps may be confusing the issue. 1> remove any installation of protobuf
Can you update the thread if the steps above fail?
Thanks @jignparm. Also, the errors in your logs are similar to the ones raised midway through #648, and the resolution there was to try building first without --use_openmp.
BTW, we'll remove the onnxruntime_USE_OPENMP option and keep it always off.
@jignparm thanks a lot, removing --use_openmp and commenting out onnx_backend_test_series.py works fine!
One more question: how do I add an onnxruntime built from source to an existing project in Visual Studio? I use Project -> Add NuGet Packages -> Configure Sources -> Add. What folder should I select?
To add a "built from source onnxruntime" to an existing project in Visual Studio, you first need to generate a NuGet package, which includes the runtimes you need (.dll, .so, or .dylib files). If you build locally from the master branch on a Windows operating system, you'll only get .dll files for Windows. If that is good enough for you, you can run the msbuild command below to generate a .nupkg file. The NuGet package includes the C# assemblies, so you need to build the C# projects as well as the native C++ projects. Simply add the --build_csharp flag (e.g. "./build.sh --config RelWithDebInfo --build_wheel --build_csharp") to the build command. This creates the NuGet package from source that you can add to Visual Studio.
The package on NuGet.org contains runtimes for all three operating systems (Windows, Linux Ubuntu flavor, and macOS), in case you need a package that runs in multiple environments.
@jignparm with the --build_csharp flag I got the following errors (see the log file for the full log)
From the log files, it seems like some Mono header files are getting pulled into the build, even for the native C++ library build, when you use the --build_csharp flag on macOS. The C# dlls are usually compiled on Windows in our build systems, since they are cross-platform.
Since you are able to build the native dylib successfully without the --build_csharp flag, another option is to build the C# project independently. Drop the --build_csharp flag from the command line and build the C# project using dotnet build OnnxRuntime.CSharp.sln. It relies on the OnnxRuntimeBuildDirectory environment variable to point to the root directory of the native C++ build, so set that variable before running the dotnet build command.

A second (simpler?) option is to use a pre-existing NuGet package: rename it with a .zip extension, unzip the contents, and replace the native library at runtimes/osx-64/native/libonnxruntime.dylib (and optionally the C# library at lib/netstandard1.1/Microsoft.ML.OnnxRuntime.dll, in case there are changes in the C# code base). You should be able to re-zip these files into a NuGet package for debugging/development purposes.
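If you go with the second option, here is a rough sketch of the repacking step in Python; the package filename and the local dylib path are assumptions, and it relies only on a .nupkg being an ordinary zip archive:

```python
import os
import shutil
import zipfile

pkg = "Microsoft.ML.OnnxRuntime.0.4.0.nupkg"   # assumed package filename
workdir = "nupkg_contents"

# A .nupkg is just a zip archive, so unpack it to a working directory.
with zipfile.ZipFile(pkg) as z:
    z.extractall(workdir)

# Overwrite the native library with the locally built one (assumed source path).
shutil.copy("build/MacOS/RelWithDebInfo/libonnxruntime.dylib",
            os.path.join(workdir, "runtimes/osx-64/native/libonnxruntime.dylib"))

# Re-zip the contents into a new package for local debugging/development.
shutil.make_archive("Microsoft.ML.OnnxRuntime.local", "zip", workdir)
os.rename("Microsoft.ML.OnnxRuntime.local.zip",
          "Microsoft.ML.OnnxRuntime.local.nupkg")
```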
Thanks for your reply @jignparm. As for the first option, the path to my onnxruntime library is /Users/anton. Is that the root of my build directory?
To generate a dylib file, add the --build_shared flag to the build script; it'll generate the dylib in the location below. The build configuration in this case is RelWithDebInfo, but other options are Debug or Release. The build directory root, in the example path below, would be /onnxruntime/build. If building C# independently, set OnnxRuntimeBuildDirectory to this value before starting the dotnet build. Example path to dylib file:
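As a hedged sketch of how to sanity-check such a dylib from Python with ctypes: the path below is hypothetical, assembled from the /onnxruntime/build root and the RelWithDebInfo configuration described above.

```python
import ctypes

# Hypothetical dylib location; adjust to wherever your build actually
# placed libonnxruntime.dylib under the build directory root.
dylib_path = "/onnxruntime/build/MacOS/RelWithDebInfo/libonnxruntime.dylib"

# If this succeeds, the native library was built and is loadable.
lib = ctypes.CDLL(dylib_path)
print("loaded", dylib_path)
```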
@antonfr -- are you able to build from source and load the model successfully now, as @hariharans29 was able to do?
Closing this for now. @antonfr - Please reopen in case you have more issues / require further clarifications. Thanks!
Describe the bug
I have converted the ssdlite_mobilenet_v2_coco model from the tensorflow detection model zoo (it can be found here) to onnx. Now I'm trying to load the model using ML.NET and get an error.
System information
To Reproduce
```csharp
public struct ImageSettings
{
    public const int ImageWidth = 300;
    public const int ImageHeight = 300;
    public const bool ChannelLast = true;
}
```
Expected behavior
The model should load correctly.
Screenshots
Additional context
With OnnxRuntime 0.4.0 I got
```
2019-06-12 14:32:53.802528 [W:onnxruntime:InferenceSession, session_state_initializer.cc:502 SaveInputOutputNamesToNodeMapping] Graph input with name i__19 is not associated with a node.
2019-06-12 14:32:53.802581 [W:onnxruntime:InferenceSession, session_state_initializer.cc:502 SaveInputOutputNamesToNodeMapping] Graph input with name cond__21 is not associated with a node.
2019-06-12 14:32:54.004760 [W:onnxruntime:InferenceSession, session_state_initializer.cc:502 SaveInputOutputNamesToNodeMapping] Graph input with name i__42 is not associated with a node.
2019-06-12 14:32:54.004790 [W:onnxruntime:InferenceSession, session_state_initializer.cc:502 SaveInputOutputNamesToNodeMapping] Graph input with name cond__44 is not associated with a node.
2019-06-12 14:32:54.005072 [W:onnxruntime:InferenceSession, session_state_initializer.cc:502 SaveInputOutputNamesToNodeMapping] Graph input with name i is not associated with a node.
Onnx type not supported
```
With earlier versions I got
```
Error initializing model :Microsoft.ML.OnnxRuntime.OnnxRuntimeException: [ErrorCode:InvalidGraph] Load model from /my/path/to/file/ssd_mobilenet.onnx failed:Node:Preprocessor/map/strided_slice Node (Preprocessor/map/strided_slice) has input size 4 not in range [min=1, max=1].
   at Microsoft.ML.OnnxRuntime.InferenceSession..ctor(String modelPath, SessionOptions options) in C:\agent\_work\6\s\csharp\src\Microsoft.ML.OnnxRuntime\InferenceSession.cs:line 83
   at Microsoft.ML.OnnxRuntime.InferenceSession..ctor(String modelPath) in C:\agent\_work\6\s\csharp\src\Microsoft.ML.OnnxRuntime\InferenceSession.cs:line 31
   at Microsoft.ML.Transforms.Onnx.OnnxModel..ctor(String modelFile, Nullable`1 gpuDeviceId, Boolean fallbackToCpu)
   at Microsoft.ML.Transforms.Onnx.OnnxTransformer..ctor(IHostEnvironment env, Options options, Byte[] modelBytes)
```