Issue with onnx model
Hi, I used the PGNet inference model and got good results. Then I converted it to an ONNX model with Paddle2ONNX; the conversion succeeded, but I could not reproduce the results with the ONNX model. I'm facing the issue below:
```
(1, 3, 300, 300)
NodeArg(name='x', type='tensor(float)', shape=[None, 3, None, None])
---------------------------------------------------------------------------
RuntimeException                          Traceback (most recent call last)
<ipython-input-21-90e728a110ab> in <module>()
     17 print(ort_sess.get_inputs()[0])
     18 ort_inputs = {ort_sess.get_inputs()[0].name: x}
---> 19 ort_outs = ort_sess.run(None, ort_inputs)
     20 print(ort_outs)
     21 print("Exported model has been predict by ONNXRuntime!")

/usr/local/lib/python3.7/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
    186         output_names = [output.name for output in self._outputs_meta]
    187         try:
--> 188             return self._sess.run(output_names, input_feed, run_options)
    189         except C.EPFail as err:
    190             if self._enable_fallback:

RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Add node. Name:'Add_30' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:497 void onnxruntime::BroadcastIterator::Init(ptrdiff_t, ptrdiff_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 19 by 20
```
Here `(1, 3, 300, 300)` is the input I feed to the ONNX model, and `[None, 3, None, None]` is the shape the model expects, so the input itself should be valid. Could you please share any thoughts on this issue? Thanks.
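The "19 by 20" in the error suggests a spatial-size mismatch inside the network rather than a problem with the input tensor itself: with a 300 × 300 input, two downsampling branches that are later added (for example an FPN-style lateral connection and an upsampled deeper feature map) can end up with different side lengths. A rough, hypothetical sketch of the arithmetic (the stride values here are assumptions for illustration, not taken from PGNet's actual architecture):

```python
import math

def branch_sizes(size, strides=(16, 32)):
    """Side lengths of two feature maps that get summed by an Add node:
    a stride-16 lateral branch vs. a stride-32 branch upsampled x2.
    Assumes 'same' padding, i.e. ceil division at each stride."""
    lateral = math.ceil(size / strides[0])
    upsampled = 2 * math.ceil(size / strides[1])
    return lateral, upsampled

print(branch_sizes(300))  # (19, 20) -- mismatched, so the Add node fails
print(branch_sizes(512))  # (32, 32) -- matched, so the Add node succeeds
```

This would explain why the exact same graph runs fine for some input sizes and crashes for others even though the declared input shape is fully dynamic.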
Issue Analytics
- Created 2 years ago
- Comments: 6
@ujjawalcse It seems like there are some problems with Paddle2ONNX. Could you upload your PGNet PaddlePaddle model (saved in inference-model format) and the converted ONNX model here?
You can also try an input size of 256 × 256 or 512 × 512.
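If you want to stay close to 300 × 300, another option (assuming the mismatch comes from the network's downsampling strides) is to round the spatial dimensions up to a multiple of 32 before building the input tensor. A small helper sketch, where `pad_to_stride` is a hypothetical name:

```python
import math

def pad_to_stride(h, w, stride=32):
    """Round spatial dims up to the nearest multiple of `stride`."""
    return stride * math.ceil(h / stride), stride * math.ceil(w / stride)

# Resize or pad the image to this shape before constructing the
# (1, 3, H, W) tensor passed to ort_sess.run(...)
print(pad_to_stride(300, 300))  # (320, 320)
```

Sizes that are already multiples of 32, such as 256 and 512, pass through unchanged, which is consistent with the suggestion above.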